#80000 hours
gfsolar · 2 years
Text
Got this in the mail yesterday.
Tumblr media
The big book in front of you... Not the other stuff. The book by Toby Ord. Yes. That one.
2 notes · View notes
metastable1 · 8 months
Text
Quotes from transcript for future reference:
Mustafa Suleyman: Well, I think it’s really important, especially for this audience, to distinguish between the model itself being dangerous and the potential uses of these technologies enabling people who have bad intentions to do serious harm at scale. And they’re really fundamentally different. Because going back to your first question, the reason I said that I don’t see any evidence that we’re on a trajectory where we have to slow down capabilities development because there’s a chance of runaway intelligence explosion, or runaway recursive self-improvement, or some inherent property of the model on a standalone basis having the potential in and of itself to cause mass harm: I still don’t see that, and I stand by a decade timeframe.
[...]
Rob Wiblin: OK, so maybe the idea is in the short term, over the next couple of years, we need to worry about misuse: a model with human assistance directed to do bad things, that’s an imminent issue. Whereas a model running somewhat out of control and acting more autonomously without human support and against human efforts to control it, that is more something that we might think about in 10 years’ time and beyond. That’s your guess?

Mustafa Suleyman: That’s definitely my take. That is the key distinction between misuse and autonomy. And I think that there are some capabilities which we need to track, because those capabilities increase the likelihood that that 10-year event might be sooner. For example, if models are designed to have the ability to operate autonomously by default: so as an inherent design requirement, we’re engineering the ability to go off and design its own goals, to learn to use arbitrary tools to make decisions completely independently of human oversight. And then the second capability related to that is obviously recursive self-improvement: if models are designed to update their own code, to retrain themselves, and produce fresh weights as a result of new fine-tuning data or new interaction data of any kind from their environment, be it simulated or real world. These are the kinds of capabilities that should give us pause for thought.
[...]
And at Inflection, we’re actually not working on either of those capabilities, recursive self-improvement and autonomy. I’ve chosen a product direction which I think can enable us to be extremely successful without needing to work on that. I mean, we’re not an AGI company; we’re not trying to build a superintelligence. We’re trying to build a personal AI. Now, that is going to have very capable AI-like qualities; it is going to learn from human feedback; it is going to synthesise information for you in ways that seem magical and surprising; it’s going to have a lot of access to your personal information. But I think the quest to build general-purpose learning agents which have the ability to perform well in a wide range of environments, that can operate autonomously, that can formulate their own goals, that can identify new information in environments, new reward signals, and learn to use that as self supervision to update their own weights over time: this is a completely different quality of agent, that is quite different, I think, to a personal AI product.
(Emphasis mine.) Very admirable, but that means their AI will be less general, therefore less capable, therefore less useful, therefore less appealing and less economically valuable. They will be outcompeted by other companies who will pursue generality and agency.
On the open source thing: I think I’ve come out quite clearly pointing out the risks of large-scale access. I think I called it “naive open source – in 20 years’ time.” So what that means is if we just continue to open source absolutely everything for every new generation of frontier models, then it’s quite likely that we’re going to see a rapid proliferation of power. These are state-like powers which enable small groups of actors, or maybe even individuals, to have an unprecedented one-to-many impact in the world.
[...]
We’re going to see the same trajectory with respect to access to the ability to influence the world. You can think of it as related to my Modern Turing Test that I proposed around artificial capable AI: like machines that go from being evaluated on the basis of what they say — you know, the imitation test of the original Turing test — to evaluating machines on the basis of what they can do. Can they use APIs? How persuasive are they of other humans? Can they interact with other AIs to get them to do things? So if everybody gets that power, that starts to look like individuals having the power of organisations or even states. I’m talking about models that are two or three or maybe four orders of magnitude on from where we are. And we’re not far away from that. We’re going to be training models that are 1,000x larger than they currently are in the next three years. Even at Inflection, with the compute that we have, [our models] will be 100x larger than the current frontier models in the next 18 months. Although I took a lot of heat on the open source thing, I clearly wasn’t talking about today’s models: I was talking about future generations. And I still think it’s right, and I stand by that — because I think that if we don’t have that conversation, then we end up basically putting massively chaotic destabilising tools in the hands of absolutely everybody. How you do that in practice, somebody referred to it as like trying to stop rain by catching it in your hands. Which I think is a very good rebuttal; it’s absolutely spot on: of course this is insanely hard. I’m not saying that it’s not difficult. I’m saying that it’s the conversation that we have to be having.
(Emphasis mine) [...]
And I think that for open sourcing Llama 2, I personally don’t see that we’ve increased the existential risk to the world or any catastrophic harm to the world in a material way whatsoever. I think it’s actually good that they’re out there.
[...]
Rob Wiblin: Yeah. While you were involved with DeepMind and Google, you tried to get a broader range of people involved in decision making on AI, at least inasmuch as it affected broader society. But in the book you describe how those efforts more or less came to naught. How high a priority is solving that problem relative to the other challenges that you talk about in the book?

Mustafa Suleyman: It’s a good question. I honestly spent a huge amount of my time over the 10 years that I was at DeepMind trying to put more external oversight as a core function of governance in the way that we build these technologies. And it was a pretty painful exercise. Naturally, power doesn’t want that. And although I think Google is sort of well-intentioned, it still functions as a kind of traditional bureaucracy. Unfortunately, when we set up the Google ethics board, it was really in a climate when cancel culture was at its absolute peak. And our view was that we would basically have these nine independent members that, although they didn’t have legal powers to block a technology or to investigate beyond their scope, and they were dependent on what we, as Google DeepMind, showed them, it still was a significant step to providing external oversight on sensitive technologies that we were developing. But I think some people on Twitter and elsewhere felt that because we had appointed a conservative, the president of the Heritage Foundation, and she had made some transphobic and homophobic remarks in the past, quite serious ones, that meant that she should be cancelled, and she should be withdrawn from the board. And so within a few days of announcing it, people started campaigning on university campuses to force other people to step down from the board, because their presence on the board was complicit and implied that they condoned her views and stuff like this. And I just think that was a complete travesty, and really upsetting because we’d spent two years trying to get this board going, and it was a first step towards real outside scrutiny over very sensitive technologies that were being developed. And unfortunately, it all ended within a week, as three members of the nine stood down, and then eventually she stood down, and then we lost half the board in a week and it was just completely untenable. And then the company turned around and were like, “Why are we messing around with this? This is a waste of time.”

Rob Wiblin: “What a pain in the butt.”

Mustafa Suleyman: “Why would we bother? What a pain in the ass.”
[...]
What wasn’t effective, I can tell you, was the obsession with superintelligence. I honestly think that did a seismic distraction — if not disservice — to the actual debate. There were many more practical things, because I think a lot of people who heard that in policy circles just thought, “Well, this is not for me. This is completely speculative. What do you mean, ‘recursive self-improvement’? What do you mean, ‘AGI superintelligence taking over’?” The number of people who have barely heard the phrase “AGI” but know about paperclips is just unbelievable. Completely nontechnical people would be like, “Yeah, I’ve heard about the paperclip thing. What, you think that’s likely?” Like, “Oh, geez, that is… Stop talking about paperclips!” So I think avoid that side of things: focus on misuse.
This does not speak well of the power centers of our civilization. [...]
Rob Wiblin: Yeah. From your many years in the industry, do you understand the internal politics of AI labs that have staff who range all the way from being incredibly worried about AI advances to people who just think that there’s no problem at all, and just want everything to go as quickly as possible? I would have, as an outsider, expected that these groups would end up in conflict over strategy pretty often. But at least from my vantage point, I haven’t heard about that happening very much. Things seem to run remarkably smoothly.

Mustafa Suleyman: Yeah. I don’t know. I think the general view of people who really care about AI safety inside labs — like myself, and others at OpenAI, and to a large extent DeepMind too — is that the only way that you can really make progress on safety is that you actually have to be building it. Unless you are at the coalface, really experimenting with the latest capabilities, and you have resources to actually try to mitigate some of the harms that you see arising in those capabilities, then you’re always going to be playing catchup by a couple of years. I’m pretty confident that open source is going to consistently stay two to three years behind the frontier for quite a while, at least the next five years. I mean, at some point, there really will be mega multibillion-dollar training runs, but I actually think we’re farther away from that than people realise. I think people’s math is often wrong on these things.

Rob Wiblin: Can you explain that?

Mustafa Suleyman: People talk about us getting to a $10 billion training run. That math does not add up. We’re not getting to a single training run that costs $10 billion. I mean, that is many years away, five years away, at least.

Rob Wiblin: Interesting. Is it maybe that they’re thinking that it’ll have the equivalent compute of $10 billion in 2022 chips or something like that? Is maybe that where the confusion is coming in, that they’re thinking about it in terms of the compute increase? Because they may be thinking there’s going to be a training run that involves 100 times as much compute, but by the time that happens, it doesn’t cost anywhere near 100 times as much money.

Mustafa Suleyman: Well, partly it’s that. It could well be that, but then it’s not going to be 10x less: it’ll be 2-3x less, because each new generation of chip roughly gives you 2-3x more FLOPS per dollar. But yeah, I’ve heard that number bandied around, and I can’t figure out how you squeeze $10 billion worth of training into six months, unless you’re going to train for three years or something.

Rob Wiblin: Yeah, that’s unlikely.

Mustafa Suleyman: Yeah, it’s pretty unlikely. But in any case, I think it is super interesting that open source is so close. And it’s not just open source as a result of open sourcing frontier models like Llama 2 or Falcon or these things. It is more interesting, actually, that these models are going to get smaller and more efficient to train. So if you consider that GPT-3 was 175 billion parameters in the summer of 2020, that was like three years ago, and people are now training GPT-3-like capabilities at 1.5 billion parameters or 2 billion parameters. Which still may cost a fair amount to train, because the total training compute doesn’t go down hugely, but certainly the serving compute goes down a lot and therefore many more people can use those models more cheaply, and therefore experiment with them. And I think that trajectory, to me, feels like it’s going to continue for at least the next three to five years.
(Emphasis mine) [...]
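(A rough back-of-envelope sketch of the chip arithmetic quoted above. The 2-3x FLOPS-per-dollar figure per chip generation is Suleyman's; treating the gap to a 100x-compute run as a single chip generation is my illustrative assumption.)

```python
# If a future run uses 100x today's training compute, its dollar cost
# grows by less than 100x, because newer chips give more FLOPS per dollar.
compute_multiple = 100        # 100x today's training compute
flops_per_dollar_gain = 2.5   # midpoint of the quoted 2-3x per generation
chip_generations = 1          # assumption: one new chip generation by then

cost_multiple = compute_multiple / flops_per_dollar_gain ** chip_generations
print(f"~{cost_multiple:.0f}x the dollar cost, not {compute_multiple}x")  # ~40x
```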
But as we said earlier, I’m not in the AGI intelligence explosion camp that thinks that just by developing models with these capabilities, suddenly it gets out of the box, deceives us, persuades us to go and get access to more resources, gets to inadvertently update its own goals. I think this kind of anthropomorphism is the wrong metaphor. I think it is a distraction. So the training run in itself, I don’t think is dangerous at that scale. I really don’t. And the second thing to think about is there are these overwhelming incentives which drive the creation of these models: these huge geopolitical incentives, the huge desire to research these things in open source, as we’ve just discussed. So the entire ecosystem of creation defaults to production. Me not participating certainly doesn’t reduce the likelihood that these models get developed. So I think the best thing that we can do is try to develop them and do so safely. And at the moment, when we do need to step back from specific capabilities like the ones I mentioned — recursive self-improvement and autonomy — then I will. And we should.
So Suleyman thinks it's OK to train bigger models because training at that scale isn't dangerous by itself, because his abstaining wouldn't change other players' behavior, and because he doesn't intend to implement RSI or autonomy. [...]
Rob Wiblin: Yeah. Many people, including me, were super blown away by the jump from GPT-3.5 to GPT-4. Do you think people are going to be blown away again in the next year by the leap to these 100x the compute of GPT-4 models?

Mustafa Suleyman: I think that what people forget is that the difference between 3.5 and 4 is 5x. So I guess just because of our human bias, we just assume that this is a tiny increment. It’s not. It’s a huge multiple of total training FLOPS. So the difference between 4 and 4.5 will itself be enormous. I mean, we’re going to be significantly larger than 4 in time as well, once we’re finished with our training run — and it really is much, much better.
[...]
It’s much better that we’re just transparent about it. We’re training models that are bigger than GPT-4, right? We have 6,000 H100s in operation today, training models. By December, we will have 22,000 H100s fully operational. And every month between now and then, we’re adding 1,000 to 2,000 H100s. So people can work out what that enables us to train by spring, by summer of next year, and we’ll continue training larger models. And I think that’s the right way to go about it. Just be super open and transparent. I think Google DeepMind should do the same thing. They should declare how many FLOPS Gemini is trained on.
1 note · View note
ahwait-no-yes · 1 year
Text
Tumblr media
doodle from a while ago but i thought it was kinda fitting given the current jp event lol
60 notes · View notes
Text
I WANT TO TALK ABOUT DANGANRONPA SO BADLY
2 notes · View notes
raytoroapologist · 2 months
Note
Is drawfee getting political on their stream or is it just a fundraiser for Palestinian children
i mean raising money for Palestine and calling for a ceasefire is inherently political? not quite sure what you're asking
they're mostly drawing fun prompts and taking breaks every couple minutes to encourage people to donate or otherwise help out/contact representatives/etc
2 notes · View notes
coffee-keith · 7 months
Text
I'm very strongly considering going into nuclear non-proliferation from the physics research side of things. But that would also mean I would need to learn to handle the existential dread of trying to prevent nuclear war with my work. Also the dread of my work somehow being twisted to make nuclear weapons. And it's hurting my brain as I'm trying to look into it more today. But then again, if I stayed in fundamental nuclear physics, where I am now, that still has a non-zero chance of contributing to the development of weapons, but that would be hundreds of years down the line.
2 notes · View notes
meyhew · 1 year
Text
pomegranate is so good they need to start selling fresh seeds by the pound
12 notes · View notes
thisiskatsblog · 11 months
Text
Tumblr media Tumblr media
My city seems to be ready for you Harry, Harries.
2 notes · View notes
fearsgod · 1 year
Text
Tumblr media
I've been awake since 7 am YESTERDAY and went out to lunch, the art store, got home and edited both my carrds before speed running 8 replies ... someone sedate me
6 notes · View notes
beggingheaven · 2 years
Text
The only thing that gives me solace about the fact that my youth is gone and now I am just washed up and undesirable is that at least I have literally over 30,000 photos from when I was hot and skinny to obsess over ☺︎
0 notes
thisisnotthenerd · 3 months
Text
the ratgrinders' potential levels
cannot believe i was right about the xp reqs. the bad kids & the seven get 'special treatment' (milestone leveling and saving the world), while others have to work with xp. which tells you a lot about why people fled during prompocalypse.
ok getting into the algebra now: the rat grinders have gone into the far haven woods every day for the last two years: 3 hours after school on weekdays, and 9 hours/day on weekends. presumably they keep this up during the summer. that's roughly 104 weeks × (5×3 + 2×9) hours = ~3,432 hours of grinding, or about 23 kills an hour to hit 80,000.
they have supposedly defeated 80,000 or more of three types of creatures: rats, spiders, and twig blights. there are some variations to what these could be, so here's a list of what this could encompass, assuming the ratgrinders are not facing creatures over CR 1.
giant rat: CR 1/8, 25 XP
swarm of rats: CR 1/4, 50 XP
giant wolf spider: CR 1/8, 25 XP
swarm of spiders: CR 1/2, 100 XP
giant flying spider: CR 1, 200 XP
giant spider: CR 1, 200 XP
ice spider: CR 1, 200 XP
twig blight: CR 1/8, 25 XP
needle blight: CR 1/4, 50 XP
thorn slinger: CR 1/2, 100 XP
vine blight: CR 1/2, 100 XP
razorvine blight: CR 1, 200 XP
thorny: CR 1, 200 XP
the full list is a little difficult to do calculations on, so let's condense it. assume a quarter of the 80000 creatures were CR 1/8, a quarter were CR 1/4, so on and so forth.
how much xp would they earn? how much would they level for the amount they ground? grinded? for?
critical assumption here: in the games i've played, we've always done milestone or zeroed-out xp with each level, i.e. after earning 300 xp to get to level 2, you have to earn the full 900 xp to get to level 3, not 600. this analysis assumes you have to earn the next level's xp reqs on top of your current total; call that the compounding chart, as opposed to the standard table, where one running total is all you track. i'm including the chart here to clarify (the +N is the proficiency bonus at that level, and the totals column is derived in the script after the list):
level 1: 0 XP, +2, total 0 XP
level 2: 300 XP, +2, total 300 XP
level 3: 900 XP, +2, total 1200 XP
level 4: 2700 XP, +2, total 3900 XP
level 5: 6500 XP, +3, total 10400 XP
level 6: 14000 XP, +3, total 24400 XP
level 7: 23000 XP, +3, total 47400 XP
level 8: 34000 XP, +3, total 81400 XP
level 9: 48000 XP, +4, total 129400 XP
level 10: 64000 XP, +4, total 193400 XP
level 11: 85000 XP, +4, total 278400 XP
level 12: 100000 XP, +4, total 378400 XP
level 13: 120000 XP, +5, total 498400 XP
level 14: 140000 XP, +5, total 638400 XP
level 15: 165000 XP, +5, total 803400 XP
level 16: 195000 XP, +5, total 998400 XP
level 17: 225000 XP, +6, total 1223400 XP
level 18: 265000 XP, +6, total 1488400 XP
level 19: 305000 XP, +6, total 1793400 XP
level 20: 355000 XP, +6, total 2148400 XP
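for anyone who wants to check the chart or swap in a different table, here's a minimal python sketch; the standard 5e thresholds are hardcoded, and the compounding totals are derived from them:

```python
# standard 5e XP thresholds: total XP needed to *be* each level, 1 through 20
RAW_TABLE = [
    0, 300, 900, 2700, 6500, 14000, 23000, 34000, 48000, 64000,
    85000, 100000, 120000, 140000, 165000, 195000, 225000, 265000,
    305000, 355000,
]

# compounding chart: each level's requirement is earned from zero,
# so the running total is the sum of every threshold so far
COMPOUNDING = []
running = 0
for req in RAW_TABLE:
    running += req
    COMPOUNDING.append(running)

def level_at(xp, table):
    """Highest level whose total requirement is <= xp."""
    return max(lvl for lvl, need in enumerate(table, start=1) if xp >= need)

print(COMPOUNDING[-1])                 # 2148400, matching level 20 above
print(level_at(500_000, RAW_TABLE))    # 20 on the standard table
print(level_at(500_000, COMPOUNDING))  # 13 on the compounding chart
```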
if we went with the standard table, based on the number of creatures the bad kids have defeated, they'd be getting up there in xp; we know they've had opportunities to defeat creatures outside of the quests that we've seen, given the oneshots. thus, i'm going with the compounding scheme, because otherwise the ratgrinders would be 19th level, and i don't think they are: it would make any pvp setup super unbalanced, which is neither fun to play nor watch. the compounding scheme puts them on a little more even ground and emphasizes the amount of work it takes to grind xp for levels compared to milestone leveling.
for the CR 1/8s: assuming roughly 20,000 creatures at 25 XP each, that's 500,000 xp. pooled, that's enough to reach level 13 on the compounding chart, on just those creatures. divided 6 ways, assuming the ratgrinders have 6 members, it's 83,333.33 each, which is enough to get you to 10th level on the standard table and 8th on the compounding chart.
this scales up to the 1/4s, 1/2s and the 1s since the xp gains double for each challenge rating rather than plateauing as they do at higher levels.
for the CR 1/4s: 20,000 creatures at 50 XP each is 1,000,000 xp, enough pooled to reach level 16 on the compounding chart on just those creatures. divided 6 ways, assuming the ratgrinders have 6 members, it's 166,666.66 each, which is enough to get you to 15th level on the standard table and 9th on the compounding chart.
for the CR 1/2s: 20,000 creatures at 100 XP each is 2,000,000 xp. divided 6 ways, assuming the ratgrinders have 6 members, it's 333,333.33 each, which is enough to get you to 19th level on the standard table and 11th on the compounding chart.
and for the CR 1s: 20,000 creatures at 200 XP each is 4,000,000 xp, well over what you'd need to get to level 20 even pooled. divided 6 ways, assuming the ratgrinders have 6 members, it's 666,666.66 each, which is well over 20th level on the standard table and 14th on the compounding chart.
using this estimate and adding all of this up, each member of the ratgrinders would have gathered about 1,250,000 xp: enough to be level 20 on the standard table, and level 17 on the compounding chart.
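and a quick sketch reproducing the quarter-split numbers above (the even split across the four CRs and the 6-member party are the same assumptions as in the text):

```python
from itertools import accumulate

# same tables as the earlier snippet
RAW_TABLE = [0, 300, 900, 2700, 6500, 14000, 23000, 34000, 48000, 64000,
             85000, 100000, 120000, 140000, 165000, 195000, 225000, 265000,
             305000, 355000]
COMPOUNDING = list(accumulate(RAW_TABLE))

def level_at(xp, table):
    return max(lvl for lvl, need in enumerate(table, start=1) if xp >= need)

XP_PER_CR = {"1/8": 25, "1/4": 50, "1/2": 100, "1": 200}
CREATURES = 80_000
MEMBERS = 6

per_member = 0
for cr, xp in XP_PER_CR.items():
    pool = (CREATURES // 4) * xp  # a quarter of the kills at each CR
    share = pool / MEMBERS
    per_member += share
    print(f"CR {cr}: {pool:,} xp pooled, {share:,.2f} each")

print(f"total per member: {per_member:,.0f} xp")  # 1,250,000
print(level_at(per_member, RAW_TABLE))            # level 20, standard table
print(level_at(per_member, COMPOUNDING))          # level 17, compounding chart
```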
obviously the actual numbers would scale differently; initially, they would likely have to tackle these creatures as a party, but over time they would take care of them individually. this is a bunch of kids doing the intro-to-class assignment, as every assignment, for two years straight.
level 20 seems extreme for the aguefort adventuring academy; let's scale it down a bit. the creatures specifically mentioned are probably giant rats, giant wolf spiders, and twig blights, based on the descriptions from jawbone.
all of these are CR 1/8, or 25 XP each. 80,000 kills would give an xp total of 2,000,000, which would put each of the six ratgrinders at around 11th level on the compounding chart, a little higher than the bad kids at the moment. however, since their fighting prowess has scaled up, and they're probably out in elmville actively hindering the bad kids in some way, that level is very likely to increase.
what we saw in the episode
now the sticking point is mary ann rolling a 35. we know she got some kind of transmutation buff, and there was a little tricky wording from brennan: fabian had enhance ability up, which is a transmutation spell, but brennan did not say her buff was enhance ability.
mary ann is a barbarian, so she already gets advantage on athletics if she's raging, which i assume she was. the buff probably wouldn't be something that grants advantage.
assuming the lower estimate of 11th level, mary ann would get a +4 proficiency bonus, and i'm assuming she has 20 in strength, so +5 to her strength-based skills, for a total of +9. at the high estimate of level 17, she would have a +6 proficiency bonus, for a total of +11 to athletics. that is still not high enough to get a 35, even on a nat 20, which brennan would have declared if he had rolled one. she could conceivably accomplish this with the brawny feat, which grants expertise in the athletics skill; at level 17 that would give her a +17, meaning she could hit a 35 on an 18.
or, the buff was something like skill empowerment, which is a 5th level transmutation spell that gives the target expertise in a skill that they already have proficiency in. this spell is available to bards and wizards, among other classes, both of which we presume are in the ratgrinders. ruben could have cast skill empowerment on mary ann and given her bardic inspiration (lower estimate: d10, higher estimate: d12), both of which would have enabled that 35.
judging by the implication that she could not accomplish that feat without some kind of buff, i'm going with the latter explanation.
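for anyone who wants to fiddle with the assumptions, a small sketch of the check math; 20 strength, the standard proficiency progression, and expertise doubling proficiency are the post's assumptions, since the actual buff is unconfirmed:

```python
def prof_bonus(level):
    # 5e proficiency progression: +2 at levels 1-4, +3 at 5-8, ... +6 at 17-20
    return 2 + (level - 1) // 4

def athletics(level, str_mod=5, expertise=False):
    return str_mod + prof_bonus(level) * (2 if expertise else 1)

for level in (11, 17):
    plain = athletics(level)
    expert = athletics(level, expertise=True)
    print(f"level {level}: +{plain} plain (nat-20 max {20 + plain}), "
          f"+{expert} with expertise (35 on a {35 - expert})")

# at level 11, +13 with expertise tops out at 33, so the d10 bardic
# inspiration die is what carries the roll to 35; at level 17, +17
# with expertise hits 35 on a natural 18, as worked out above.
```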
anyway i did too much math for this to not go in the stats series, or the school series. so this will be added to the spreadsheet later.
i hope this is useful.
421 notes · View notes
gardenerofstars · 4 months
Text
sorry for being on tumblr for 80000 hours today i will depart now
19 notes · View notes
metastable1 · 2 years
Link
0 notes
reliquarian · 26 days
Text
being at work is thinking about the 80000 things you want to do that arent work, not being at work is staring at a wall for hours while you spiral
8 notes · View notes
nicole-the-hololynx · 6 months
Text
anyways shoutouts to the latest staff wave for coming in, fucking up the site beyond recognition and then leaving. your 80000 man hours of work will never be forgotten since they're the reason this site no longer has a chronological search function o7
6 notes · View notes
Note
Dr. C x Momdebra. Thoughts?
you sent me this while i was not sober i wish you couldve heard me laugh
um ok so i actually have lots of thoughts after thinking about it all night. my FIRST thought was that abracadaniel's "were you guys talkin about me ',:D" makes me think she has something going on with ABRACADANIEL, but (brushes that aside) hes gay smh. but he clearly wants cal's approval.
WHICH HE'D WANT IF SHE WAS DATING HIS SISTERR OHHH
also other thoughts. it explains why she was so completely bland about cadebra this entire time. youd think shed either love her or hate her, given that deb's a main character, the only other member of the Weak House. but she ignores her during class entirely. no playing favorites in doctor cal's class, no sir, ignores the whole pep thing. she DOES very very gently go check on her when pep runs off and blaines out there smearing his name.
well. i think itd be fun. i mean. i am. very angry at this lady for everything she did and she is not very nice (WHICH LADY? YES). but cal has this very supportive kind veneer i think she really did love those kids to some extent (bro it stops when you leave them lying in an alley alone to die. but she DID just want everyone to have a good time in her class. and "ohh, hey, ease up" lives in my mind forever, as insidious as it was, given who she was saying it to and when. "thats my favorite student you're talking about." augh. okay. that really does hit.)
BUT i think itd be fun. because momdebra is soooo overbearing and intense, mom wants cadebra to push herself so far, mom is insistent that cadebra follow a specific path and track specific numbers to make herself the perfect wizard as dictated by arbitrary levels and abilities. whereas, doctorate though she may have, cal seems so much more holistic, and just wants everyone to follow their own path and learn their own way, and fill their heads with joy and knowledge, she wants everyone to delight in the pursuit of learning and she will stand out in the rain and the mud with you and gently explain the cultural and historical significance of every artifact you find for hours until you are satisfied. like thats her deal. i do not forgive her for the things she did (i need to remind myself of that because im MAD ! she left blaine there defenseless she WATCHED spader die!) but i do think she has a truly kind side to her because learning and pleasure in doing so are actually important to her. just, not as important as bringing about the second age of terror. anyways. sorry this turned into me trying to unravel my cal thoughts.
so i think it would be really funny to see her and momdebra interact over deb. as partners. like mom keeps sending a miserable deb back to work on her incantations over and over and cal is like oh, my goodness, are you still on that? deb, let's go look at some bugs, in the park. (there's a secret hidden lesson buried in the bug-observing expedition) (also she tells her what a good job she's doing and how she can be a wizard in any way she wants no matter what her mom says) (it's corny as hell) (cuz that's cal for you). i would LOVE to think this chills momdebra out a little bit and she learns Acceptance. but i doubt it. she is just blinded by her cool 20 foot tall girlfriend who is so super good at magic. did you know doctor caledonius is level 80000, cadebra??? YOU could be level 80000 so EASILY if you just APPLIED YOURSELF.
anyways it's so sad we cant ship damn near any of the fuckin adults together without running into "SO THEY WERE OKAY WITH LETTING THEIR PARTNER'S DAUGHTER DIE. SO THEY WERE OKAY WITH HELPING THAT TO HAPPEN." like. bro. this is so sad. cal threw deb at her friends and said "get rid of her". cadebra doesnt even have like,,,, any cool adults to run to about this does she. cal in this scenario was probably a level-headed nice person in her life who seemed to support her and want her to be herself. and she went to brain wizard when she was scared that one time. and now what. she went to abracadaniel when she was scared that other time but whats he gonna do??? huddle her under a blanket and go yell at his sister for dating a maniac until he remembers he's too scared of her to do that and then come to hide under the blanket with deb.
i dunno how to end this post i got words. my thoughts: 9/10 excellents fun scenario to play in. i absolutely wanna see what that does to cadebra's life.
5 notes · View notes