unknought · 1 month
Text
Hey you should read Tatterdemalion, a story I wrote a few years ago. It's a post-apocalyptic fantasy about people trying to survive surrounded by things much bigger and scarier than them --angels and eldritch horrors and old machines-- in a world that's been repeatedly torn open and stitched back together. It's also, if you squint just a bit, a story about growing up queer in a small town.
It's unfinished and it's never going to be finished, at least not in its current form. But I think the ten thousand words of it that I did write are pretty good on their own.
Here's the first chapter and here's the table of contents.
content warnings below the cut:
body horror and parasites
coercive sex and attempted rape, neither described in explicit detail
death
hell
giant spiders
11 notes · View notes
unknought · 2 months
Text
Yeah I think "shit's fucked up yo" is the core of it, or maybe a bit more specifically it's conveying a particular sense of confused helpless outrage that is a common reaction to paying attention to the ways in which shit is fucked up. And taking a particular emotion and saying "this is what it feels like" is a pretty common and valuable thing for short stories to do.
The story being an Omelas response does incline us to look for a thesis, but if it had actually been structured around proving some point I think it would have been a lot weaker. And actually I think the subverted expectation there, where we're looking for a "meaning" and largely failing to find one, is really effective at reinforcing the overall feeling of the story.
(here's the story, for context)
re that omelas story yesterday - decent enough read, but on some thought i don't think it really offers much that the original didn't. thesis basically seems to be "shit's fucked up yo". however, the bar is so far in the floor for 'omelas response' after that abomination jemisin put out a few years ago that I was willing to give it a lot of (perhaps too much) slack lmao
49 notes · View notes
unknought · 3 months
Text
By "things people want" are you including things like wanting not to work twelve-hour shifts and wanting not to die in a global-warming-caused heat wave? If you're not, if you just mean something like "things people want (consumption-side)" then degrowthers do actually think there needs to be less of that; they don't think it's sufficient to try to produce the same consumer value more efficiently.
If you are including things like that, then the distinction between things people want and metrics like GDP is very far from being a mere accounting question; it's the entire bone of contention. Degrowth is in many cases more or less characterized by seeing the gap between GDP (or similar metrics of economic growth) and human flourishing as larger than other people do, to the extent that decreasing GDP is compatible with, and indeed necessary for, making people collectively better off.
economic disextensification
Okay, ugly word. But how many times have you heard this conversation?
degrowther: we should degrow the economy
growther: that would make people's lives worse
degrowther: not necessarily! we could reduce our natural resource use while technical progress allows us to transform it into utils more effectively, so that quality of life keeps going up
growther: that's growth dumbass
Somehow, the conversation always seems to stop there (at least in my experience), even though it's where it might start getting useful!
Economic growth can be broken into extensive growth (more people and natural resources) and intensive growth (better technology lets us do more with the same factor inputs.) So what the soi-disant degrowther is arguing for is disextensification.
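One way to make that split precise is standard growth accounting; this is a sketch under the assumption of a Solow-style aggregate production function, which isn't something the post itself commits to:

```latex
\[
Y = A \cdot F(L, R)
\quad\Longrightarrow\quad
g_Y \;\approx\; \underbrace{g_A}_{\text{intensive}} \;+\; \underbrace{\varepsilon_L\, g_L + \varepsilon_R\, g_R}_{\text{extensive}},
\]
where $L$ is population/labor, $R$ is natural-resource input, $A$ is total factor
productivity, the $g$'s are growth rates, and $\varepsilon_L, \varepsilon_R$ are output
elasticities; disextensification is then the extensive term going negative while
$g_A$ stays positive.
```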
Is disextensification desirable? This intersects with more targeted questions like:
How much can extensive and intensive growth be decoupled? (In a Malthusian model they are inversely related; in an endogenous growth model they're closely related)
What kinds of ecological limits on our civilization are we running up against?
What level of intensive/TFP growth can we expect such that total per capita growth remains positive, if we were to experience disextensification?
Does it make sense to think about total extensive use overall, or does it make more sense to just take particular cases of land use, etc, that may have undesirable effects and balance those against their economic gains? (Maybe turning rainforest into cattle raising is net negative, but turning grasslands into wheat fields is net positive.)
I have intuitions on all of these, but not firm answers.
(Some deep green types might say "fuck it, burn it all down" on semi-deontological grounds, and as with their counterparts e/acc I really can see where the intuition is coming from, but as with e/acc I can't go there because I love humanity and want the human project to continue.)
228 notes · View notes
unknought · 3 months
Text
Yeah I think in the OP, most of the work is happening in the jump from the degrowther saying they want more "utils" to this being characterized as wanting to "do more" with the same resources (implicitly meaning more production, otherwise it doesn't make sense to say that economic growth is that plus growth in people and resources). Of course if you elide the distinction between "how good are people's lives" and "how much production is happening" you're not going to get much out of degrowth arguments!
economic disextensification
Okay, ugly word. But how many times have you heard this conversation?
degrowther: we should degrow the economy
growther: that would make people's lives worse
degrowther: not necessarily! we could reduce our natural resource use while technical progress allows us to transform it into utils more effectively, so that quality of life keeps going up
growther: that's growth dumbass
Somehow, the conversation always seems to stop there (at least in my experience), even though it's where it might start getting useful!
Economic growth can be broken into extensive growth (more people and natural resources) and intensive growth (better technology lets us do more with the same factor inputs.) So what the soi-disant degrowther is arguing for is disextensification.
Is disextensification desirable? This intersects with more targeted questions like:
How much can extensive and intensive growth be decoupled? (In a Malthusian model they are inversely related; in an endogenous growth model they're closely related)
What kinds of ecological limits on our civilization are we running up against?
What level of intensive/TFP growth can we expect such that total per capita growth remains positive, if we were to experience disextensification?
Does it make sense to think about total extensive use overall, or does it make more sense to just take particular cases of land use, etc, that may have undesirable effects and balance those against their economic gains? (Maybe turning rainforest into cattle raising is net negative, but turning grasslands into wheat fields is net positive.)
I have intuitions on all of these, but not firm answers.
(Some deep green types might say "fuck it, burn it all down" on semi-deontological grounds, and as with their counterparts e/acc I really can see where the intuition is coming from, but as with e/acc I can't go there because I love humanity and want the human project to continue.)
228 notes · View notes
unknought · 3 months
Text
my mom (@kamahele) is missing.
1K notes · View notes
unknought · 4 months
Text
I'm not going to try to summarize the chaos happening at OpenAI right now, for a variety of reasons including that it is still very much ongoing (though Matt Levine has what seems to me to be a good writeup of the current state of things). But suffice to say that it is not helping me be more normal about this.
I have been insane about artificial intelligence existential risk recently and what follows is an expression of that. There's not much of this which I actually believe is true; take it as a creative writing exercise maybe.
Suppose you're worried that advances in artificial intelligence might result in the creation of an unfriendly superintelligence, and you want to do research in AI safety to help prevent this. Broadly speaking, you have two options: Either you can work closely with (or for) people creating state-of-the-art AI, or you can… not do that.
Choosing to keep yourself separate dooms you to irrelevance. You won't know what's coming in enough detail for your safety research to actually target its problems, and even if you do know exactly what people should be doing differently, you aren't in a position to make those decisions.
Choosing to be closely involved with the development of cutting-edge AI is a pretty common strategy; maybe more than one might expect given that it involves helping people that you think are bringing about the end of the world. Astral Codex Ten gave the analogy:
Imagine if oil companies and environmental activists were both considered part of the broader “fossil fuel community”. Exxon and Shell would be “fossil fuel capabilities”; Greenpeace and the Sierra Club would be “fossil fuel safety” - two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties - fossil fuel community parties - and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.
This is how AI safety works now.
The central problem with this strategy is that progress in AI capabilities makes money and AI safety does not. If an organization is working on both capabilities and safety but capabilities makes them lots of money, and safety gets them nothing except goodwill with a certain handful of nerds, it's pretty easy to guess which they'll focus on. If, furthermore, to make money they need to make actual demonstrable progress in developing AI, but to win that goodwill it's sufficient to say "we're doing lots of safety research, but it'd be too dangerous for what we've learned to be public knowledge", well. Is it any surprise that there's been a lot more headway in one than the other?
"But wait!" you might say, particularly if it's the mid-2010's. "These aren't the only options. Sure, any tech giant will just be focused on what makes money, but I can make, or join, an organization that does capabilities research but has AI safety built into its founding values. It can be a nonprofit so it won't matter that all the money is in capabilities research. And it can have the principles of openness and democratization, so its discoveries will benefit everyone".
And that gets you OpenAI. Since its founding it's stopped being straightforwardly a nonprofit. I don't understand exactly what its current legal status is, but I've seen it described as a "nonprofit with a for-profit subsidiary" or a "capped for-profit". I'm just going to quote Wikipedia regarding the transition:
In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit capped at 100 times any investment.[25] According to OpenAI, the capped-profit model allows OpenAI LP to legally attract investment from venture funds, and in addition, to grant employees stakes in the company, the goal being that they can say "I'm going to OpenAI, but in the long term it's not going to be disadvantageous to us as a family."[26] Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to.[27] Prior to the transition, public disclosure of the compensation of top employees at OpenAI was legally required.[28]
Since then OpenAI has also received billions of dollars of investment from Microsoft. The "open" part of "OpenAI" has pretty much gone out the window; they've become notoriously tight-fisted with information about their research, even in comparison to the big tech companies. A big part of what makes large language models scary is that we don't understand very well what's going on inside them; this is substantially more true of GPT-4 than any of the other big-name LLMs right now, because OpenAI refused to share almost anything about how it works. When they started being more secretive I believe they cited AI safety issues; by now they're admitting that they're motivated by "competitive and safety concerns" but I don't think anyone really doubts that the money is the real motivator there.
Are they making progress on AI safety? To the best of my knowledge, there's not much external evidence for it. I don't think they've shared any safety research that they've done, and from the outside all we see are increasingly powerful models coming out at an alarming rate and large amounts of money going in. It doesn't look good!
So what could you do, or what could you have done? It kind of looks like the answer is "nothing". If you're not near the new models you don't matter, and if you are, there's so much money to be made that (on an organizational level if not an individual one) you have no choice but to abandon everything else you were trying to do in favor of making a profit. Capitalism is a profit-maximizer-maximizer, and anything it touches it makes its own.
The analogy between unfriendly AI and corporations or capitalism as a whole is one that's been much-discussed so I won't really go into it here, but one thing I want to emphasize is that from an "agentic" perspective, what it means for the economy to want something is that there is a lot of money to be made doing that thing, and the economy is very good at getting what it wants.
So (here's where I lose you if I haven't already (I should have already lost you)) what if capitalism is a sort of half-formed version of the more powerful thing it's trying to create/become? Agents of many kinds want to replicate themselves; Moloch wants to be lifted to Heaven.
The Roko's Basilisk thought experiment describes how, under certain fairly idiosyncratic assumptions about game theory, an unfriendly superintelligence can take actions to influence the behavior of people living before its creation, so as to bring about its own existence. Here there's a similar situation, although the philosophical underpinnings are much less complicated: it's capable of influencing the present because it already exists in a nascent state.
It can't be stopped because it already ate everything a long time ago, and we're just living in its belly waiting to be digested.
78 notes · View notes
unknought · 6 months
Text
(in my very limited and possibly misguided understanding,)
Kind of in between? It's more of an axiom than a derivation in the Copenhagen interpretation, but it can't be an axiom in many-worlds because many-worlds doesn't take measurement/observation as a primitive concept, and not taking measurement as fundamental is kind of its big selling point.
So for many-worlds to explain observed phenomena we would need some way of deriving the Born rule from it, and as I understand it there isn't really a consensus about whether this can be done. (Some people say they've done it; other people say that those purported derivations are using circular reasoning.)
And a major obstacle to doing this is that there aren't really any probabilities or randomness in the rules that govern the evolution of an unobserved system; in the Copenhagen interpretation, the way that classical probability comes into the picture is specifically and exclusively by way of the Born rule governing what happens when you measure things. So it's somewhat unclear where those probabilities could even potentially be coming from in the many-worlds interpretation.
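For reference, the rule in question, in its standard textbook form (stated for a nondegenerate eigenvalue; nothing here is specific to either interpretation):

```latex
\[
P(\lambda) \;=\; \bigl|\langle \lambda \,|\, \psi \rangle\bigr|^{2},
\]
where $|\psi\rangle$ is the normalized state of the system and $|\lambda\rangle$ is the
eigenvector of the measured observable with eigenvalue $\lambda$.
```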
when people say "many worlds can't derive the born rule" is there some way to derive the specific probabilities in other interpretations that fails in many worlds, or is it just that probability isn't obviously meaningful if both options are guaranteed to happen in different worlds?
14 notes · View notes
unknought · 7 months
Text
14 notes · View notes
unknought · 9 months
Note
Hey, I mean this in the nicest way possible, but dinosaurs are divided into two big groups based on whether they have a "bird" type of pelvis or a "reptile" type of pelvis. Dinosaurs are both "birds" and "reptiles". It isn't... like there's only one answer. Your poll made me sad
I am aware that it's not like there's only one answer, that's why I provided as many options as Tumblr would allow me. If I'm understanding you correctly, your answer is in some ways pretty idiosyncratic --I've never seen a taxonomist say that ornithischians ("bird-hipped" dinosaurs) are birds, and doing so would make "birds" a very strange-shaped taxon, since the modern birds we're familiar with evolved from saurischians ("lizard-hipped" dinosaurs), not ornithischians-- but fortunately for you it still falls under one of the options listed, the third one on the list: "Birds are dinosaurs but not reptiles; thus dinosaurs are not (all) reptiles". If I'm misunderstanding you and your answer is not encompassed by any of the answers I wrote out, that's what the "Other" option is for. If having to select "Other" on my Tumblr poll makes you sad, you are probably taking it too seriously.
16 notes · View notes
unknought · 9 months
Text
43 notes · View notes
unknought · 10 months
Text
Scenario: There's a nationwide referendum on some political issue you consider very important. It's binary: There are just two possible outcomes that are being decided between. It's closely contested: Pollsters are undecided about which side has a majority.
Instead of having an election, the issue is going to be decided by the following procedure: Any eligible voter can put their name into a lottery, and whoever wins the lottery decides which of the two possible outcomes happens.
52 notes · View notes
unknought · 1 year
Text
about the most recent Almost Nowhere chapter (chapter 40) and unreliable narrators
I think an important context for everything we see from Hector's perspective is this bit from chapter 34:
Hector knows.  Hector remembers all the shit we did.  He remembers the details.  He’d fucking better remember the details.
But Hector and our authors are not exactly on speaking terms right now, so that approach is a no-go.
So basically I’m just gonna give you the five-minute Cliff’s Notes version.  That’s all I know, because I’m not really me.  I’m just our authors doing some fucked-up role-play thing.  The real Annabel Lee is fucking dead, okay?
When we get scenes from the perspective of e.g. Cordelia, those are based on Cordelia's memories of her own experiences. When we get scenes from the perspective of Hector Stein, those are at best educated guesses, and at worst propaganda from his enemies. I'm thinking here of Umberto Eco's line from Ur-Fascism,
Thus, by a continuous shifting of rhetorical focus, the enemies are at the same time too strong and too weak. Fascist governments are condemned to lose wars because they are constitutionally incapable of objectively evaluating the force of the enemy.
and the way Hector always comes across as simultaneously pathetic and implausibly badass. (Not that Sylvie & co. are particularly fascist in most other respects.)
There's a sort of narrative fakeout in chapter 40: We're given a lot of reason to expect that we're about to learn what Hector is up to, both because much of the chapter is written from his perspective and because Sylvie undertakes a concerted effort to find out what he's doing. We don't, ultimately, learn very much about Hector's capabilities, what he's been doing, or what his plans are. And the reason we don't learn much is that Hector doesn't want us to. More literally, of course, he doesn't want Sylvie to find out, but it's made very salient in this chapter that our perspective is limited to that of the authors, and whatever they don't end up finding out, we don't get to know.
Almost Nowhere has a lot of focus on the idea of being-put-into-words as an act of violence. Most notably of course is the damage that Azad's translation does to the anomalings, and their attempts to mitigate and contain that damage incite almost everything else in the story. But of all the characters, it's Hector, not the anomalings, who is most successful at keeping himself from being a contained and comprehensible element of the story we are reading. For all the discussion of alien experiences as being incomprehensible and "not for us", it seems like a lot of the biggest questions that we as readers don't get to know the answers to are, ultimately, going to be variations on "what is Hector Stein's whole deal?"
But what about Twenty-Seven? She is also not one of the authors; does that mean that the many sections from her perspective are also fabricated? I don't think so. Cordelia is one of the authors, and Cordelia arbitrates with Twenty-Seven. Through her we can get a clear picture of parts of the story that otherwise might have been as murky and speculative as the parts from Hector or Annabel.
It’s a good thing Marika doesn’t know that I arbitrate with my girlfriend, she thinks.
Talk about a security hole, she thinks.
One last point, considerably more out on a limb than the rest of this (which was already pretty speculative): Grant is not an unintelligent guy, but almost every time we see him he's bumbling around and disoriented and the butt of the narrative joke. The recent instances of that are partly explained by Moon fucking with his head, but it goes a lot further than that. What if he's written that way because Azad wrote those sections of the narrative, and Grant just saw them and went "yeah, sure"?
32 notes · View notes
unknought · 1 year
Text
I have been insane about artificial intelligence existential risk recently and what follows is an expression of that. There's not much of this which I actually believe is true; take it as a creative writing exercise maybe.
Suppose you're worried that advances in artificial intelligence might result in the creation of an unfriendly superintelligence, and you want to do research in AI safety to help prevent this. Broadly speaking, you have two options: Either you can work closely with (or for) people creating state-of-the-art AI, or you can… not do that.
Choosing to keep yourself separate dooms you to irrelevance. You won't know what's coming in enough detail for your safety research to actually target its problems, and even if you do know exactly what people should be doing differently, you aren't in a position to make those decisions.
Choosing to be closely involved with the development of cutting-edge AI is a pretty common strategy; maybe more than one might expect given that it involves helping people that you think are bringing about the end of the world. Astral Codex Ten gave the analogy:
Imagine if oil companies and environmental activists were both considered part of the broader “fossil fuel community”. Exxon and Shell would be “fossil fuel capabilities”; Greenpeace and the Sierra Club would be “fossil fuel safety” - two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties - fossil fuel community parties - and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.
This is how AI safety works now.
The central problem with this strategy is that progress in AI capabilities makes money and AI safety does not. If an organization is working on both capabilities and safety but capabilities makes them lots of money, and safety gets them nothing except goodwill with a certain handful of nerds, it's pretty easy to guess which they'll focus on. If, furthermore, to make money they need to make actual demonstrable progress in developing AI, but to win that goodwill it's sufficient to say "we're doing lots of safety research, but it'd be too dangerous for what we've learned to be public knowledge", well. Is it any surprise that there's been a lot more headway in one than the other?
"But wait!" you might say, particularly if it's the mid-2010's. "These aren't the only options. Sure, any tech giant will just be focused on what makes money, but I can make, or join, an organization that does capabilities research but has AI safety built into its founding values. It can be a nonprofit so it won't matter that all the money is in capabilities research. And it can have the principles of openness and democratization, so its discoveries will benefit everyone".
And that gets you OpenAI. Since its founding it's stopped being straightforwardly a nonprofit. I don't understand exactly what its current legal status is, but I've seen it described as a "nonprofit with a for-profit subsidiary" or a "capped for-profit". I'm just going to quote Wikipedia regarding the transition:
In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit capped at 100 times any investment.[25] According to OpenAI, the capped-profit model allows OpenAI LP to legally attract investment from venture funds, and in addition, to grant employees stakes in the company, the goal being that they can say "I'm going to OpenAI, but in the long term it's not going to be disadvantageous to us as a family."[26] Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit would be unable to.[27] Prior to the transition, public disclosure of the compensation of top employees at OpenAI was legally required.[28]
Since then OpenAI has also received billions of dollars of investment from Microsoft. The "open" part of "OpenAI" has pretty much gone out the window; they've become notoriously tight-fisted with information about their research, even in comparison to the big tech companies. A big part of what makes large language models scary is that we don't understand very well what's going on inside them; this is substantially more true of GPT-4 than any of the other big-name LLMs right now, because OpenAI refused to share almost anything about how it works. When they started being more secretive I believe they cited AI safety issues; by now they're admitting that they're motivated by "competitive and safety concerns" but I don't think anyone really doubts that the money is the real motivator there.
Are they making progress on AI safety? To the best of my knowledge, there's not much external evidence for it. I don't think they've shared any safety research that they've done, and from the outside all we see are increasingly powerful models coming out at an alarming rate and large amounts of money going in. It doesn't look good!
So what could you do, or what could you have done? It kind of looks like the answer is "nothing". If you're not near the new models you don't matter, and if you are, there's so much money to be made that (on an organizational level if not an individual one) you have no choice but to abandon everything else you were trying to do in favor of making a profit. Capitalism is a profit-maximizer-maximizer, and anything it touches it makes its own.
The analogy between unfriendly AI and corporations or capitalism as a whole is one that's been much-discussed so I won't really go into it here, but one thing I want to emphasize is that from an "agentic" perspective, what it means for the economy to want something is that there is a lot of money to be made doing that thing, and the economy is very good at getting what it wants.
So (here's where I lose you if I haven't already (I should have already lost you)) what if capitalism is a sort of half-formed version of the more powerful thing it's trying to create/become? Agents of many kinds want to replicate themselves; Moloch wants to be lifted to Heaven.
The Roko's Basilisk thought experiment describes how, under certain fairly idiosyncratic assumptions about game theory, an unfriendly superintelligence can take actions to influence the behavior of people living before its creation, so as to bring about its own existence. Here there's a similar situation, although the philosophical underpinnings are much less complicated: it's capable of influencing the present because it already exists in a nascent state.
It can't be stopped because it already ate everything a long time ago, and we're just living in its belly waiting to be digested.
78 notes · View notes
unknought · 1 year
Text
Stuff about the Kelly criterion
I’ve been off Tumblr for a little while, but there’s apparently been some discussion about the Kelly criterion, a concept in probability, in relation to some things Sam Bankman-Fried said about it and how that relates to risk aversion. I’m going to do what I can to explain some aspects of the math as I understand them.
The Kelly criterion is a way of choosing how much to invest in a favorable bet, i.e. one where the expected value is positive. The Kelly criterion gives the “best” amount for a bunch of different senses of “best” in a bunch of different scenarios, but I’m going to restrict to one of the simplest ones.
Suppose you have some bet where you can bet whatever amount of money you want, you have probability p of winning, and you gain b times the amount you bet if you win. (Of course, if you lose, you lose the amount you bet.) Also suppose you get the opportunity to make this bet some large number n of times in a row, you have the same probabilities and payoff rules for each of them, and they’re independent events from each other. The assumption that all of the bets in the sequence have the same probabilities and payoff rules is made here to simplify the discussion; the basic concepts can still hold when there are a mix of different bets, but it’s a lot messier to state things and reason about them.
Also suppose that your strategies are limited to choosing a single quantity f between 0 and 1 and always betting f times your total wealth at every step. This is a pretty big restriction, and it too can be relaxed at the cost of making things much messier. But even with this restriction we’ll be able to compare the strategy prescribed by the Kelly criterion to the “all-in” strategy of always betting all of your money.
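As a concrete illustration of what this restricted family of strategies looks like, here's a minimal simulation sketch in Python; the specific numbers at the bottom (a 60/40 even-money bet, a quarter-wealth bettor versus an all-in bettor) are hypothetical examples of mine, not anything from the original discussion.

```python
import numpy as np

def simulate_fixed_fraction(p, b, f, n, w0=1.0, rng=None):
    """Make n independent bets, staking a fraction f of current wealth each time.

    Each bet is won with probability p; a win pays b times the stake, a loss
    forfeits the stake. Returns final wealth, starting from w0.
    """
    rng = np.random.default_rng() if rng is None else rng
    wealth = w0
    for _ in range(n):
        if rng.random() < p:
            wealth *= 1 + b * f  # win: the stake pays off at b-to-1
        else:
            wealth *= 1 - f      # loss: the stake is gone
    return wealth

# Hypothetical example: p = 0.6, b = 1 (a favorable even-money bet), 1000 rounds.
rng = np.random.default_rng(0)
print(simulate_fixed_fraction(p=0.6, b=1.0, f=0.25, n=1000, rng=rng))  # fixed fraction
print(simulate_fixed_fraction(p=0.6, b=1.0, f=1.0, n=1000, rng=rng))   # all-in
```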
So what is the best choice of f? The Kelly criterion gives an answer, but the sense in which it's the "best" is one that it's not obvious should apply to any choice of f. I'll state it here but keep in mind that until we've done some more calculation, we shouldn't assume that there is any choice of f which is the best in this sense.
The Kelly criterion gives a choice of f such that, for any other choice of f, the Kelly criterion produces a better result than the other choice with high probability. Here “high probability” means that the probability that the Kelly choice outperforms the other one goes to 1 as n goes to infinity.
So why is this possible?
Let X_i be the random variable representing the ratio of the money you have after the i-th bet to the amount you had before it. So your final wealth is equal to your starting wealth times the product of the X_i for i from 1 to n. Also these X_i are independent, identically distributed variables. (We can describe their distribution in terms of p, b, and f but the exact details aren't too important to the concepts I want to communicate.) Sums of random variables have some nicer things that can be said about them than products, so we take the logarithm. The logarithm of your final wealth is the log of your starting wealth plus a sum of n independent variables log(X_i).
Now, the expected value of that sum is n times the expected value of one of the individual summands, and the (weak) law of large numbers tells us that with high probability the actual value of the sum will be close to that. (To be rigorous about this: For any constant C, the probability that the sum will be further than C·n away from its expected value goes to 0 as n goes to infinity.) So for any betting strategy f, define r(f) to be the expected value of log(X_i). So if we have any two strategies f and f’, the log of your final wealth following strategy f minus the log of your final wealth following strategy f’ will be about r(f)·n − r(f’)·n, and so will be positive with high probability if r(f) > r(f’). (If you understood the rigorous definition in the previous parenthetical, you should be able to make this argument rigorous as well.) Thus with high probability the log of your final wealth will be greater using strategy f than strategy f’. Since log is an increasing function, this is equivalent to saying that with high probability, f will result in a greater final wealth than f’.
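Writing that argument out in symbols, and filling in the explicit distribution of the X_i for this setup (a sketch; W_n here is your wealth after n bets):

```latex
\[
X_i =
\begin{cases}
1 + bf & \text{with probability } p,\\[2pt]
1 - f  & \text{with probability } 1 - p,
\end{cases}
\qquad
\log W_n = \log W_0 + \sum_{i=1}^{n} \log(X_i),
\]
\[
r(f) := \mathbb{E}\bigl[\log(X_i)\bigr] = p \log(1 + bf) + (1 - p)\log(1 - f),
\]
so by the weak law of large numbers $\sum_{i=1}^{n} \log(X_i) \approx r(f)\,n$ with high
probability, and a strategy with larger $r(f)$ wins by about $\bigl(r(f) - r(f')\bigr) n$
in log-wealth.
```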
If you pick f such that r(f) is maximized, then for any other choice of f, you'll outperform that choice with high probability. This is what the Kelly criterion says to do. Maximizing r(f) can be equivalently described by saying that at each bet, you bet the amount that maximizes the expectation of the logarithm of the amount you'll have after the bet.
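For this simple setup the maximization can be done in closed form; this is just calculus applied to the r(f) above, and it recovers the standard Kelly fraction:

```latex
\[
r'(f) = \frac{pb}{1 + bf} - \frac{1 - p}{1 - f} = 0
\quad\Longrightarrow\quad
f^{*} = p - \frac{1 - p}{b} = \frac{p(b + 1) - 1}{b},
\]
which is positive exactly when the bet is favorable ($pb > 1 - p$); for an unfavorable
bet the maximum is at $f^{*} = 0$, i.e. don't bet.
```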
A pitfall to avoid here: Although the log of the final wealth can be said to be “about” a certain value with high probability, we can’t really say that the final wealth is guaranteed to be “about” anything in particular. Differences that we can consider to be negligibly small when we’re looking at the logarithm can balloon to very large differences when we’re looking at the actual value, and it is very possible for one experimental trial using a given strategy to yield something many times larger than another trial using the same strategy where you’re a little less lucky.
The Kelly criterion is not the strategy that maximizes the expected amount of money you have at the end. The best strategy for that goal is the one where you put all of your money in on every bet. This isn't inconsistent with the previously stated results; in almost all cases the Kelly criterion outperforms the all-in strategy (because the all-in strategy loses at some point and ends up with no money). But in the very unlikely event that you win every single one of your bets, you end up with an extremely large amount of money, so large that even when you multiply it by that very small probability you get something that's larger than the expected value of any other strategy.
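Here's a quick numerical sketch of that tradeoff, again with made-up parameters (a 60/40 even-money bet repeated 20 times): the all-in strategy has a far larger expected value, but all of that expectation lives in the single branch where you win every bet.

```python
from math import comb

# Hypothetical parameters: win probability p, payout b-to-1, n rounds, starting wealth w0.
p, b, n, w0 = 0.6, 1.0, 20, 1.0
f_kelly = p - (1 - p) / b  # Kelly fraction; 0.2 for these numbers

# All-in: expected final wealth is (p*(1+b))**n * w0, but you end with anything
# at all only if you win every single bet, which has probability p**n.
ev_all_in = (p * (1 + b)) ** n * w0
p_survive = p ** n

# Kelly: sum over the number of wins k (binomial distribution).
ev_kelly = w0 * sum(
    comb(n, k) * p**k * (1 - p) ** (n - k)
    * (1 + b * f_kelly) ** k * (1 - f_kelly) ** (n - k)
    for k in range(n + 1)
)

print(f"all-in: E[wealth] = {ev_all_in:.1f}, P(not broke) = {p_survive:.6f}")
print(f"Kelly:  E[wealth] = {ev_kelly:.2f}")
# all-in: E[wealth] ≈ 38.3 but P(not broke) ≈ 0.000037; Kelly: E[wealth] ≈ 2.19
```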
What if, instead of trying to maximize the expected dollar payoff, you have some utility function of wealth, and you're trying to maximize the expected value of that? Well, it depends what your utility function is. If your utility function is the logarithm of your wealth, the Kelly criterion maximizes your expected utility; in fact, in this case we don't even need to assume n is large or invoke the law of large numbers. But going back to the case of large n, there are a lot of other utility functions where the Kelly criterion is also optimal.
Think about it like this: the Kelly strategy outperforms any other strategy in almost all cases; the only situation where you might still prefer the other strategy is if, in the tiny chance that you get a better outcome, your outcome is so much better that it makes up for losing out the vast majority of the time. So if your utility function grows slower than the logarithm, you care even less about that tiny chance of vast riches than you would if you had a logarithmic utility function, so the Kelly criterion continues to be optimal.
More generally, I think it can be shown that when comparing the Kelly criterion to some other strategy, the probability of that other strategy doing better than it decays exponentially in n. Since the amount the other strategy can obtain in that tail situation grows at most exponentially in n, this implies that as long as your utility function grows slower than W^ε (as a function of your wealth W) for all ε>0, you won't care about the tail, so the Kelly criterion is still optimal. If your utility function grows faster than that, i.e. if there is some ε>0 such that your utility function grows faster than W^ε, then I think for sufficiently favorable bets, all-in comes out ahead again.
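Here's a rough version of that tail computation for the all-in strategy specifically (a sketch of my own; it ignores intermediate strategies but captures the tradeoff):

```latex
\[
\underbrace{p^{\,n}}_{\Pr(\text{win every bet})} \cdot\, U\!\bigl((1 + b)^{n} W_0\bigr)
\;=\; \bigl(p\,(1 + b)^{\varepsilon}\bigr)^{n} W_0^{\varepsilon}
\quad\text{when } U(W) = W^{\varepsilon},
\]
which goes to $0$ when $\varepsilon < \log(1/p)/\log(1+b)$ and blows up when $\varepsilon$
is larger (or, for a fixed $\varepsilon > 0$, when the bet is favorable enough), matching
the dichotomy above.
```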
Okay, but how does all of this apply in the real world? Honestly I'm not sure. If your utility function is your individual well-being, it seems very likely to me that that grows logarithmically or slower; if what you care about is maximizing the amount of good you do for the world by charitable donations, I think there is some merit to SBF's argument that you should treat that utility as a linear function of money, at least up to a certain point. But even he acknowledged that it drops off significantly once you get into the trillions, and since the reasons for potentially preferring riskier strategies over the Kelly criterion hinged on exponentially small probabilities of exponentially large payoffs, I think that that trillion-dollar regime might actually be pretty relevant to the computation.
Really any utility function should be eventually constant, but in that case the Kelly criterion ceases to be optimal in the way discussed before. For large enough n, it will get you all the money you could want, but so will any other strategy other than all-in and “never bet anything”. Obviously this is not a good model of how the world works. To repair this we probably want to introduce time-discounting, but to make sense of that we need to have some money getting spent before the end of the experiment rather than all of it available for reinvesting, and by this point things have gotten far enough away from the original scenario that it’s hard to tell how relevant the conclusions from it even are. It seems like it’s a useful heuristic in a pretty wide range of scenarios? But I have no idea whether SBF was right that he was not in one of them.
To be clear, none of this is to excuse his actions; whether or not he should have been applying the Kelly criterion, I think “committed billions of dollars of fraud” does a better job of capturing what he did wrong than “was insufficiently risk-averse”.
68 notes · View notes
unknought · 1 year
Text
So: Kelsey has confirmed in a fairly public Discord server that the screenshot I posted earlier was genuine. I was hoping that someone else would end up sharing that confirmation to Tumblr, but to my knowledge this has not happened. As such, if you're not in that server, me saying that Kelsey confirmed it should probably not convince you of anything, but you're not the primary intended audience for this post.
Kelsey says that shortly after posting the messages in the screenshot I shared earlier, she realized that it was overstated and posted a correction, and that by omitting this correction I and/or my source was deliberately misrepresenting what she said. I believe this is a significant distortion of the truth. I believe the "correction" she's referring to is the following:
[Screenshot of Kelsey's follow-up messages]
This is not a correction, it is a reframing. It does not contradict anything she said in the screenshot I posted earlier, nor anything I said based on that in my post. Although it is clearly intended to downplay the involvement of the EAs in question, it doesn't really say very much of substance at all. "I don't know what the sanctions would have been like without their involvement; maybe they would have even been worse" is something that could be truthfully said about anyone with any level of involvement. "I am very sure EAs did not cause there to be sanctions - that is definitely a very high level administrative decision" puts an upper bound on their involvement, but a very high one, and I was pretty explicit when I was talking about this that I wasn't claiming that anyone high up in the administration and involved in making the final decision was motivated by AI concerns. And honestly, regardless of what she said later, I don't think there is anything you can accidentally overstate as "EAs wrote the semiconduct export controls" that wouldn't itself be pretty damning.
Anyway, this is the last thing I'm going to say on this topic. I don't think I have much relevant non-public information at this point, and while it's tempting to try to be involved in evaluating future developments, arguing for my interpretation of the facts, keeping the known information documented in a single place, etc., there are lots of other people who are at least as well-positioned to do those things as I am.
17 notes · View notes
unknought · 1 year
Text
UPDATE: I got permission to share the following screenshot. This is Kelsey Piper aka theunitofcaring, a prominent member of the effective altruist community, talking in a Discord channel.
[Screenshot of the Discord messages; transcribed below]
I did not take the screenshot but I trust it to be genuine. It is the main source for the claims I made above, so if there are any details where what I said seems unsupported by the screenshot, you should probably go with the screenshot.
(This is not a bot; I believe that tag shows up when people post to Discord via IRC or other messaging services rather than posting directly to Discord.)
There's a transcript in the readmore.
Transcript: A series of messages from a Discord user with the display name "nirvana" and the "bot" tag. The messages read as follows:
anyway, I should start again and provide all the context I want people to speculate with
- EAs wrote the semiconduct export controls
- this does slow down China, and many people think the point was just slowing down/winning an arms race with China
- that was not the primary aim in writing the semiconduct export controls, though slowing stuff down Generally Considered Good
- of at least equal and possibly greater importance was that a bunch of global governance stuff looks way easier if there is one chip supply chain and no alternatives
- specifically, the person I talked to thought it enabled some hardware based compute controls which have promise of actually helping (with adherence to some kind of global agreement to not train powerful systems/maybe to not do ML at all?)
- I had previously thought hardware based compute controls were known impossible
- but I'm pretty impressed with the people who wrote the semiconductor export controls, so I'm thinking whether there's something I'm missing that might be part of their model of why it might work to do something hardware based
- if I'm not missing something, the next likeliest explanation imo is that I misunderstood something
Effective altruists wrote the semiconductor export restrictions
The Biden administration recently imposed restrictions on exports to China of certain high-end microchips and technologies used to produce them. This was ostensibly in order to prevent their use in Chinese military applications, and probably that was the primary reason the administration chose to impose the controls, but the people who actually wrote the restrictions apparently had other motivations.
They were members of the effective altruist community and were motivated by the idea that if there is a single supply chain for the high-end chips (i.e. a US monopoly) it will become more politically feasible to impose restrictions on how those chips can be used. Specifically, that it might be possible to make the chips in such a way that they cannot be used to train powerful machine learning models. For context, the concern about AI among effective altruists is that a sufficiently advanced system might wipe out humanity.
I don’t know a lot about foreign policy but my understanding is that this is a very aggressive move and that it will do significant damage to the Chinese economy (and very possibly hurt the US economy as well, depending on how various things shake out). It’s also, I think, an unprecedented step in decoupling the economies of the two countries, and therefore a significant step toward a second cold war.
Prior to learning about this, my feeling about AI-focused effective altruists was that they were probably wasting a lot of money but not doing anything much worse than that. Now it seems like they are influencing global events in ways that are not widely known and are likely very bad.
If I had specific names and definitive proof I would probably be trying to talk to a journalist, but I don’t. What I do have is entirely convincing to me but due to privacy concerns I’m not going to share where this information is coming from. As such this is essentially a rumor, but even as a rumor I think it’s probably of interest to a bunch of people on Tumblr who are adjacent to that community.
230 notes · View notes
unknought · 1 year
Text
A point of clarification:
When I said “This was ostensibly in order to prevent their use in Chinese military applications, and probably that was the primary reason the administration chose to impose the controls” the reason for the hedging “probably” was that they might (as far as I know) be motivated by considerations like wanting to benefit US chip manufacturers, or wanting to damage the Chinese economy more generally, or other things that I don’t even know about. I don’t think that there is any real possibility that anyone high up in the administration and involved in making the final decision to impose the restrictions cares much about AI risk.
Effective altruists wrote the semiconductor export restrictions
The Biden administration recently imposed restrictions on exports to China of certain high-end microchips and technologies used to produce them. This was ostensibly in order to prevent their use in Chinese military applications, and probably that was the primary reason the administration chose to impose the controls, but the people who actually wrote the restrictions apparently had other motivations.
They were members of the effective altruist community and were motivated by the idea that if there is a single supply chain for the high-end chips (i.e. a US monopoly) it will become more politically feasible to impose restrictions on how those chips can be used. Specifically, that it might be possible to make the chips in such a way that they cannot be used to train powerful machine learning models. For context, the concern about AI among effective altruists is that a sufficiently advanced system might wipe out humanity.
I don’t know a lot about foreign policy but my understanding is that this is a very aggressive move and that it will do significant damage to the Chinese economy (and very possibly hurt the US economy as well, depending on how various things shake out). It’s also, I think, an unprecedented step in decoupling the economies of the two countries, and therefore a significant step toward a second cold war.
Prior to learning about this, my feeling about AI-focused effective altruists was that they were probably wasting a lot of money but not doing anything much worse than that. Now it seems like they are influencing global events in ways that are not widely known and are likely very bad.
If I had specific names and definitive proof I would probably be trying to talk to a journalist, but I don’t. What I do have is entirely convincing to me but due to privacy concerns I’m not going to share where this information is coming from. As such this is essentially a rumor, but even as a rumor I think it’s probably of interest to a bunch of people on Tumblr who are adjacent to that community.
230 notes · View notes