Joe Carlsmith on expected utility maximisation

There’s nothing special about small probabilities—they’re just bigger conditional probabilities in disguise.

[…]

Suppose the situation is: 1000 people are drowning. A is a certainty of saving one of them, chosen at random. B is a 1% chance of saving all of them.

Thus, for each person, A gives them a 0.1% chance of living; whereas B gives them a 1% chance. So every single person wants you to choose B. Thus: if you’re not choosing B, what are you doing, and why are you calling it “helping people”? Are you, maybe, trying to “be someone who saved someone’s life,” at the cost of making everyone 10x less likely to live? F*** that.
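A quick sketch of the arithmetic behind that comparison (just the per-person probabilities stated above, written out; the notation is mine, not the author’s):

```latex
\[
P(\text{person } i \text{ lives} \mid A) = \tfrac{1}{1000} = 0.1\%,
\qquad
P(\text{person } i \text{ lives} \mid B) = 1\%,
\]
\[
\mathbb{E}[\text{lives saved} \mid A] = 1,
\qquad
\mathbb{E}[\text{lives saved} \mid B] = 0.01 \times 1000 = 10.
\]
```

So B is better for every single person individually, and it also saves ten times as many lives in expectation.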

[…]

If, in the face of a predictable loss, it’s hard to remember that e.g. you value saving a thousand lives a thousand times more than saving one, then you can remember, via coin-flips, that you value saving two twice as much as saving one, saving four twice as much as saving two, and so on.
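One way to write out that coin-flip bookkeeping (a sketch in standard expected-utility notation; the utility function u over lives saved, normalized so that u(0) = 0, is my labelling, not the author’s):

```latex
% Indifference between saving n lives for sure and a fair coin flip
% that saves 2n lives on heads (and none on tails), with u(0) = 0:
\[
u(n) = \tfrac{1}{2}\,u(2n) \;\Longrightarrow\; u(2n) = 2\,u(n),
\]
\[
u(2) = 2\,u(1), \quad u(4) = 4\,u(1), \quad \dots, \quad u(2^{k}) = 2^{k}\,u(1).
\]
```

Chaining ten such doublings already commits you to valuing 2^10 = 1024 lives roughly a thousand times as much as one, which is the hard-to-hold-onto ratio the quote starts from.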

[…]

There’s a vibe […] that’s fairly core to my own relationship with EUM: namely, something about understanding your choices as always “taking a stance,” such that having values and beliefs is not some sort of optional thing you can do sometimes, when the world makes it convenient, but rather a thing that you are always doing, with every movement of your mind and body. And with this vibe in mind, I think, it’s easier to get past a conception of EUM as some sort of “tool” you can use to make decisions, when you’re lucky enough to have a probability assignment and a utility function lying around — but which loses relevance otherwise. EUM is not about “probabilities and utilities first, decisions second”; nor, even, need it be about “decisions first, probabilities and utilities second,” as the “but it’s not action-guiding!” objectors sometimes assume. Rather, it’s about a certain kind of harmony in your overall pattern of decisions — one that can be achieved by getting your probabilities and utilities together first, and then figuring out your decisions, but which can also be achieved by making sure your decision-making satisfies certain attractive conditions, and letting the probabilities and utilities flow from there. And in this latter mode, faced with a choice between e.g. X with certainty, vs. Y if heads (and nothing otherwise), one need not look for some independently specifiable unit of value to tally up and check whether Y has at least twice as much of it as X. Rather, to choose Y-if-heads, here, just is to decide that Y, to you, is at least twice as valuable as X.
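A minimal rendering of that last claim in the usual notation (again with u as my label for the utility function, and “nothing” standing for the tails outcome):

```latex
% Choosing Y-if-heads over X-for-certain, under expected utility with a fair coin:
\[
\tfrac{1}{2}\,u(Y) + \tfrac{1}{2}\,u(\text{nothing}) \;\ge\; u(X)
\;\Longleftrightarrow\;
u(Y) - u(\text{nothing}) \;\ge\; 2\bigl(u(X) - u(\text{nothing})\bigr),
\]
```

so, measured against the tails outcome, picking Y-if-heads just is treating Y as at least twice as valuable as X.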

I emphasize this partly because if — as I did — you turn towards the theorems I’ll discuss hoping to answer questions like “would blah resources be better devoted to existential risk reduction or anti-malarial bednets?”, it’s important to be clear about what sort of answers to expect. There is, in fact, greater clarity to be had, here. But it won’t live your life for you (and certainly, it won’t tell you to accept some particular ethic — e.g., utilitarianism). Ultimately, you need to look directly at the stakes — at the malaria, at the size and value of the future — and at the rest of the situation, however shrouded in uncertainty. Are the stakes high enough? Is success plausible enough? In some brute and basic sense, you just have to decide.

https://handsandcities.com/2022/03/16/on-expected-utility-part-1-skyscrapers-and-madmen/
https://handsandcities.com/2022/03/18/on-expected-utility-part-2-why-it-can-be-ok-to-predictably-lose/
