Reading Robin Hanson

Tyler Cowen characterises Robin Hanson and Thomas Malthus as “thinkers of constraints”. I read them both while wondering: what constraints apply to moral philosophy?

In This is the Dream Time, Robin Hanson claims that we are living through an unusual period of abundance. Usually, in the natural world, when the available resources grow, population grows too, so after a brief period of abundance, most members of the species revert to subsistence level. Since the industrial revolution, the resources available to humanity have been growing much faster than population, and so, Hanson suggests, we should think of ourselves as living through one of these unusual periods of abundance. In such periods, competitive pressures are eased, and we can sustain all sorts of belief-behaviour packages that do not maximise our rate of reproduction.
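The Malthusian dynamic Hanson describes can be put in a toy model (my own illustration, not Hanson's): if population growth tracks resources, per-capita income reverts toward subsistence after a windfall. The growth-rate parameter and numbers here are arbitrary assumptions.

```python
# Toy Malthusian sketch: a resource windfall produces a brief period of
# abundance, then population growth pushes income back to subsistence.
resources = 1000.0
population = 1000.0
subsistence = 1.0   # income level at which population is stable

resources *= 2  # the windfall: income per head doubles overnight

history = []
for year in range(50):
    income = resources / population
    history.append(income)
    # Population grows when income exceeds subsistence, shrinks below it.
    population *= 1 + 0.1 * (income - subsistence)

# income starts at 2.0 and decays back toward the subsistence level of 1.0
```

The "unusual period of abundance" is the transient at the start of the series; Hanson's point is that since the industrial revolution we have been living inside such a transient.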

How’s this for a prediction:

Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialed to the max.

In the future, we should expect population and wealth growth rates to converge again, and for most people to once again live at subsistence levels—unless we co-ordinate to constrain the reproduction rate.

Hanson predicts that our descendants will eventually explicitly endorse maximising their reproduction rate as a value. The basic thought:

Biological evolution selects roughly for creatures that do whatever it takes to have more descendants in the long run. When such creatures have brains, those brains are selected for having supporting habits. And to the extent that such brains can be described as having beliefs and values that combine into actions via expected utility theory, then these beliefs and values should be ones which are roughly behaviorally-equivalent to the package of having accurate beliefs, and having values to produce many descendants (relative to rivals). […] with sufficient further evolution, our descendants are likely to directly and abstractly know that they simply value more descendants.

During a conversation with Agnes Callard, Hanson says:

I do tend to think natural selection, or selection, will just be a continuing force for a long time. And the main alternative is governance. I actually think one of the main choices that we will have, and the future will have, is the choice between allowing competition or replacing it with governance.

The question of where and how to push back against the logic of competition and selection will be central over the long-term. Co-ordinating humans on earth to make such decisions will be hard, but perhaps not impossible. Co-ordinating with other civilisations from other galaxies… seems very hard, but perhaps not impossible.

On this perspective, there’s a direct tradeoff between sustaining non-adaptive values we care about over the short-term and sustaining our existence over the long-term. Invest too much in non-adaptive values, and the long-term viability of your group will be threatened by another group that is less invested in these values.

Attitudes towards this evolutionary perspective vary. Hanson does not consider this model of things “that dark”, though he concedes it is not as bright as “the unicorns and fairies that fill dream-time visions”. In fact, Hanson is concerned about a future where our descendants restrain competitive dynamics too much:

When I try to do future analysis one of the biggest contrary assumptions or scenarios that I focus on is: what if we end up creating a strong world government that strongly regulates investments, reproduction and other sorts of things, and thereby prevents the evolutionary environment in which the evolutionary analysis applies. And I’m very concerned about that scenario. That is my best judgement of our biggest long term risk […] the creation of a strong civilisation-wide government that is going to be wary of competition and wary of allowing independent choices and probably wary of allowing interstellar colonisation. That is, this vast expansion into the universe could well be prevented by that.

Eliezer Yudkowsky and Carl Shulman, by contrast, are keener on the idea of replacing competition with governance, and I think Nick Bostrom is too.

Reading Hanson, and reading more about Pragmatism, has left me with a greater awareness that, over the long run, value systems need to promote survival, reproduction and resource accumulation, or else reliably prevent competition in these domains. If they don’t, they will be replaced with those that do.

If you’re in the business of reflecting on how we should change our values, you probably want to bear this in mind.

One option is to relax the longevity demand. We could settle for pursuing things we care about over the short run, and accept that in the long run, things will come to seem weird and bad by our current lights (but hopefully fine to our (very different) descendants).

writing robin hanson evolution futurism moral philosophy pragmatism parenting

Robin Hanson on his methods

My usual first tool of analysis is competition and selection.

To predict what rich creatures do, you need to know what they want. To predict what poor creatures do, you just need to know what they need to do to survive.

Looking back through history it is clear that humanity has not been driving the train. There has been this train of progress or change and it has been a big fast train, especially lately, and it is making enormous changes all through the world but it is not what we would choose if we sat down and discussed it or voted. We just don’t have a process for doing that.

Whatever processes that changed things in the past will continue. So I can use those processes to predict what will happen. I am assuming we will continue to have a world with many actions being taken for local reasons as they previously were. But that’s a way to challenge my Age of Em hypothesis: you can say no, we will between now and then acquire an ability to foresee the consequences of such changes and to talk together and to vote together on do we want it, and we will have the ability to implement such choices and that will be a change in the future that will prevent the Age of Em.

https://notunreasonable.com/2022/03/21/robin-hanson-on-distant-futures-and-aliens%ef%bf%bc/

quote robin hanson futurism

Dominic Cummings: given they don’t take nuclear weapons seriously, never assume they’re taking X seriously

For many years I’ve said that a Golden Rule of politics is that, given our leaders don’t take nuclear weapons seriously, never assume they’re taking X seriously and there is a team deployed on X with the incentives and skills to succeed.

People think this is an overstated metaphor but I always meant it literally.

Having explored the nuclear enterprise with deep state officials 2019-20, I can only stress just how extremely literally I mean this Golden Rule.

https://dominiccummings.substack.com/p/people-ideas-machines-ii-catastrophic

quote dominic cummings politics state capacity

Joe Carlsmith on expected utility maximisation

There’s nothing special about small probabilities—they’re just bigger conditional probabilities in disguise.

[…]

Suppose the situation is: 1000 people are drowning. A is a certainty of saving one of them, chosen at random. B is a 1% chance of saving all of them.

Thus, for each person, A gives them a .1% chance of living; whereas B gives them a 1% chance. So every single person wants you to choose B. Thus: if you’re not choosing B, what are you doing, and why are you calling it “helping people”? Are you, maybe, trying to be someone who “saved someone’s life,” at the cost of making everyone 10x less likely to live? F*** that.

[…]

If, in the face of a predictable loss, it’s hard to remember that e.g. you value saving a thousand lives a thousand times more than saving one, then you can remember, via coin-flips, that you value saving two twice as much as saving one, saving four twice as much as saving two, and so on.

[…]

There’s a vibe […] that’s fairly core to my own relationship with EUM: namely, something about understanding your choices as always “taking a stance,” such that having values and beliefs is not some sort of optional thing you can do sometimes, when the world makes it convenient, but rather a thing that you are always doing, with every movement of your mind and body. And with this vibe in mind, I think, it’s easier to get past a conception of EUM as some sort of “tool” you can use to make decisions, when you’re lucky enough to have a probability assignment and a utility function lying around — but which loses relevance otherwise. EUM is not about “probabilities and utilities first, decisions second”; nor, even, need it be about “decisions first, probabilities and utilities second,” as the “but it’s not action-guiding!” objectors sometimes assume. Rather, it’s about a certain kind of harmony in your overall pattern of decisions — one that can be achieved by getting your probabilities and utilities together first, and then figuring out your decisions, but which can also be achieved by making sure your decision-making satisfies certain attractive conditions, and letting the probabilities and utilities flow from there. And in this latter mode, faced with a choice between e.g. X with certainty, vs. Y if heads (and nothing otherwise), one need not look for some independently specifiable unit of value to tally up and check whether Y has at least twice as much of it as X. Rather, to choose Y-if-heads, here, just is to decide that Y, to you, is at least twice as valuable as X.

I emphasize this partly because if — as I did — you turn towards the theorems I’ll discuss hoping to answer questions like “would blah resources be better devoted to existential risk reduction or anti-malarial bednets?”, it’s important to be clear about what sort of answers to expect. There is, in fact, greater clarity to be had, here. But it won’t live your life for you (and certainly, it won’t tell you to accept some particular ethic — e.g., utilitarianism). Ultimately, you need to look directly at the stakes — at the malaria, at the size and value of the future — and at the rest of the situation, however shrouded in uncertainty. Are the stakes high enough? Is success plausible enough? In some brute and basic sense, you just have to decide.
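The drowning arithmetic in the excerpt can be checked in a few lines (my own sketch of the numbers in the quote):

```python
# Carlsmith's drowning example: 1000 people are drowning.
# Option A: save one person, chosen at random, with certainty.
# Option B: a 1% chance of saving all of them.
n = 1000
p_survive_A = 1 / n    # each person's survival chance under A: 0.1%
p_survive_B = 0.01     # each person's survival chance under B: 1%

expected_saved_A = 1.0        # A saves exactly one life
expected_saved_B = 0.01 * n   # B saves 10 lives in expectation

# Every individual is 10x more likely to live under B, which is why
# Carlsmith says every single person wants you to choose B.
ratio = p_survive_B / p_survive_A
```

So the "predictable loss" option B dominates from each drowning person's own perspective, not just in aggregate.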

https://handsandcities.com/2022/03/16/on-expected-utility-part-1-skyscrapers-and-madmen/ https://handsandcities.com/2022/03/18/on-expected-utility-part-2-why-it-can-be-ok-to-predictably-lose/

quote joe carlsmith decision theory

Francis Bacon on pragmatism

The roads to human power and to human knowledge lie close together, and are nearly the same; nevertheless, on account of the pernicious and inveterate habit of dwelling on abstractions, it is safer to begin and raise the sciences from those foundations which have relation to practice and let the active part be as the seal which prints and determines the contemplative counterpart.

https://en.wikisource.org/wiki/Novum_Organum/Book_II_(Spedding)

quote francis bacon pragmatism

Adam Waytz on The Illusion of Explanatory Depth

If you asked one hundred people on the street if they understand how a refrigerator works, most would respond, yes, they do. But ask them to then produce a detailed, step-by-step explanation of how exactly a refrigerator works and you would likely hear silence or stammering. This powerful but inaccurate feeling of knowing is what Leonid Rozenblit and Frank Keil in 2002 termed “the illusion of explanatory depth” (IOED), stating, “Most people feel they understand the world with far greater detail, coherence, and depth than they really do.”

Rozenblit and Keil initially demonstrated the IOED through multi-phase studies. In a first phase, they asked participants to rate how well they understood artifacts such as a sewing machine, crossbow, or cell phone. In a second phase, they asked participants to write a detailed explanation of how each artifact works, and afterwards asked them to re-rate how well they understand each one. Study after study showed that ratings of self-knowledge dropped dramatically from phase one to phase two, after participants were faced with their inability to explain how the artifact in question operates. Of course, the IOED extends well beyond artifacts, to how we think about scientific fields, mental illnesses, economic markets and virtually anything we are capable of (mis)understanding.

https://www.edge.org/response-detail/27117

quote psychology epistemology