Nick Bostrom on “turnkey totalitarianism”

Developing a system for turnkey totalitarianism means incurring a risk, even if one does not intend for the key to be turned.

One could try to reduce this risk by designing the system with appropriate technical and institutional safeguards. For example, one could aim for a system of ‘structured transparency’ that prevents concentrations of power by organizing the information architecture so that multiple independent stakeholders must give their permission in order for the system to operate, and so that only the specific information that is legitimately needed by some decision-maker is made available to her, with suitable redactions and anonymization applied as the purpose permits. With some creative mechanism design, some machine learning, and some fancy cryptographic footwork, there might be no fundamental barrier to achieving a surveillance system that is at once highly effective at its official function yet also somewhat resistant to being subverted to alternative uses.

How likely this is to be achieved in practice is of course another matter, which would require further exploration. Even if a significant risk of totalitarianism would inevitably accompany a well-intentioned surveillance project, it would not follow that pursuing such a project would increase the risk of totalitarianism. A relatively less risky well-intentioned project, commenced at a time of comparative calm, might reduce the risk of totalitarianism by preempting a less-well-intentioned and more risky project started during a crisis. But even if there were some net totalitarianism-risk-increasing effect, it might be worth accepting that risk in order to gain the general ability to stabilize civilization against emerging Type-1 threats (or for the sake of other benefits that extremely effective surveillance and preventive policing could bring).
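The “structured transparency” idea above is easy to make concrete. Below is a minimal, illustrative sketch (not Bostrom’s design) of the two ingredients the quote mentions: a request to view surveillance data must be approved by several independent stakeholders, and only the fields that the requester’s stated purpose legitimately needs are released, with everything else redacted. The purposes, field names, stakeholders and the 3-approval threshold are all invented for illustration; a real system would enforce this with mechanism design and cryptography (e.g. threshold approvals) rather than a trusted gatekeeper.

    # Illustrative sketch of "structured transparency": data access requires
    # sign-off from multiple independent stakeholders, and only the fields
    # allowed for the requester's stated purpose are released. All purposes,
    # fields and thresholds below are hypothetical.

    from dataclasses import dataclass, field

    # Fields each (hypothetical) purpose is permitted to see; the rest is redacted.
    PURPOSE_ALLOWED_FIELDS = {
        "pandemic-response": {"location", "pathogen_test_result"},
        "weapons-detection": {"location", "purchase_history"},
    }

    APPROVAL_THRESHOLD = 3  # e.g. 3 independent oversight bodies must consent


    @dataclass
    class AccessRequest:
        requester: str
        purpose: str
        approvals: set = field(default_factory=set)

        def approve(self, stakeholder: str) -> None:
            self.approvals.add(stakeholder)


    def release(record: dict, request: AccessRequest) -> dict:
        """Release only the legitimately needed fields, and only if enough
        independent stakeholders have approved the request."""
        if request.purpose not in PURPOSE_ALLOWED_FIELDS:
            raise PermissionError("unknown purpose")
        if len(request.approvals) < APPROVAL_THRESHOLD:
            raise PermissionError("insufficient independent approvals")
        allowed = PURPOSE_ALLOWED_FIELDS[request.purpose]
        # Redact everything outside the purpose's allow-list.
        return {k: v for k, v in record.items() if k in allowed}


    if __name__ == "__main__":
        record = {
            "name": "Alice",
            "location": "Zone 4",
            "pathogen_test_result": "negative",
            "purchase_history": ["nothing unusual"],
        }
        req = AccessRequest(requester="health-agency", purpose="pandemic-response")
        for body in ("court", "parliamentary-committee", "independent-auditor"):
            req.approve(body)
        print(release(record, req))  # only location and test result are released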

Quote nick bostrom futurism vulnerable world hypothesis

Naturalism, pragmatism, impartiality

Joshua Greene’s naturalism

Joshua Greene sees morality as part of the natural world 1. It emerges from evolution, because certain kinds of cooperation are adaptive.

In his breakdown:

  • Morality helps individuals cooperate 2. Codes of morality operate within groups, but vary between groups.
  • “Metamorality” helps groups cooperate. For groups to cooperate, they need to find a common currency, even if they have quite different codes of morality.

On this picture, both morality and metamorality stem from a practical problem—how can creatures like us flourish, get along, and have many descendants?

What Greene calls “modern ethics” is mostly concerned with the question: how can we improve our metamorality?

Greene, following Jeremy Bentham, thinks that the best candidate for a common currency to support metamorality is: quality of experience. If you keep asking people why they care about something (5-whys style), it often comes down to the quality of their experience—crudely: whether it amounts to pleasure or suffering, an experience they would gladly choose or strive to avoid.

Naturalism and impartiality

Where does the ideal of impartiality fit into this picture?

Moral philosophers and people associated with the effective altruism movement often characterise impartiality as:

Giving equal weight to everyone’s interests.

This is often cashed out in negative terms, as:

Trying to avoid giving undue weight to particular interests based on (putatively) non-morally-relevant factors such as race, gender, proximity in space and time, nationality, species membership, or substrate.

Usually, such lists are motivated by a positive claim about what kinds of things are morally-relevant, such as capacity for wellbeing, or sentience.

Let’s bracket the question of what our positive claim, and our list of morally-irrelevant factors, should look like. Let’s instead zoom in on the “undue weight” part.

At the level of within-group morality, one can tell a story about how norms of (relatively) equal treatment and fairness could emerge as stable equilibria, grounded in (relatively) equal distributions of power in forager societies. 3

At the level of between-group metamorality, we can tell a similar story: groups will be reluctant to cooperate if they think their interests are unduly neglected (though they may in fact accept a bunch of unfair treatment, if their other options—e.g. conflict—seem worse).

So how do we cash out “undue weight” in naturalistic terms? Well, the obvious candidate is: non-adaptive.

In moral philosophy and effective altruism, the “impartiality story” is not told as though it’s based on evolutionary logic of adaptive bargains and long-run equilibria. The story is not “we should be impartial in order to solve our co-operation problems and maximise reproductive fitness”; rather, the idea is “we should impartially promote the good because… well… that’s the right thing to do”.

This seems like a case where your metaethics, or the story you tell about what we’re up to when we do moral philosophy, rather matters.

Bernard Williams, discussing the limits to impartiality, approvingly quotes Max Stirner:

The tiger who attacks me is in the right, and so am I when I strike him down. I defend against him not my right, but myself.

—Max Stirner

Williams imagines that humanity is threatened by an alien civilisation, and suggests that in such a situation, it would not be appropriate to ask:

  (a) What outcome would maximise value according to our best impartial axiology?

For Williams as for Greene, our impartial axiology is ultimately a tool that’s supposed to serve us, not an external imperative that we are supposed to serve. Thoughts to the contrary are a relic of “a world not yet fully disenchanted”, i.e. the artefact of a non-secular worldview 4.

Rather, he thinks we should ask:

  (b) What outcome would be best for us (or according to us)?

The choice between (a) and (b) isn’t just academic: we face situations where we must decide which of these questions to ask now, soon, and in the long-run:

  1. (now) Western liberalism vs Political Islam; West vs China; etc
  2. (soon?) Digital Minds
  3. (long-run) Alien civilisations

Naturalism vs non-naturalism; pragmatism and normativity

For the naturalist, moral and metamoral questions are quite empirical. In the current environment, which ideals actually work?

(The normativity within “actually work” boils down to “adaptive fitness”, whether we recognise this or not.)

As Bostrom & Shulman reminded us recently 5, we can reasonably reject the insistence of many philosophers that there’s a fundamental distinction between descriptive and normative claims. After reflecting on Pragmatism last autumn, I became more confident that this distinction is not, in fact, fundamental. Rather, things are blurry—there’s no such thing as a purely descriptive claim, because when we begin our enquiry we’re always already bound up in the normative project of being humans trying to get by in the world, with various aims and agendas baked in 6.

On the naturalistic perspective, we started out trying to solve a practical problem—how can we get along with groups with which, on the surface, we don’t have much in common—and ended up mistakenly thinking of ourselves as doing something else (seeking the (meta)moral truth, then following that). If that’s our self-image, we won’t be so concerned with empirics: we’ll just try whatever our culture and our moral philosophers come up with, and if it “works” in the naturalistic sense, all good. Otherwise, we’ll get wiped out, perhaps gradually, perhaps quickly.

The non-naturalist impartialist thinks there is an external, non-human standard, so they are likely to be more interested in revisionary or revolutionary maximisation—maximising whatever they think is valuable, independently of humans—and more inclined to chafe against the constraints imposed by being the kinds of creatures we find ourselves to be.

On the naturalistic perspective, we’ll be more concerned with thinking about what flavours of impartiality are going to work well for us over the long run. We’d be more inclined to, like the pragmatist, keep coming back to the question: what problem are we actually trying to solve here?


  1. See Moral Tribes or his interview with Sean Carroll.↩︎

  2. “A set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of cooperation.” Moral Tribes p.23↩︎

  3. People seem to think that before the agricultural revolution, human tribes were much more egalitarian. Cf. Hanson on forager vs farmer morality.↩︎

  4. Derek Parfit, and other non-naturalists, would disagree, but despite some hunting, I’ve not found arguments for non-naturalism that strike me as persuasive.↩︎

  5. http://www.nickbostrom.com/papers/mountethics.pdf↩︎

  6. Nietzsche was also very clear on this.↩︎

writing moral philosophy impartiality pragmatism naturalism joshua greene bernard williams

Digital minds: descendants or rivals?

Should we think of digital minds as our descendants, or our rivals?

If they are descendants, we can think of them as children—different from us in important ways, but carrying on the flame.

If they are rivals, well—they are rivals. If they inherit the future, then we have, in some important sense, lost.

If you think that digital minds will inherit the future whether we like it or not, the main way in which this matters is how it affects your attitudes toward this future today. And so perhaps we should make more effort to like it.

writing digital minds

Iain McGilchrist on the left and right hemispheres

The left hemisphere is good at helping us manipulate the world, but not good at helping us to understand it. To just use this bit and then that bit, and then that bit. But the right hemisphere has a kind of sustained, broad, vigilant attention instead of this narrow, focused, piecemeal attention. And it sustains a sense of being, a continuous being, in the world. So, these are very different kinds of attention. And they bring into being for us quite different kinds of a world.

It is not so much what each hemisphere does, it’s the way in which it does it. By which, I don’t mean by what mechanism. I mean, the manner in which it does it. The two halves of the brain have, as we do, different goals, different values, different preferences, different ways of being.

In a nutshell: the left hemisphere has a map of the world, and the right hemisphere sees the terrain that is mapped. So, the right hemisphere is seeing an immensely complex, very hard-to-summarise, nonlinear, deeply embedded, changing, flowing, ramifying world. And in the other—the left-hemisphere take on the world—things are clear, sharp, distinct, dead, decontextualised, abstract, disembodied. And then they have to be put together, as you would put things together like building a machine in the garage.

https://www.econtalk.org/iain-mcgilchrist-on-the-divided-brain-and-the-master-and-his-emissary/

Quote iain mcgilchrist neuroscience mind

Pragmatism, evolution & moral philosophy

Those who grow passionate on one or the other side of arcane and seemingly pointless disputes are struggling with the question of what self-image it would be best for human beings to have. —Richard Rorty

Pragmatism casts philosophy in a different light. It sees philosophy—including moral philosophy—as just another thing that humans do to get by.

That’s to say: it situates philosophy within the glorious indifference of the natural world, the competition and selection, the evolving universe, the dust clouds, the equations of physics.

In general and on average over the long run 1, we see things as good or bad insofar as doing so helps us get by. And “getting by” ultimately means: survive and reproduce. In the long run, your belief-value bundles have to be adaptive.

On this view, moral philosophy is not—as the non-naturalist moral realist would have it—a quest for values that are “correct” independently of the evolutionary environment 2. Strange as it sounds, evolutionary processes are the source of normativity. Philosophers who hope to improve our values need to think about the messy, hard realities of adaptiveness, equilibria and economics at least as much as they think about moral principles we can all (currently) agree on. The former underwrite the latter 3.

Maladaptive belief-value bundles can emerge and persist for short periods, of course 4 5. I once heard the story of the Scottish Highlanders, whose inflexibility of custom kept them in relative poverty while the modernising Southerners became rich. Eventually, economic incentives for the Southerners led to the Highland Clearances, the forceful destruction of the Highlander way of life.

As the environment changes, our values change too, whether we like it or not.


  1. I’m not sure how best to caveat this.↩︎

  2. At this point, the non-naturalist says: “surely pain is always intrinsically bad”, or “pain is bad because of how it feels”. And the naturalist-pragmatist replies: sure, perhaps it’s adaptive to think this way. But it’s not quite the right way to think about things.↩︎

  3. It always seems like the non-naturalists don’t linger enough on the question of why they find certain truths about value to be self-evident.↩︎

  4. It’s almost a tautology to say that maladaptive behaviours don’t last. The key insight is that the reason they don’t last is competition from more adaptive behaviours. If you can prevent competition, you can get away with all sorts of not-maximally-adaptive behaviour. But the more not-maximally-adaptive behaviour you preserve, the more you risk your ability to prevent competition.↩︎

  5. I’m not sure how close we should expect cultures to get to “optimal” adaptiveness, even over the very long run. I guess they usually just approximate local maxima.↩︎

writing pragmatism moral philosophy evolution robin hanson richard rorty value drift

Tyler Cowen on axiology: I see the good as more holistic than additive-aggregative

These days, I see the good as more holistic than additive-aggregative. […] We can make some gross comparisons of better and worse at the macro level, with partial rankings at best, but for many individualized normative comparisons there simply isn’t a right answer.  I view “ranking” as a luxury, occasionally available, rather than an axiomatic postulate which can be used to generate normative comparisons, and thus normative paradoxes, at will.  I see that response as different than allowing or embracing intransitivity across multiple alternatives and in that regard my final position differs from Temkin’s.  Furthermore, in a holistic approach, the pure micro “welfare numbers” used to generate the paradoxical comparisons aren’t necessarily there in the first place but rather they have to be derived from our intuitions about the whole.

https://marginalrevolution.com/marginalrevolution/2012/01/rethinking-the-good-by-larry-temkin.html

quote tyler cowen moral philosophy axiology larry temkin