Digital Minds or Butlerian Accord? (Part 1)

Two scenarios for 2100 that might be choiceworthy:

  1. Digital Minds: digital minds rule the Earth. There was a peaceful transition of power; humans think the 21st century went well and are optimistic about the future.
  2. Butlerian Accord: biological humans rule the Earth. They have agreed not to develop AI systems that replace them, and to regulate other technologies that greatly increase extinction risk.

Which would be preferable? Would either be acceptable?


Let’s imagine each of these scenarios has come to pass (bracketing the question of how it actually happened).

If you like the Digital Minds scenario, you may think some or all of:

  1. Digital minds will live far better lives than biological humans; there will also be far more of them.
  2. Biological humans are overall better off in this scenario.
  3. Digital minds will have values somewhat (or very) different from our own, and that’s fine.
  4. We should think of digital minds as our descendants, not as an alien species.
  5. It would be good if our descendants become grabby and occupy a large local chunk of the Universe, rather than letting other civilisations take our place. Shifting to “Digital Minds” soon increases the chance of this.

If you dislike the Digital Minds scenario, you may think some or all of:

  1. Digital minds will not be conscious, or otherwise capable of wellbeing.
  2. Digital minds will have values somewhat (or very) different from our own, and those values will be worse, objectively speaking.
  3. Digital minds will have values somewhat (or very) different from our own, and I don’t like that.
  4. We should think of digital minds as an “other” that we dislike.
  5. Digital minds will experience extraordinary suffering, or other kinds of disvalue.

If you like the Butlerian Accord, you may think some or all of:

  1. The human condition is already pretty good at 21st century levels of technology. Same goes for non-human animals.
  2. A human civilisation based on a Butlerian Accord can be stable, and flourish for millennia.
  3. We should shift to “Digital Minds” eventually, but not before we’ve thought long and hard about how to maximise the chance that shift goes well.

If you dislike the Butlerian Accord, you may think some or all of:

  1. We currently live in a Darwinian hellscape; superintelligent digital minds are the only way out of that.
  2. A Butlerian Accord would not last.

What other big reasons are there to like or dislike each scenario?

writing ai futurism