Notes on Rao on Ooze

Some notes on Fear of Oozification.

I accept the evolutionary picture. So do Yudkowsky, Shulman, Hanson, Carlsmith and Bostrom.

Key disputes:

  1. How capable are we of influencing evolutionary change (both speed and direction)?
  2. Would the results of such efforts be desirable (see e.g. here, here, here and here)?

We clearly have some capacity to influence it (at the social level it’s called governance; at the biological level, homeostasis).

What kinds of governance (a.k.a. holding onto the things we care about) we should go for is both a value question and an empirical question.

On the value question: yes, valuing requires some attachment to the status quo. Yudkowsky, Shulman, Carlsmith and Bostrom (and, I’m fairly sure, Altman, Hassabis and Musk) understand this, and are more into conservative humanism than Rao and Hanson.

Most people, even the transhumanists, prefer conservative humanism, so these values will keep winning until the accelerationist minority gains an overwhelming power advantage (or civilisation collapses).

The accelerationists see humanism as parochial and speciesist. They love Spinoza’s God, and more fully submit to its will.

Metaphysical moral realists like Parfit and Singer may find themselves siding with the accelerationists; it depends on how well human-ish values track what objectively matters (what God wants…).

Joe Carlsmith’s series, Otherness and Control in the Age of AGI, is one of the best things I’ve read on this stuff.


Empirically, these values lie on a spectrum, and neither extreme is sustainable. Max(conservative) means self-defeating fragility. Max(accelerationist) means ooze, because complexity requires local stability, and ooze eventually becomes primordial soup.


New to me was the idea that more powerful technology means selection dynamics at lower levels: when we train AIs we select over matrices, and with nanotech we’ll select over configurations of atoms. And yes, once that ball gets rolling, there’s an explosion of possibilities. Rao seems to think this means a Singleton is unlikely, but I don’t understand why; our attempts at scaffolding might well lead us there.


Rao doesn’t like the Shoggoth meme:

> The shoggoth meme, in my opinion, is an entirely inappropriate imagining of AIs and our relationship to them. It represents AI in a form factor better suited to embodying culturally specific fears than either the potentialities of AI or its true nature. There is also a category error to boot: AIs are better thought of as a large swamp rather than a particular large creature in that swamp wearing a smiley mask.

To defend it: the swamp is the pretrained model. The Shoggoth is the fine-tuned and RLHF’d creature we interact with. The key thing is that the tuned and RLHF’d creature still has many heads; on occasion we’ll be surprised by heads we don’t like.

Writing evolution