Hilary Greaves on definitions of existential risk

Generally, where “humanity” appears in a definition of existential catastrophe, it is to be read as an abbreviation for “Earth-originating intelligent sentient life”.

[…]

A little more fundamentally, the desideratum is that there exist, throughout the long future, large numbers of high-welfare entities. One way this could come about is if a takeover species is both highly intelligent and highly sentient, and spreads itself throughout the universe (with high-welfare conditions). But other ways are also possible. For example, takeover by non-sentient AI together with extinction of humanity need not constitute an existential catastrophe if the non-sentient AI itself goes on to create large numbers of high-welfare entities of some other kind.

https://globalprioritiesinstitute.org/wp-content/uploads/Concepts-of-existential-catastrophe-Hilary-Greaves.pdf

Tags: quote, AI, catastrophic risk