Hilary Greaves on definitions of existential risk

Generally, where “humanity” appears in a definition of existential catastrophe, it is to be read as an abbreviation for “Earth-originating intelligent sentient life”.

[…]

A little more fundamentally, the desideratum is that there exist, throughout the long future, large numbers of high-welfare entities. One way this could come about is if a takeover species is both highly intelligent and highly sentient, and spreads itself throughout the universe (with high-welfare conditions). But other ways are also possible. For example, takeover by non-sentient AI together with extinction of humanity need not constitute an existential catastrophe if the non-sentient AI itself goes on to create large numbers of high-welfare entities of some other kind.

https://globalprioritiesinstitute.org/wp-content/uploads/Concepts-of-existential-catastrophe-Hilary-Greaves.pdf

Quote ai catastrophic risk

September assorted links

Understanding LLMs:

AI policy:

AI alignment:

Philosophy:

Misc:

Links

Matt Ridley on innovation vs invention

Invention is the coming up with a prototype of a new device or a new social practice. Innovation is the business of turning a new device into something practical, affordable and reliable that people will want to use and acquire. It’s the process of driving down the price; it’s the process of driving up the reliability and the efficiency of the device; and it’s the process of persuading other people to adopt it, too.

Quote entrepreneurship business

Richard Danzig on technology roulette and the U.S. military

This report recognizes the imperatives that inspire the U.S. military’s pursuit of technological superiority over all potential adversaries. These pages emphasize, however, that superiority is not synonymous with security. Experience with nuclear weapons, aviation, and digital information systems should inform discussion about current efforts to control artificial intelligence (AI), synthetic biology, and autonomous systems. In this light, the most reasonable expectation is that the introduction of complex, opaque, novel, and interactive technologies will produce accidents, emergent effects, and sabotage. In sum, on a number of occasions and in a number of ways, the American national security establishment will lose control of what it creates. 

A strong justification for our pursuit of technological superiority is that this superiority will enhance our deterrent power. But deterrence is a strategy for reducing attacks, not accidents; it discourages malevolence, not inadvertence. In fact, technological proliferation almost invariably closely follows technological innovation. Our risks from resulting growth in the number and complexity of interactions are amplified by the fact that proliferation places great destructive power in the hands of others whose safety priorities and standards are likely to be less ambitious and less well funded than ours. 

Accordingly, progress toward our primary goal, superiority, should be expected to increase rather than reduce collateral risks of loss of control. This report contends that, unfortunately, we cannot reliably estimate the resulting risks. Worse, there are no apparent paths for eliminating them or even keeping them from increasing. The benefit of an often referenced recourse, keeping humans “in the loop” of operations involving new technologies, appears on inspection to be of little and declining benefit.

https://www.cnas.org/publications/reports/technology-roulette

Quote richard danzig catastrophic risk

Jerry Z Muller on left and right-wing totalitarianism in the early 20th century

The choice of the proletariat rather than the Volk or nation reflected a deeper divergence between intellectuals of the left and right, a divergence whose origin lay in the eighteenth century. It hinged upon differing answers to the question “What is the source of the purposes that men ought to hold in common, and of the institutions that embody those purposes?”

Those who believed that the ultimate source of such purposes lay in “reason,” which was capable in principle of providing answers for men and women in all times and places, arrayed themselves as the party of Enlightenment. Despite great internal divergences, it was the universalist, cosmopolitan thrust of its thought that united what Peter Gay has called “the party of humanity”–a designation that recaptures the self-understanding of the philosophes. Though it may have shifted the locus of reason in the direction of the methods of natural science, the Enlightenment maintained the far older belief in a rational universe with a necessary harmony of values accessible to human reason. The theory and practice of enlightened absolutism, revolutionary republicanism, and the application of the Code Napoleon to non-French nations within the Napoleonic empire all shared the premise that because men were fundamentally the same everywhere, there were universal goals and universalizable institutions for their pursuit. In its universalism–its commitment to a proletariat that was to know no fatherland and whose interests were held to be identical with those of humanity–Marxism and its totalitarian communist variant were intellectual descendants of the Enlightenment. Indeed, it was the premise and promise of universalism that made communism disproportionately attractive to intellectuals who sprang from ethnic and religious minorities.

The intellectuals who placed their hopes on totalitarian movements of the right usually descended from what Isaiah Berlin has termed “the Counter-Enlightenment.” Its advocates formed no party; what they shared was a skepticism toward “the central dogma of the Enlightenment,” the belief that the ultimate ends of all men at all times were identical and could be apprehended by universal reason. The orientation of the thinkers of the Counter-Enlightenment was usually historicist and particularist. They regarded the attempt to discover universal standards of conduct as epistemologically flawed, and the attempt to impose such standards as a threat to the particular historical cultures from which individuals derived a sense of purpose and society a sense of cohesion. The thinkers of the Counter-Enlightenment regarded the variety of existing historical cultures as both inescapable and intrinsically valuable and suggested that the multiplicity of historical cultures embodied values that were incommensurable or equally valid. For thinkers of the German Counter-Enlightenment such as Johann Gottfried von Herder and Justus Möser, cosmopolitanism was the shedding of all that makes one most human, most oneself. Although by no means uniform in their politics, they all resisted the attempt at a rational reorganization of society based upon purportedly universal and rationalist ideals.

In countering the claims of the Enlightenment, the most original thinkers of the Counter-Enlightenment enunciated claims that represented a major departure from the central stream of the Western tradition, which had held that problems of value were in principle soluble and soluble with finality. The Counter-Enlightenment, by contrast, maintained that the traditions which gave a group its identity and which were expressed in its culture were not themselves rationally grounded and could not always be rationally justified. The value of a culture or institution was conceived of as deriving from its history, from its role in the development of a particular group. Proponents of the Counter-Enlightenment such as Möser or Burke revived the argument of the Sophists that moral order was a product of convention that varied from group to group. To this they added the perception (pioneered by David Hume) that emotional attachment to an institution—what Burke called “reverence”—was often a product of the institution’s longevity. This social-psychological perception lay at the heart of the argument for the functional value of continuity, a theme that provided the basis for numerous variations among later generations of conservative thinkers.

—The Other God That Failed (1988)

Quote jerry z muller conservatism politics

Alien invaders, or children?

If we think of digital minds as alien invaders, we fight them to our last breath.[1]

If we think of digital minds as our children, we would raise them with care and wish them well. And we could expect them to wish us well too.

Which is the better frame?

I think it mainly depends on:

  1. How much they share our values.
  2. Whether they are capable of living great, flourishing lives. Are they conscious?

Those who prefer the alien frame may also think that biological relation matters. Why?

  1. Loyalty to one’s species.
  2. Conservatism: ways of life are good just because they exist; therefore it’s bad if they are lost, and their replacement by better things doesn’t automatically make up for that.

What are the other cruxes?


[1] A consequentialist who holds an impartial theory of value might not fight them: they might think that letting the aliens win would create a more valuable future.

Writing ai