Eliezer Yudkowsky (1999) on a world with nanotechnology

Unless you've heard of nanotechnology, it's hard to appreciate the magnitude of the changes we're talking about.  Total control of the material world at the molecular level is what the conservatives in the futurism business are predicting.

Taking 10^17 ops/sec as the figure for the computing power used by a human brain, and using optimized atomic-scale hardware, we could run the entire human race on one gram of matter, running at a rate of one million subjective years every second.
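A quick sanity check on the orders of magnitude in that claim. Only the 10^17 ops/sec figure comes from the quote; the population and seconds-per-year numbers below are my own assumptions:

```python
# Back-of-envelope check of Yudkowsky's claim. Only ops_per_brain comes
# from the quote; population and seconds_per_year are assumed figures.
ops_per_brain = 1e17          # ops/sec to run one human brain in real time
population = 1e10             # ~10 billion people (assumed order of magnitude)
seconds_per_year = 3.15e7     # roughly 365 * 24 * 3600

# "One million subjective years every second" means each brain runs
# 1e6 * 3.15e7 subjective seconds per wall-clock second.
subjective_seconds = 1e6 * seconds_per_year

total_ops_per_sec = ops_per_brain * population * subjective_seconds
print(f"{total_ops_per_sec:.2e}")  # ~3e40 ops/sec asked of that one gram
```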

Frequently Asked Questions about the Meaning of Life

Tom Holland on Paul’s Letter to the Galatians

COWEN: Which Gospel do you view as most foundational for Western liberalism and why?

HOLLAND: I think that that is a treacherous question to ask because it implies that there would be a coherent line of descent from any one text that can be traced like that. I think that the line of descent that leads from the Gospels and from the New Testament and from the Bible and, indeed, from the entire corpus of early Christian texts to modern liberalism is too confused, too much of a swirl of influences for us to trace it back to a particular text.

If I had to choose any one book from the Bible, it wouldn’t be a Gospel. It would probably be Paul’s Letter to the Galatians because Paul’s Letter to the Galatians contains the famous verse that there is no Jew or Greek, there is no slave or free, there is no man or woman in Christ. In a way, that text — even if you bracket out and remove the “in Christ” from it — that idea that, properly, there should be no discrimination between people of different cultural and ethnic backgrounds, based on gender, based on class, remains pretty foundational for liberalism to this day.

I think that liberalism, in so many ways, is a secularized rendering of that extraordinary verse. But I think it’s almost impossible to avoid metaphor when thinking about what the relationship is of these biblical texts, these biblical verses to the present day. I variously compared Paul, in particular in his letters and his writings, rather unoriginally, to an acorn from which a mighty oak grows.

But I think actually, more appropriately, of a depth charge released beneath the vast fabric of classical civilization. And the ripples, the reverberations of it are faint to begin with, and they become louder and louder and more and more disruptive. Those echoes from that depth charge continue to reverberate to this day.

Otto Petras on the conditions for religious awe

A religion that one understands is, for him who understands, no longer a religion. For by comprehending it, he stands above it; he surveys its conditions and possibilities, and to the extent that he does so he no longer feels like the unconditional object of religious demands. One can be possessed and awe-struck only as long as one does not understand how and why that occurs.

Richard Meadows & Nassim Taleb on FU money

Humphrey Bogart used to keep a $100 bill in his dresser drawer at all times—a decent chunk of change in the 1920s. He referred to it as his ‘fuck-you money’, because it meant he’d never be forced to take a crappy part. According to Bogie, the only good reason for making money was “so you can tell any son-of-a-bitch in the world to go to hell”.

Richard Meadows


A sum large enough to get most, if not all, of the advantages of wealth (the most important one being independence and the ability to only occupy your mind with matters that interest you) but not its side effects, such as having to attend a black-tie charity event and being forced to listen to a polite exposition of the details of a marble-rich house renovation.

Money buys freedom: intellectual freedom, freedom to choose who you vote for, to choose what you want to do professionally. But having what I call “fuck you” money requires a huge amount of discipline. The minute you go a penny over, then you lose your freedom again.

— Nassim Taleb

The last homily of Pope Pius XIII

Our mouths are filled with the word "love".

But I, before anyone else, didn't know how to define it.

Our mouths are filled with the word "beauty".

But I, before anyone else, didn't know how to receive it.

For this, I ask you forgiveness.

Please, forgive me.

At times we confound love with madness.

Beauty with ecstasy.

History has repeated itself.

Madness and ecstasy have once again proven to be irresistible temptations, but they always end the way they did on Ventotene.

With unjust death.

In this case, of a good and innocent priest.

There is a life of happiness to be found in the sphere of gentleness, kindness, mildness, lovingness.

We must learn to be in the world.

And the Church must contemplate the idea of opening up to the love that is possible, in order to fight against the love that is aberrant.

All this, John Paul III, with great humility, calls "the middle way."

In the past few days I have understood.

It's not the middle way.

It is the way.

Ever since I came back, you've been asking yourselves all sorts of questions.

Is he the father or the son?

Is he God or the Holy Spirit?

Is he man or is he Jesus?

Did he wake up or did he rise from the dead?

Is he a saint or is he an imposter?

Is he Christ or is he the Antichrist?

Is he alive or is he dead?

It doesn't matter.

You know what is so beautiful about questions?

It's that we don't have the answers.

In the end, only God has the answers.

They are his secret.

God's secret, which only He knows.

That is the mystery in which we believe.

And that is the mystery which guides our conscience.

And now I would like to come down among you, and do what I have wanted to do since the first moment: embrace you, one by one.

Marc Andreessen on the heroes we're allowed to have

The anti-hero is the portrait of the Nietzschean superhero that we are allowed to have. Tony Soprano, Walter White, Don Draper. We can have someone who does Great Things, so long as that person is fundamentally bad by the standards of modern morality.

We are not allowed to have the full version of the Nietzschean superman doing something outstanding. We're not allowed Napoleon figures, the building of the pyramids, Beethoven, or even the person who built the transcontinental railroad, the car industry, that sort of thing.

The full Nietzschean superman is the person who says 'I really am going to rule the world, and rule it much better'. Those narratives are gone. They're too scary. They're absolutely frightening, because if we rediscover that kind of morality it would upend our entire order.

Email to Tyler on Bernard Williams & effective altruism

In your MacAskill interview, and again in the St Andrews talk, I heard you channeling Bernard Williams on Philosophy as a Humanistic Discipline and especially "The Human Prejudice".

I agree that Williams on philosophy and impartiality is an important message for EA. I pushed this line in conversations with Will MacAskill and others in 2015, and with several other Oxford figures since then. I'm surely not the ideal advocate, but in the replies I mostly heard a lot of "ugh, Bernard" followed by weak arguments against superficial misreadings of his work. People seemed very much in the mode of "devalue and dismiss".

My best EA Forum post is also my least popular:

https://forum.effectivealtruism.org/posts/G6EWTrArPDf74sr3S/bernard-williams-ethics-and-the-limits-of-impartiality

Williams' low status within EA is surprising given how seriously Derek Parfit took him as a peer. I understand that Williams was often seen as using non-kosher methods and unkind remarks in his philosophical writing and conversation, and was intensely disliked by some of his peers. So I suspect that much of his neglect is driven by residual animosity in the Oxford crowd. But they ignore this kind of thing and just "take the ideas seriously"... right...?

There are some notable exceptions. For example, Thomas Moynihan is somewhat associated with the Oxford EA scene, and appropriately rates Bernard Williams. Unsurprisingly, Tom has a background in "continental" philosophy.

You've not blogged much about Williams. How about it? E.g.

(1) Was Williams a pragmatist in denial, per Rorty's review of Truth and Truthfulness? Why did he resist Rorty?

(2) What prioritisation errors are made by those who go too far with impartiality? 

On (2): if EAs stopped going "too far" with impartiality, I think we'd see the EA portfolio shift a bit towards catastrophic risk and away from existential risk. The current strong focus on x-risk can be seen as another form of the 51:49 bet.

A couple years ago one of the more influential EAs told me that rejecting the 51:49 bet is a form of egoism. We should not care about our personal chances of survival: we should just follow the rule that maximises EV across all possible worlds. I replied that ecological rationality beats axiomatic rationality in the world I care about. But if you think impartial reasons are the only reasons that count, you can't justify your "arbitrary" care for this particular world over others.
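For concreteness: the 51:49 bet, as I understand it, is a gamble with a 51% chance of doubling the value of the world and a 49% chance of destroying it. A minimal sketch, with illustrative payoff numbers:

```python
# Illustrative 51:49 bet: p=0.51 of doubling the world, p=0.49 of losing it.
current_value = 1.0
p_win = 0.51

ev_of_bet = p_win * (2 * current_value) + (1 - p_win) * 0.0  # = 1.02

# A pure EV-maximiser takes the bet (1.02 > 1.0), and takes it again on
# every offer, even though the chance of ruin after n rounds is 1 - 0.51**n.
print(ev_of_bet > current_value)  # True
```

Iterating the rule sends the probability that this particular world survives toward zero even as expected value grows, which is the force of the ecological-rationality reply.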

And with that—and your remarks on the useful generativity of a mistake taken seriously—we're back to Nietzsche's remarks on Plato:

It seems that in order to inscribe themselves upon the heart of humanity with everlasting claims, all great things have first to wander about the earth as enormous and awe-inspiring caricatures: dogmatic philosophy has been a caricature of this kind--for instance, the Vedanta doctrine in Asia, and Platonism in Europe. Let us not be ungrateful to it, although it must certainly be confessed that the worst, the most tiresome, and the most dangerous of errors hitherto has been a dogmatist error--namely, Plato's invention of Pure Spirit and the Good in Itself. But now when it has been surmounted, when Europe, rid of this nightmare, can again draw breath freely and at least enjoy a healthier--sleep, we, WHOSE DUTY IS WAKEFULNESS ITSELF, are the heirs of all the strength which the struggle against this error has fostered. It amounted to the very inversion of truth, and the denial of the PERSPECTIVE--the fundamental condition--of life, to speak of Spirit and the Good as Plato spoke of them; indeed one might ask, as a physician: "How did such a malady attack that finest product of antiquity, Plato? Had the wicked Socrates really corrupted him? Was Socrates after all a corrupter of youths, and deserved his hemlock?" But the struggle against Plato, or--to speak plainer, and for the "people"--the struggle against the ecclesiastical oppression of millenniums of Christianity (FOR CHRISTIANITY IS PLATONISM FOR THE "PEOPLE"), produced in Europe a magnificent tension of soul, such as had not existed anywhere previously; with such a tensely strained bow one can now aim at the furthest goals.

Peter

P.S. Nietzsche's thoughts on effective altruism, according to ChatGPT.

Jonathan Bi on how to live with a Girardian worldview

I compared Girard to my Virgil in the sense that he was able to rescue me through Hell. He was able to show me how to purge milder forms of perversion.

But, just as Virgil couldn't take Dante all the way to heaven, neither could Girard. Girard kind of just retreats.

What I'm about to share with you is mostly my own creative interpretations on top of Girard.

I think there are in general two solutions, once you've identified that there's a metaphysical and there's a physical desire. One wing--and I think this is what Girard leans to; this is the Buddhist as well as the Girardian way--is to say this metaphysical desire, this desire for being, is completely perverse. It's _always_ perverse. From Girard's perspective, that's because it's essentially a desire to be God. This is why it's satanic. You're desiring persistence; you're desiring power; you're desiring reality. If you push those far enough, those are the metaphysical qualities of the Judeo-Christian God. So, Girard actually sees metaphysical desire as the original sin, as the satanic drive to rival God in his metaphysical splendor.

And the Buddhists--right--we don't have to go into that, but long story short, these metaphysical qualities are not possible in the world. Emptiness is what permeates the world. So, this is a fundamentally wrong sort of desire.

So, for the Christians and Buddhists, the way to good health is to completely get rid of metaphysical desire, to be concerned only with the object of physical desire.

There's another strand of thinking, however--probably most popular amongst the Germans, in Hegel, and in Plato actually, which we'll talk about--which says there actually is a healthy way to exist in society. And the way to do so, long story short, is for your metaphysical and your physical desires to align.

That is to say: if you really like to do philosophy, don't hang out with a bunch of people who are industrialists. Hang out with a bunch of philosophers, so that the somewhat partial spectator, as we've discussed, will naturally _align_ with your normative values, with your physical desires, and thus you'll receive recognition and a form of reality.

https://www.econtalk.org/johnathan-bi-on-mimesis-and-rene-girard/

Nick Bostrom on differential technological development

The Principle of Differential Technological Development

Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies (Bostrom, 2002).

The principle of differential technological development is compatible with plausible forms of technological determinism. For example, even if it were ordained that all technologies that can be developed will be developed, it can still matter when they are developed. The order in which they arrive can make an important difference – ideally, protective technologies should come before the destructive technologies against which they protect; or, if that is not possible, then it is desirable that the gap be minimized so that other countermeasures (or luck) may tide us over until robust protection becomes available. The timing of an invention also influences what sociopolitical context the technology is born into. For example, if we believe that there is a secular trend toward civilization becoming more capable of handling black balls, then we may want to delay the most risky technological developments, or at least abstain from accelerating them.

Nick Bostrom on "turnkey totalitarianism"

Developing a system for turnkey totalitarianism means incurring a risk, even if one does not intend for the key to be turned.

One could try to reduce this risk by designing the system with appropriate technical and institutional safeguards. For example, one could aim for a system of ‘structured transparency’ that prevents concentrations of power by organizing the information architecture so that multiple independent stakeholders must give their permission in order for the system to operate, and so that only the specific information that is legitimately needed by some decision-maker is made available to her, with suitable redactions and anonymization applied as the purpose permits. With some creative mechanism design, some machine learning, and some fancy cryptographic footwork, there might be no fundamental barrier to achieving a surveillance system that is at once highly effective at its official function yet also somewhat resistant to being subverted to alternative uses.
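A toy sketch of the multiple-stakeholder idea. All names, fields, and the sign-off set below are invented for illustration; Bostrom's proposal involves real cryptography and mechanism design, not a dictionary filter:

```python
# Toy model of "structured transparency": information is released only when
# several independent stakeholders sign off, and even then only the fields
# the requester legitimately needs. Everything here is illustrative.

REQUIRED_SIGNOFFS = {"court", "oversight_board", "data_custodian"}

def release(record, needed_fields, signoffs):
    """Return a redacted view of `record`, or nothing without full sign-off."""
    if not REQUIRED_SIGNOFFS <= set(signoffs):
        return {}  # any missing stakeholder blocks the release
    # Redaction: the decision-maker sees only what they legitimately need.
    return {k: v for k, v in record.items() if k in needed_fields}

record = {"name": "Alice", "location": "Zone 4", "health_data": "private"}

print(release(record, {"location"}, {"court", "oversight_board"}))
# {} -- the data custodian hasn't signed off
print(release(record, {"location"}, REQUIRED_SIGNOFFS))
# {'location': 'Zone 4'} -- and nothing more than was needed
```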

How likely this is to be achieved in practice is of course another matter, which would require further exploration. Even if a significant risk of totalitarianism would inevitably accompany a well-intentioned surveillance project, it would not follow that pursuing such a project would increase the risk of totalitarianism. A relatively less risky well-intentioned project, commenced at a time of comparative calm, might reduce the risk of totalitarianism by preempting a less-well-intentioned and more risky project started during a crisis. But even if there were some net totalitarianism-risk-increasing effect, it might be worth accepting that risk in order to gain the general ability to stabilize civilization against emerging Type-1 threats (or for the sake of other benefits that extremely effective surveillance and preventive policing could bring).

Naturalism, pragmatism, impartiality

Joshua Greene's naturalism

Joshua Greene sees morality as part of the natural world [^1]. It emerges from evolution, because certain kinds of cooperation are adaptive.

[^1]: See Moral Tribes or his interview with Sean Carroll.

In his breakdown:

  • Morality helps individuals cooperate [^2]. Codes of morality operate within groups, but vary between groups.
  • "Metamorality" helps groups cooperate. For groups to cooperate, they need to find a common currency, even if they have quite different codes of morality.

[^2]: "A set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of cooperation." Moral Tribes p.23

On this picture, both morality and metamorality stem from a practical problem—how can creatures like us flourish, get along, and have many descendants?

What Greene calls "modern ethics" is mostly concerned with the question: how can we improve our metamorality?

Greene, following [Jeremy Bentham](https://notes.pjh.is/=Jeremy Bentham), thinks that the best candidate for a common currency to support metamorality is: quality of experience. If you ask people, 5-whys style, why they care about something, it often comes down to the quality of their experience—crudely: whether it amounts to pleasure or suffering, an experience they would gladly choose or strive to avoid.

Naturalism and impartiality

Where does the ideal of impartiality fit into this picture?

Moral philosophers and people associated with the effective altruism movement often characterise impartiality as:

Giving equal weight to everyone's interests.

This is often cashed out in negative terms, as:

Trying to avoid giving undue weight to particular interests based on (putatively) non-morally-relevant factors such as race, gender, proximity in space and time, nationality, species membership, or substrate.

Usually, such lists are motivated by a positive claim about what kinds of things are morally-relevant, such as capacity for wellbeing, or sentience.

Let's bracket the question of what our positive claim, and our list of morally-irrelevant factors, should look like. Let's instead zoom in on the "undue weight" part.

At the level of within-group morality, one can tell a story about how norms of (relatively) equal treatment and fairness could emerge as stable equilibria, grounded upon (relatively) equal distributions of power in forager societies. [^3]

[^3]: People seem to think that before the agricultural revolution, human tribes were much more egalitarian. C.f. Hanson on forager vs farmer morality.

At the level of between-group metamorality, we can tell a similar story: groups will be reluctant to cooperate if they think their interests are unduly neglected (though they may in fact accept a bunch of unfair treatment, if their other options—e.g. conflict—seem worse).

So how do we cash out "undue weight" in naturalistic terms? Well, the obvious candidate is: non-adaptive.

In moral philosophy and effective altruism, the "impartiality story" is not told as though it's based on evolutionary logic of adaptive bargains and long-run equilibria. The story is not "we should be impartial in order to solve our co-operation problems and maximise reproductive fitness". But rather, the idea is "we should impartially promote the good because... well... that's the right thing to do".

This seems like a case where your metaethics, or the story you tell about what we're up to when we do moral philosophy, rather matters.

Bernard Williams, discussing the limits to impartiality, approvingly quotes Max Stirner:

The tiger who attacks me is in the right, and so am I when I strike him down. I defend against him not my right, but myself.

—Max Stirner

Williams imagines that humanity is threatened by an alien civilisation, and suggests that in such a situation, it would not be appropriate to ask:

(a) What outcome would maximise value according to our best impartial axiology?

For Williams as for Greene, our impartial axiology is ultimately a tool that's supposed to serve us, not an external imperative that we are supposed to serve. Thoughts to the contrary are "a relic of a world not yet fully disenchanted", i.e. the artefact of a non-secular worldview [^4].

[^4]: Derek Parfit, and other non-naturalists, would disagree, but despite some hunting, I've not found arguments for non-naturalism that strike me as persuasive.

Rather, he thinks we should ask:

(b) What outcome would be best for us (or according to us)?

The choice between (a) and (b) isn't just academic: we face situations where we must decide which of these questions to ask now, soon, and in the long-run:

  1. (now) Western liberalism vs Political Islam; West vs China; etc
  2. (soon?) Digital Minds
  3. (long-run) Alien civilisations

Naturalism vs non-naturalism; pragmatism and normativity

For the naturalist, moral and metamoral questions are quite empirical. In the current environment, which ideals actually work?

(The normativity within "actually work" boils down to "adaptive fitness", whether we recognise this or not.)

As Bostrom & Shulman reminded us recently [^5], we can reasonably reject the insistence of many philosophers that there's a fundamental distinction between descriptive and normative claims. After reflecting on Pragmatism last autumn, I became more confident that this distinction is not, in fact, fundamental. Rather, things are blurry—there's no such thing as a purely descriptive claim: when we begin our enquiry we're always already bound up in the normative project of being humans trying to get by in the world, with various aims and agendas baked in [^6].

[^6]: Nietzsche was also very clear on this.

On the naturalistic perspective, we started out trying to solve a practical problem—how can we get along with groups with which, on the surface, we don't have much in common—and ended up mistakenly thinking of ourselves as doing something else (seeking the (meta)moral truth, then following that). If that's our self-image, we won't be so concerned with empirics: we'll just try whatever our culture and our moral philosophers come up with, and if it "works" in the naturalistic sense, all good. Otherwise, we'll get wiped out, perhaps gradually, perhaps quickly.

The non-naturalist impartialist thinks there is an external, non-human standard, so they are likely to be more interested in revisionary or revolutionary maximisation—maximisation of whatever they think is valuable, independently of humans—and chafing against the constraints imposed by being the kinds of creatures we find ourselves to be.

On the naturalistic perspective, we'll be more concerned with thinking about what flavours of impartiality are going to work well for us over the long run. We'd be more inclined to, like the pragmatist, keep coming back to the question: what problem are we actually trying to solve here?

Digital minds: descendants or rivals?

Should we think of digital minds as our descendants, or our rivals?

If they are descendants, we can think of them as children—different from us in important ways, but carrying on the flame.

If they are rivals, well—they are rivals. If they inherit the future, then we have, in some important sense, lost.

If you think that digital minds will inherit the future whether we like it or not, the main way in which this matters is how it affects your attitudes toward this future today. And so perhaps we should make more effort to like it.

Iain McGilchrist on the left and right hemispheres

The left hemisphere is good at helping us manipulate the world, but not good at helping us to understand it. To just use this bit and then that bit, and then that bit. But the right hemisphere has a kind of sustained, broad, vigilant attention instead of this narrow, focused, piecemeal attention. And it sustains a sense of being, a continuous being, in the world. So, these are very different kinds of attention. And they bring into being for us quite different kinds of a world.

It is not so much what each hemisphere does, it's the way in which it does it. By which, I don't mean by what mechanism. I mean, the manner in which it does it. The two halves of the brain have, as we do, different goals, different values, different preferences, different ways of being.

In a nutshell: the left hemisphere has a map of the world, and the right hemisphere sees the terrain that is mapped. So, the right hemisphere is seeing an immensely complex, very hard-to-summarise, nonlinear, deeply embedded, changing, flowing, ramifying world. And in the other—the left-hemisphere take on the world—things are clear, sharp, distinct, dead, decontextualised, abstract, disembodied. And then they have to be put together, as you would put things together like building a machine in the garage.

https://www.econtalk.org/iain-mcgilchrist-on-the-divided-brain-and-the-master-and-his-emissary/

Pragmatism, evolution & moral philosophy

Those who grow passionate on one or the other side of arcane and seemingly pointless disputes are struggling with the question of what self-image it would be best for human beings to have. —Richard Rorty

Pragmatism casts philosophy in a different light. It sees philosophy—including moral philosophy—as just another thing that humans do to get by.

That's to say: it situates philosophy within the glorious indifference of the natural world, the competition and selection, the evolving universe, the dust clouds, the equations of physics.

In general and on average over the long run [^1], we see things as good or bad insofar as doing so helps us get by. And "getting by" ultimately means: survive and reproduce. In the long run, your belief-value bundles have to be adaptive.

On this view, moral philosophy is not—as the non-naturalist moral realist would have it—a quest for values that are "correct" independently of the evolutionary environment [^2]. Strange as it sounds, evolutionary processes are the source of normativity. Philosophers who hope to improve our values need to think about the messy, hard realities of adaptiveness, equilibria and economics at least as much as they think about moral principles we can all (currently) agree on. The former underwrite the latter [^3].

Maladaptive belief-value bundles can emerge and persist for short periods, of course [^4] [^5]. I once heard the story of the Scottish Highlanders, whose inflexibility of custom kept them in relative poverty while the modernising Southerners became rich. Eventually, economic incentives for the Southerners led to the Highland Clearances, the forceful destruction of the Highlander way of life.

As the environment changes, our values change too, whether we like it or not.

https://upload.wikimedia.org/wikipedia/commons/8/86/Vuiamor2.jpg

[^1]: I'm not sure how best to caveat this.

[^2]: At this point, the non-naturalist says: "surely pain is always intrinsically bad", or "pain is bad because of how it feels". And the naturalist-pragmatist replies: sure, perhaps it's adaptive to think this way. But it's not quite the right way to think about things.

[^3]: It always seems like the non-naturalists don't linger enough on the question of why they find certain truths about value to be self-evident.

[^4]: It's almost a tautology to say that maladaptive behaviours don't last. The key insight is that the reason they don't last is competition from more adaptive behaviours. If you can prevent competition, you can get away with all sorts of not-maximally-adaptive behaviour. But the more not-maximally-adaptive behaviour you preserve, the more you risk your ability to prevent competition.

[^5]: I'm not sure how close we should expect cultures to get to "optimal" adaptiveness, even over the very long run. I guess they usually just approximate local maxima.

Tyler Cowen on axiology: I see the good as more holistic than additive-aggregative

These days, I see the good as more holistic than additive-aggregative. [...] We can make some gross comparisons of better and worse at the macro level, with partial rankings at best, but for many individualized normative comparisons there simply isn’t a right answer.  I view “ranking” as a luxury, occasionally available, rather than an axiomatic postulate which can be used to generate normative comparisons, and thus normative paradoxes, at will.  I see that response as different than allowing or embracing intransitivity across multiple alternatives and in that regard my final position differs from Temkin’s.  Furthermore, in a holistic approach, the “pure micro welfare numbers” used to generate the paradoxical comparisons aren’t necessarily there in the first place but rather they have to be derived from our intuitions about the whole.

https://marginalrevolution.com/marginalrevolution/2012/01/rethinking-the-good-by-larry-temkin.html

Reading Robin Hanson

Tyler Cowen characterises Robin Hanson and Thomas Malthus as "thinkers of constraints". I read them both while wondering: what constraints apply to moral philosophy?

In This is the Dream Time, Robin Hanson claims that we are living through an unusual period of abundance. Usually, in the natural world, when the available resources grow, population grows too, so after a brief period of abundance, most members of the species revert to subsistence level. Since the industrial revolution, the resources available to humanity have been growing much faster than population, and so, Hanson suggests, we should think of ourselves as living through one of these unusual periods of abundance. In such periods, competitive pressures are eased, and we can sustain all sorts of belief-behaviour packages that do not maximise our rate of reproduction.
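The dynamic Hanson describes can be sketched in a few lines (all parameters below are invented for illustration): a resource windfall briefly lifts income per head far above subsistence, then population growth claws it back down.

```python
# Toy Malthusian model: population grows while income per head exceeds
# subsistence, eroding a one-off resource windfall. Parameters are invented.

resources = 100.0
population = 100.0
SUBSISTENCE = 1.0    # income per head needed to just get by
GROWTH_RATE = 0.05   # population growth rate when above subsistence

resources *= 10  # the "dreamtime": a sudden 10x windfall

incomes = []
for _ in range(200):
    income_per_head = resources / population
    incomes.append(income_per_head)
    if income_per_head > SUBSISTENCE:
        population *= 1 + GROWTH_RATE

print(incomes[0])             # 10.0 -- a brief period of abundance
print(round(incomes[-1], 2))  # 0.96 -- back near subsistence
```

Capping GROWTH_RATE at zero keeps income per head at 10 forever, which is Hanson's competition-versus-governance choice in miniature.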

How's this for a prediction:

Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialed to the max.

In the future, we should expect population and wealth growth rates to converge again, and for most people to once again live at subsistence levels—unless we co-ordinate to constrain the reproduction rate.

Hanson predicts that our descendants will eventually explicitly endorse maximising their reproduction rate as a value. The basic thought:

Biological evolution selects roughly for creatures that do whatever it takes to have more descendants in the long run. When such creatures have brains, those brains are selected for having supporting habits. And to the extent that such brains can be described as having beliefs and values that combine into actions via expected utility theory, then these beliefs and values should be ones which are roughly behaviorally-equivalent to the package of having accurate beliefs, and having values to produce many descendants (relative to rivals). [...] with sufficient further evolution, our descendants are likely to directly and abstractly know that they simply value more descendants.

During a conversation with Agnes Callard, Hanson says:

I do tend to think natural selection, or selection, will just be a continuing force for a long time. And the main alternative is governance. I actually think one of the main choices that we will have, and the future will have, is the choice between allowing competition or replacing it with governance.

The question of where and how to push back against the logic of competition and selection will be central over the long-term. Co-ordinating humans on earth to make such decisions will be hard, but perhaps not impossible. Co-ordinating with other civilisations from other galaxies... seems very hard, but perhaps not impossible.

On this perspective, there's a direct tradeoff between sustaining non-adaptive values we care about over the short-term and sustaining our existence over the long-term. Invest too much in non-adaptive values, and the long-term viability of your group will be threatened by another group that is less invested in these values.

Attitudes towards this evolutionary perspective vary. Hanson does not consider this model of things "that dark", though he concedes it is not "as bright as the unicorns and fairies that fill dream-time visions". In fact, Hanson is concerned about a future where our descendants restrain competitive dynamics too much:

When I try to do future analysis one of the biggest contrary assumptions or scenarios that I focus on is: what if we end up creating a strong world government that strongly regulates investments, reproduction and other sorts of things, and thereby prevents the evolutionary environment in which the evolutionary analysis applies. And I’m very concerned about that scenario. That is my best judgement of our biggest long term risk […] the creation of a strong civilisation-wide government that is going to be wary of competition and wary of allowing independent choices and probably wary of allowing interstellar colonisation. That is, this vast expansion into the universe could well be prevented by that.

Eliezer Yudkowsky and Carl Shulman, by contrast, are more keen on the idea of replacing competition with governance, and I think Nick Bostrom is too.

Reading Hanson, and reading more about Pragmatism, has left me with a greater awareness that, over the long run, value systems need to promote survival, reproduction and resource accumulation, or else reliably prevent competition in these domains. If they don't, they will be replaced with those that do.

If you're in the business of reflecting on how we should change our values, you probably want to bear this in mind.

One option is to relax the longevity demand. We could settle for pursuing things we care about over the short run, and accept that in the long run, things will come to seem weird and bad by our current lights (but hopefully fine to our (very different) descendants).

Robin Hanson on his methods

My usual first tool of analysis is competition and selection.

To predict what rich creatures do, you need to know what they want. To predict what poor creatures do, you just need to know what they need to do to survive.

Looking back through history it is clear that humanity has not been driving the train. There has been this train of progress or change and it has been a big fast train, especially lately, and it is making enormous changes all through the world but it is not what we would choose if we sat down and discussed it or voted. We just don't have a process for doing that.

Whatever processes that changed things in the past will continue. So I can use those processes to predict what will happen. I am assuming we will continue to have a world with many actions being taken for local reasons as they previously were. But that's a way to challenge my Age of Em hypothesis: you can say no, we will between now and then acquire an ability to foresee the consequences of such changes and to talk together and to vote together on do we want it, and we will have the ability to implement such choices and that will be a change in the future that will prevent the Age of Em.

https://notunreasonable.com/2022/03/21/robin-hanson-on-distant-futures-and-aliens%ef%bf%bc/