Digital minds: descendants or rivals?
Should we think of digital minds as our descendants, or our rivals?
If they are descendants, we can think of them as children—different from us in important ways, but carrying on the flame.
If they are rivals, well—they are rivals. If they inherit the future, then we have, in some important sense, lost.
If you think digital minds will inherit the future whether we like it or not, then the descendants-or-rivals question mainly matters for how it shapes our attitudes toward that future today. And so perhaps we should make more effort to like it.
Iain McGilchrist on the left and right hemispheres
The left hemisphere is good at helping us manipulate the world, but not good at helping us to understand it. To just use this bit and then that bit, and then that bit. But the right hemisphere has a kind of sustained, broad, vigilant attention instead of this narrow, focused, piecemeal attention. And it sustains a sense of being, a continuous being, in the world. So, these are very different kinds of attention. And they bring into being for us quite different kinds of a world.
It is not so much what each hemisphere does, it’s the way in which it does it. By which, I don’t mean by what mechanism. I mean, the manner in which it does it. The two halves of the brain have, as we do, different goals, different values, different preferences, different ways of being.
In a nutshell: the left hemisphere has a map of the world, and the right hemisphere sees the terrain that is mapped. So, the right hemisphere is seeing an immensely complex, very hard-to-summarise, nonlinear, deeply embedded, changing, flowing, ramifying world. And in the other—the left-hemisphere take on the world—things are clear, sharp, distinct, dead, decontextualised, abstract, disembodied. And then they have to be put together, as you would put things together like building a machine in the garage.
https://www.econtalk.org/iain-mcgilchrist-on-the-divided-brain-and-the-master-and-his-emissary/
Pragmatism, evolution & moral philosophy
Those who grow passionate on one or the other side of arcane and seemingly pointless disputes are struggling with the question of what self-image it would be best for human beings to have. —Richard Rorty
Pragmatism casts philosophy in a different light. It sees philosophy—including moral philosophy—as just another thing that humans do to get by.
That’s to say: it situates philosophy within the glorious indifference of the natural world, the competition and selection, the evolving universe, the dust clouds, the equations of physics.
In general and on average over the long run [1], we see things as good or bad insofar as doing so helps us get by. And “getting by” ultimately means: survive and reproduce. In the long run, your belief-value bundles have to be adaptive.
On this view, moral philosophy is not—as the non-naturalist moral realist would have it—a quest for values that are “correct” independently of the evolutionary environment [2]. Strange as it sounds, evolutionary processes are the source of normativity. Philosophers who hope to improve our values need to think about the messy, hard realities of adaptiveness, equilibria and economics at least as much as they think about moral principles we can all (currently) agree on. The former underwrite the latter [3].
Maladaptive belief-value bundles can emerge and persist for short periods, of course [4] [5]. I once heard the story of the Scottish Highlanders, whose inflexibility of custom kept them in relative poverty while the modernising Southerners became rich. Eventually, economic incentives for the Southerners led to the Highland Clearances, the forcible destruction of the Highlander way of life.
As the environment changes, our values change too, whether we like it or not.
[1] I’m not sure how best to caveat this.
[2] At this point, the non-naturalist says: “surely pain is always intrinsically bad”, or “pain is bad because of how it feels”. And the naturalist-pragmatist replies: sure, perhaps it’s adaptive to think this way. But it’s not quite the right way to think about things.
[3] It always seems like the non-naturalists don’t linger enough on the question of why they find certain truths about value to be self-evident.
[4] It’s almost a tautology to say that maladaptive behaviours don’t last. The key insight is that the reason they don’t last is competition from more adaptive behaviours. If you can prevent competition, you can get away with all sorts of not-maximally-adaptive behaviour. But the more not-maximally-adaptive behaviour you preserve, the more you risk your ability to prevent competition.
[5] I’m not sure how close we should expect cultures to get to “optimal” adaptiveness, even over the very long run. I guess they usually just approximate local maxima.
Tyler Cowen on axiology: I see the good as more holistic than additive-aggregative
These days, I see the good as more holistic than additive-aggregative. […] We can make some gross comparisons of better and worse at the macro level, with partial rankings at best, but for many individualized normative comparisons there simply isn’t a right answer. I view “ranking” as a luxury, occasionally available, rather than an axiomatic postulate which can be used to generate normative comparisons, and thus normative paradoxes, at will. I see that response as different than allowing or embracing intransitivity across multiple alternatives and in that regard my final position differs from Temkin’s. Furthermore, in a holistic approach, the “pure micro welfare numbers” used to generate the paradoxical comparisons aren’t necessarily there in the first place but rather they have to be derived from our intuitions about the whole.
https://marginalrevolution.com/marginalrevolution/2012/01/rethinking-the-good-by-larry-temkin.html
Reading Robin Hanson
Tyler Cowen characterises Robin Hanson and Thomas Malthus as “thinkers of constraints”. I read them both while wondering: what constraints apply to moral philosophy?
In “This is the Dream Time”, Robin Hanson claims that we are living through an unusual period of abundance. Usually, in the natural world, when the available resources grow, population grows too, so after a brief period of abundance, most members of the species revert to subsistence levels. Since the industrial revolution, the resources available to humanity have been growing much faster than population, and so, Hanson suggests, we should think of ourselves as living through one of these unusual periods of abundance. In such periods, competitive pressures are eased, and we can sustain all sorts of belief-behaviour packages that do not maximise our rate of reproduction.
How’s this for a prediction:
Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialed to the max.
In the future, we should expect population and wealth growth rates to converge again, and for most people to once again live at subsistence levels—unless we co-ordinate to constrain the reproduction rate.
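The underlying dynamic is simple enough to sketch as a toy model. Below is a minimal simulation, with made-up numbers rather than anything taken from Hanson: total resources grow quickly for a while and then slowly, while population growth responds to per-capita income. Income rises well above subsistence during the boom (the dream time), then drifts back toward it once population growth catches up with resource growth.

```python
# Toy Malthusian "dream time" simulation (illustrative numbers only, not from Hanson).
# Resources grow quickly for a while, then slowly; population growth responds to
# per-capita income. Result: a temporary period of abundance, then reversion
# toward subsistence once population growth converges with resource growth.

SUBSISTENCE = 1.0      # income at which population just replaces itself
BOOM_GROWTH = 0.04     # resource growth rate during the boom
LATER_GROWTH = 0.005   # resource growth rate after the boom
BOOM_LENGTH = 100      # periods of fast resource growth
SENSITIVITY = 0.05     # how strongly population growth responds to surplus income

def simulate(periods=500, resources=100.0, population=100.0):
    incomes = []
    for t in range(periods):
        income = resources / population
        incomes.append(income)
        resource_growth = BOOM_GROWTH if t < BOOM_LENGTH else LATER_GROWTH
        # Population grows when income exceeds subsistence, shrinks below it.
        population_growth = SENSITIVITY * (income / SUBSISTENCE - 1.0)
        resources *= 1.0 + resource_growth
        population *= 1.0 + population_growth
    return incomes

if __name__ == "__main__":
    incomes = simulate()
    for t in (0, 50, 100, 250, 499):
        print(f"t={t:3d}  income per capita = {incomes[t]:.2f}")
    # Income climbs well above subsistence during the boom, then drifts back
    # down as population growth catches up with resource growth.
```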
Hanson predicts that our descendants will eventually explicitly endorse maximising their reproduction rate as a value. The basic thought:
Biological evolution selects roughly for creatures that do whatever it takes to have more descendants in the long run. When such creatures have brains, those brains are selected for having supporting habits. And to the extent that such brains can be described as having beliefs and values that combine into actions via expected utility theory, then these beliefs and values should be ones which are roughly behaviorally-equivalent to the package of having accurate beliefs, and having values to produce many descendants (relative to rivals). […] with sufficient further evolution, our descendants are likely to directly and abstractly know that they simply value more descendants.
During a conversation with Agnes Callard, Hanson says:
I do tend to think natural selection, or selection, will just be a continuing force for a long time. And the main alternative is governance. I actually think one of the main choices that we will have, and the future will have, is the choice between allowing competition or replacing it with governance.
The question of where and how to push back against the logic of competition and selection will be central over the long term. Co-ordinating humans on Earth to make such decisions will be hard, but perhaps not impossible. Co-ordinating with other civilisations from other galaxies… seems very hard, but perhaps not impossible.
On this view, there’s a direct tradeoff between sustaining non-adaptive values we care about over the short term and sustaining our existence over the long term. Invest too much in non-adaptive values, and the long-term viability of your group will be threatened by another group that is less invested in these values.
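To see the force of the tradeoff, here is a second toy model (again with illustrative numbers, not anything from Hanson): two groups grow exponentially, and one pays a small growth penalty for maintaining values that do not contribute to expansion. However small the penalty, that group’s share of the future shrinks toward zero unless something suppresses the competition.

```python
# Toy two-group selection model (illustrative numbers only).
# Group A sacrifices a little growth to maintain non-adaptive values it cares
# about; group B does not. Absent anything suppressing competition, group B's
# share of the total population tends toward one.

BASE_GROWTH = 0.03   # per-period growth rate with no "values tax"
VALUES_TAX = 0.005   # growth sacrificed by group A for its non-adaptive values

def group_a_share(periods):
    a, b = 1.0, 1.0  # start with equal populations
    for _ in range(periods):
        a *= 1.0 + BASE_GROWTH - VALUES_TAX
        b *= 1.0 + BASE_GROWTH
    return a / (a + b)

if __name__ == "__main__":
    for periods in (0, 100, 500, 2000):
        print(f"after {periods:4d} periods, group A holds {group_a_share(periods):.1%} of the population")
```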
Attitudes towards this evolutionary perspective vary. Hanson does not consider this model of things “that dark”, though he concedes it is not “as bright as the unicorns and fairies that fill dream-time visions”. In fact, Hanson is concerned about a future where our descendants restrain competitive dynamics too much:
When I try to do future analysis one of the biggest contrary assumptions or scenarios that I focus on is: what if we end up creating a strong world government that strongly regulates investments, reproduction and other sorts of things, and thereby prevents the evolutionary environment in which the evolutionary analysis applies. And I’m very concerned about that scenario. That is my best judgement of our biggest long term risk […] the creation of a strong civilisation-wide government that is going to be wary of competition and wary of allowing independent choices and probably wary of allowing interstellar colonisation. That is, this vast expansion into the universe could well be prevented by that.
Eliezer Yudkowsky and Carl Shulman, by contrast, are more keen on the idea of replacing competition with governance, and I think Nick Bostrom is too.
Reading Hanson, and reading more about pragmatism, has left me with a greater awareness that, over the long run, value systems need to promote survival, reproduction and resource accumulation, or else reliably prevent competition in these domains. If they don’t, they will be replaced by those that do.
If you’re in the business of reflecting on how we should change our values, you probably want to bear this in mind.
One option is to relax the longevity demand. We could settle for pursuing things we care about over the short run, and accept that in the long run, things will come to seem weird and bad by our current lights (but hopefully fine to our (very different) descendants).
Robin Hanson on his methods
My usual first tool of analysis is competition and selection.
To predict what rich creatures do, you need to know what they want. To predict what poor creatures do, you just need to know what they need to do to survive.
Looking back through history it is clear that humanity has not been driving the train. There has been this train of progress or change and it has been a big fast train, especially lately, and it is making enormous changes all through the world but it is not what we would choose if we sat down and discussed it or voted. We just don’t have a process for doing that.
Whatever processes that changed things in the past will continue. So I can use those processes to predict what will happen. I am assuming we will continue to have a world with many actions being taken for local reasons as they previously were. But that’s a way to challenge my Age of Em hypothesis: you can say no, we will between now and then acquire an ability to foresee the consequences of such changes and to talk together and to vote together on do we want it, and we will have the ability to implement such choices and that will be a change in the future that will prevent the Age of Em.
https://notunreasonable.com/2022/03/21/robin-hanson-on-distant-futures-and-aliens%ef%bf%bc/