Thoughts on Robin Hanson and David Deutsch on predicting the future

David Deutsch has an influential book that contains statements like the following:

The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created.

Unfortunately, the closest he comes to explaining this claim is:

The ability of scientific theories to predict the future depends on the reach of their explanations, but no explanation has enough reach to predict the content of its own successors — or their effects, or those of other ideas that have not yet been thought of. Just as no one in 1900 could have foreseen the consequences of innovations made during the twentieth century — including whole new fields such as nuclear physics, computer science and biotechnology — so our own future will be shaped by knowledge that we do not yet have.

In the same book Deutsch writes:

The philosopher Roger Bacon (1214–94) […] foresaw the invention of microscopes, telescopes, self-powered vehicles and flying machines — and that mathematics would be a key to future scientific discoveries.

Later, he predicts:

Illness and old age are going to be cured soon — certainly within the next few lifetimes — and technology will also be able to prevent deaths through homicide or accidents by creating backups of the states of brains, which could be uploaded into new, blank brains in identical bodies if a person should die. Once that technology exists, people will consider it considerably more foolish not to make frequent backups of themselves than they do today in regard to their computers. If nothing else, evolution alone will ensure that, because those who do not back themselves up will gradually die out. So there can be only one outcome: effective immortality for the whole human population, with the present generation being one of the last that will have short lives.

So—it seems like we should take this unknowability claim with a big pinch of salt. A more plausible slogan would be:

The future of civilization is very hard to know, partly because the knowledge that is going to affect it has yet to be created.

Anyway, with this context I expected that Deutsch would be quite critical of Robin Hanson’s approach to prediction. In fact, during their recent hour-long discussion, Deutsch did not identify any particular cases where he thought Hanson’s approach was wildly off. After discussing several cases (including the cost of solar power; demographic projections; the grabby aliens model), he said:

DD: You haven’t yet given an example of something where I would disagree with you that it’s worth investigating.

And then, towards the end of the discussion:

DD: Everything you are doing is legitimate and indeed morally required, and it’s a bit of a scandal that more people aren’t doing it. But I think the same is true of all fundamental theories, branches of knowledge.

In closing, Deutsch summarised his position:

DD: In short, the place where [probability] is dangerous is where the thing you are predicting depends on the future growth of knowledge. You’ve given examples where it still works even then, e.g. you’ve mentioned the idea where stock prices will be a random walk [even when knowledge accumulates].

He continues:

DD: But there are cases where it’s very misleading. So for example, [if one says] that all long-lived civilisations in the past have failed, and therefore ours will—that’s illegitimate. Because it’s making an assumption that the frequency is the probability. And here I have a substantive theory that says why it isn’t. Namely that our civilisation is different from all the others.

But then:

But it doesn’t matter—even if I didn’t know that theory, I would still say it was illegitimate to extrapolate the future of our civilisation based on past civilisations. Because all of them depended on the future growth of knowledge. And if you look in detail about how they failed, they all failed in different ways, but one thing you can say about it is that in all cases, more knowledge would have saved them.

He says he would call the extrapolation illegitimate even if he didn’t have a theory as to why—but he does not explain why that would be so; he just restates the theory he actually has.

Notably, his central “principle of optimism” (“all evils are caused by lack of knowledge”) is a claim about the future of our civilisation based on… extrapolation from past civilisations.

My suspicion is that Deutsch doesn’t have a crisp way to distinguish (legitimate) prediction from (illegitimate) prophecy, and that often he just labels as “prophecy” the predictions he does not want to seriously engage with. At least, this is what I think I’ve repeatedly seen going on during discussions of existential risks that threaten his principle of optimism.

At the level of theory, Deutsch concedes:

DD: I admit that I think the connection between risk and what you might call probability, the reason why risks can be approximated by probabilities, and also the reason why frequencies, in certain situations, can be approximated by probabilities, is an unsolved problem. And I think it’s a very important problem and if I wasn’t working on other things I would be working on that.

Deutsch is right to emphasise that if you’re going to extrapolate trends or make reference-class comparisons, it’s worth trying to articulate the grounds on which you’ve selected your reference classes, and the reasons the trend might not continue. In his language: predictions are always underwritten by explanations, implicit or otherwise, and if you make them explicit you may be able to improve on them. That said, it’s surprising how often a prior in favour of simple extrapolations can perform better than more complicated models (see e.g. COVID-19, spring 2020).

Deutsch is also right to emphasise that probability estimates can easily be thrown wildly off due to unknown unknowns or other kinds of model error. This is not controversial, but it is often forgotten, and this mistake can be expensive.

DD: I think it’s reasonable if your best explanations imply that.

DD: You mean not only that the graphs are the same, you mean that you expect solar power technology to be dependent on making factories which use materials of a certain kind which aren’t going to be throttled by a hostile foreign power and so on.

DD: It is simply wrong to use probability in this way and it would be better to make the assumptions explicit.

I note that Deutsch does not offer an argument for the “it is simply wrong to use probability in this way” claim, in much the way that Mervyn King and John Kay fail to in their book. In particular, he does not discuss or argue against the Bayesian notion of assigning subjective probabilities to beliefs.

In a talk titled “Knowledge Creation and Its Risks”, Deutsch says:

Outcomes can’t be analysed in terms of probability unless we have specific explanatory models that predict that something is or can be approximated as a random process, and predicts the probabilities. Otherwise one is fooling oneself, picking arbitrary numbers as probabilities and arbitrary numbers as utilities and then claiming authority for the result by misdirection, away from the baseless assumptions.

For example, when we were building the Hadron collider, should we not switch it on just in case it destroys the universe? Well either the theory that it will destroy the universe is true, or the theory that it’s safe is true. The theories don’t have probabilities. The real probability is zero or one, it’s just unknown. And the issue must be decided by explanation, not game theory. And the explanation that it was more dangerous to use the collider than to scrap it, and forgo the resulting knowledge, was a bad explanation, because it could be applied to any fundamental research.

He’s right that—objectively—either the collider will destroy the universe or it won’t. But it seems fine for the Bayesian to say: in cases like this, with evidence like this, I expect my beliefs about the objective world to be correct X% of the time.
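To make that calibration framing concrete, here is a toy sketch (all numbers hypothetical): a forecaster who assigns 90% confidence to many separate yes/no claims is doing fine if roughly 90% of those claims turn out true, even though each individual claim is objectively either true or false.

```python
# Toy sketch of the calibration framing. Each entry pairs a stated
# subjective probability with how the claim actually turned out.
# Every claim is objectively true or false; the 0.9 describes the
# forecaster, not the world. All numbers here are hypothetical.
claims = [
    (0.9, True), (0.9, True), (0.9, True), (0.9, True), (0.9, True),
    (0.9, True), (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]
hit_rate = sum(outcome for _, outcome in claims) / len(claims)
print(f"stated confidence: 0.9, observed hit rate: {hit_rate}")
# → stated confidence: 0.9, observed hit rate: 0.9
```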

Towards the end of the dialogue, Deutsch accidentally states a prediction in rather Bayesian-sounding terms:

DD: Conditional on our species not surviving this century, I think it is overwhelmingly likely that the reason is one we have not thought of yet.

He notices the mistake, and corrects himself:

DD: As a side remark, you’ve caught me in an illegitimate use of probability. When I said that conditional on our species being destroyed it’s overwhelmingly likely to be [due to a reason we have not thought of]… I shouldn’t have said that. This just shows how deeply this mistaken notion of ideas having probability has permeated our culture. Even though I hate it, I can’t help using it.

At the end of the dialogue, I was still left waiting for Deutsch’s explanation of why assigning subjective probabilities is not a sensible thing to do.1 Perhaps that will not be forthcoming. I would gladly settle, instead, for his object-level discussion of the Vulnerable World Hypothesis.2


Edit (2021-12-29): Tyler Cowen comments on his interview with David Deutsch:

Deutsch convinced me there’s often a lot of hot air behind Popper. He didn’t argue very well on Popper’s behalf. Deutsch is way smarter than I am but he seemed to me in some fundamental ways a dogmatist, and not really able to defend Popper very well. He’s made up his mind and you get a particular kind of emphatic statement, but I thought that at the philosophical level his defences were weak. […] There’s this odd feature of Popperianism. It somehow attracts a lot of dogmatists. I don’t know why.



  1. Edit (2021-12-27): since writing this, Joseph Walker pointed me to a 2016 paper by Deutsch, which contains an extended discussion of the Popperian conception of scientific explanation that Deutsch favours, and contrasts it with the Bayesian conception. I’ve not yet had a chance to give it a proper read. I will update this footnote/post when I do.↩︎

  2. For what it’s worth: my current take is that Deutsch is right to worry about widespread pessimism about technology, but his reaction is too sweeping. I worry that the slogan he presents as the principle of optimism is used by technologists to wave away legitimate concerns about catastrophic and existential risk. In many areas (e.g. nuclear power)—and probably on average—I guess Deutsch is right that we need more techno-optimism. But in some areas, I suspect we need discouragement and regulation, with a view to pulling off some strategy of differential technological development.↩︎

writing robin hanson david deutsch futurism bayesianism radical uncertainty

‘St Paul and All That’, Frank O’Hara

poetry frank o hara

Report on human augmentation from the UK Ministry of Defence (May 2021)

Our potential adversaries will not be governed by the same ethical and legal considerations that we are, and they are already developing human augmentation capabilities. Our key challenge will be establishing advantage in this field without compromising the values and freedoms that underpin our way of life.

[…]

We cannot wait for the ethics of human augmentation to be decided for us, we must be part of the conversation now. The ethical implications are significant but not insurmountable; early and regular engagement will be essential to remain at the forefront of this field. Ethical perspectives on human augmentation will change and this could happen quickly. There may be a moral obligation to augment people, particularly in cases where it promotes well-being or protects us from novel threats.

[…]

The need to use human augmentation may ultimately be dictated by national interest. Countries may need to develop and use human augmentation or risk surrendering influence, prosperity and security to those who will. National regulations dictating the pace and scope of scientific research reflect societal views, particularly in democracies that are more sensitive to public opinion. The future of human augmentation should not, however, be decided by ethicists or public opinion, although both will be important voices; rather, governments will need to develop a clear policy position that maximises the use of human augmentation in support of prosperity, safety and security, without undermining our values.

[…]

Governance in Western liberal societies and international institutions is already unable to keep pace with technological change and adoption of human augmentation will exacerbate this trend. National and international governance will be challenged by the myriad of implications of adopting human augmentation technologies.

[…]

Cultural and ethical considerations will inform the extent to which opportunities are seized, but human augmentation threats will be forced upon us irrespective of our own normative standpoint. We must understand and address such threats or otherwise risk creating a strategic vulnerability.

[…]

Human augmentation will play a key role in reducing the risk of cognitive overload as warfare becomes faster, more complex and more congested. Bioinformatics are likely to play a key role in identifying commanders and staff with the right cognitive and adaptive potential for command and control roles. Brain interfaces linked to machine learning algorithms have the potential to rapidly accelerate the speed and quality of decision-making.

[…]

The notion of moral enhancement may require using human augmentation in the future. Our moral psychologies evolved when our actions only affected our immediate environment, but recent advances in technology mean that actions can have almost immediate global consequences. Our moral tendencies to look after our kin and immediate future may no longer be fit for the modern, interconnected world.

[…]

Ethics will be a critical aspect when considering whether to adopt human augmentation, but national interest will also inform, and may even fundamentally reshape, the moral calculation. There is likely to be a fine balance between upholding the ethics that underpin our way of life and avoiding ceding an unassailable national advantage to our adversaries.

[…]

According to the transhumanistic thinking model, the human is an incomplete creature that can be shaped in the desired direction by making responsible use of science, technology and other rational means.

https://www.gov.uk/government/publications/human-augmentation-the-dawn-of-a-new-paradigm

quote transhumanism futurism evolution game theory

Eliezer Yudkowsky: Darwin discovered God

In a way, Darwin discovered God—a God that failed to match the preconceptions of theology, and so passed unheralded. If Darwin had discovered that life was created by an intelligent agent—a bodiless mind that loves us, and will smite us with lightning if we dare say otherwise—people would have said “My gosh! That’s God!”

But instead Darwin discovered a strange alien God—not “comfortably ineffable”, but really genuinely different from us. Evolution is not a God, but if it were, it wouldn’t be Jehovah. It would be H. P. Lovecraft’s Azathoth, the blind idiot God burbling chaotically at the center of everything, surrounded by the thin monotonous piping of flutes.

https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god

a hydrothermal vent

quote eliezer yudkowsky evolution religion

Robin Hanson: This is the Dream Time

In the distant future, our descendants will probably have spread out across space, and redesigned their minds and bodies to explode Cambrian-style into a vast space of possible creatures. If they are free enough to choose where to go and what to become, our distant descendants will fragment into diverse local economies and cultures.

Given a similar freedom of fertility, most of our distant descendants will also live near a subsistence level. Per-capita wealth has only been rising lately because income has grown faster than population. But if income only doubled every century, in a million years that would be a factor of 10^3000, which seems impossible to achieve with only the 10^70 atoms of our galaxy available by then. Yes we have seen a remarkable demographic transition, wherein richer nations have fewer kids, but we already see contrarian subgroups like Hutterites, Hmongs, or Mormons that grow much faster. So unless strong central controls prevent it, over the long run such groups will easily grow faster than the economy, making per person income drop to near subsistence levels. Even so, they will be basically happy in such a world.

[…]

When our distant descendants think about our era, however, differences will loom larger. Yes they will see that we were more like them in knowing more things, and in having less contact with a wild nature. But our brief period of very rapid growth and discovery and our globally integrated economy and culture will be quite foreign to them. Yet even these differences will pale relative to one huge difference: our lives are far more dominated by consequential delusions: wildly false beliefs and nonadaptive values that matter. While our descendants may explore delusion-dominated virtual realities, they will well understand that such things cannot be real, and don’t much influence history. In contrast, we live in the brief but important “dreamtime” when delusions drove history. Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialled to the max.

[…]

These factors combine to make our era the most consistently and consequentially deluded and unadaptive of any era ever. When they remember us, our distant descendants will shake their heads at the demographic transition, where we each took far less than full advantage of the reproductive opportunities our wealth offered. They will note how we instead spent our wealth to buy products we saw in ads that talked mostly about the sort of folks who buy them. They will lament our obsession with super-stimuli that hijacked our evolved heuristics to give us taste without nutrition. They will note we spent vast sums on things that didn’t actually help on the margin, such as on medicine that didn’t make us healthier, or education that didn’t make us more productive.

[…]

Perhaps most important, our descendants may remember how history hung by a precarious thread on a few crucial coordination choices that our highly integrated rapidly changing world did or might have allowed us to achieve, and the strange delusions that influenced such choices. These choices might have been about global warming, rampaging robots, nuclear weapons, bioterror, etc. Our delusions may have led us to do something quite wonderful, or quite horrible, that permanently changed the options available to our descendants. This would be the most lasting legacy of this, our explosively growing dream time, when what was once adaptive behavior with mostly harmless delusions became strange and dreamy unadaptive behavior, before adaptation again reasserted a clear-headed relation between behavior and reality.

Our dreamtime will be a time of legend, a favorite setting for grand fiction, when low-delusion heroes and the strange rich clowns around them could most plausibly have changed the course of history. Perhaps most dramatic will be tragedies about dreamtime advocates who could foresee and were horrified by the coming slow stable adaptive eons, and tried passionately, but unsuccessfully, to prevent them.

https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html
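As an aside, the arithmetic in Hanson’s second paragraph checks out; a quick sketch (the 100-year doubling period and the ~10^70 atoms figure are taken from the quote above):

```python
from math import log10

# Income doubling once per century, sustained for a million years.
years = 1_000_000
doublings = years // 100
exponent = doublings * log10(2)  # log10 of the total growth factor
print(f"growth factor ≈ 10^{exponent:.0f}")
# → growth factor ≈ 10^3010, versus only ~10^70 atoms in the galaxy
```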

quote robin hanson futurism evolution malthusianism

Comments on “This is the Dream Time”

Eliezer Yudkowsky, 12 years ago

Perhaps most dramatic will be tragedies about dreamtime advocates who could foresee and were horrified by the coming slow stable adaptive eons, and tried passionately, but unsuccessfully, to prevent them.

Yeah. I guess I don’t ultimately understand the psychology that can write that and not fight fanatically to the last breath to prevent the dark vision from coming to pass.

How awful would things have to be before you would fight to stop it? Before you would do more than sigh in resignation? If no one were ever happy or sad, if no one ever again told a story or bothered to imagine that things could have been different, would that be awful enough?

Are the people who try and change the future, people who you are not comfortable affiliating yourself with? Is it not the “role” that you play in your vision of your life? Or is it really that the will to protect is so rare in a human being?


Robin Hanson, 12 years ago

This vision really isn’t that dark for me. It may not be as bright as the unicorns and fairies that fill dream-time visions, but within the range of what seems actually feasible, I’d call it at least 90% of the way from immediate extinction to the very best possible.


Carl Shulman, 12 years ago

I see a worrying pattern here. Robin thinks the hyper-Malthusian scenario is amazingly great and that efforts to globally coordinate to prevent it (and the huge deadweight losses of burning the commons, as well as vast lost opportunities for existing beings) will very probably fail. Others, such as James Hughes and Eliezer and myself, see the Malthusian competitive scenario as disastrous and also think that humans or posthumans will invest extensive efforts (including the social control tech enabled by AI/brain emulations) to avoid the associated losses in favor of a cooperative/singleton scenario, with highish likelihood of success.

It almost seems as though we are modeling the motives of future beings with the option of working to produce global coordination simply by generalizing from our own valuations of the Malthusian scenario.

quote robin hanson futurism evolution malthusianism