Elijah Millgram's pragmatic critique of internalism

That motivations fail to agglomerate is exhibited in the most striking logical feature of internalism (and of its cruder relative, instrumentalism), namely, that one’s bottom-line desires and projects are incorrigible.

[...]

You want what you want, and someone who insists that you are wrong to do so, when mistakes about such things as how to get what you want are not at issue, is just bluffing.

Subjective motivations can change, in all manner of ways, but they cannot be corrected, and this means that nothing could count as the rational investigation, on the part of such a creature, as to whether its bottom-line guidelines and priorities were correct. Since the creatures do not correct their own motivations, the design strategy is reasonable only if they do not need to; in other words, only if, for the most part, the designer can equip them with motivations (or ensure that they pick up motivations from their surroundings) that will not need correction. That in turn is feasible only if the designer can anticipate the practical problems his creatures will face, and only if the guidelines his creature would need to negotiate them are sufficiently compact to be stored and accessed. Given plausible cognitive constraints on processing, memory, and so on, that in turn requires that the environment the creature is anticipated to face be both stable and simple.

Coloring in the line drawing, we see that [Bernard] Williams’s alethic state of nature is something on the order of a tourist-brochure version of a village in the hills of Provence, where life goes on as it has since time immemorial. The villagers work their plots of land, growing the same grains and vegetables they always have; they herd their sheep and goats; they bake rustic bread and knit rustic clothes; they hunt rabbits and deer; they build houses out of the local stone; they marry and raise children; when they get old, they sit outside the village pub and drink pastis; they play boules in the park; eventually, they die, and are buried in the cemetery behind the church. The internalist design solution is satisfactory for this form of life. The designer knows that his peasants will have to work the fields, so when it comes time to own a field and work it, they come to have a desire to do so. They need to be made to reproduce, and thus are built so that, when they get old enough, they will want to have children, or anyway want to do things that as a predictable side effect produce children. Not all of a subjective motivational set need be hardwired, of course; a disposition to mimic others, and to learn and adopt one’s elders’ thick ethical concepts, will keep the games of boules going and the pastis flowing. Because life in the mythical village never changes, there is no need to delegate to the peasants themselves the task of investigating what their motivations ought to be, and no need to equip them to correct their motivations; thus, there is no need to complicate their cognitive or normative systems with the gadgetry that would take.

[...]

Analytic philosophy has done something that is quite peculiar: instead of making sense of humanity, we have been philosophizing for the inhabitants of a romantic fantasy of traditional peasant life.

[...]

Instrumentalist (or “Humean”) theories of practical reasoning are how philosophers talk through the strategy of hardwiring designated objectives into an organism, so that it can execute a life plan suitable to a stable environment.  Your environment is no longer stable enough for relying on desires to be a decent strategy.  Instrumentalists (“Humeans”) have a view of practical rationality suitable for a cruder, simpler species.

The Great Endarkenment, 'D’où venons-nous . . . Que sommes nous . . . Où allons-nous?'

John Richardson on Nietzsche's naturalism

Nietzsche thinks he cares more about truth than other philosophers do. This is partly because he is not in thrall to a moral bias, but also because he understands better the kind of truth there really can be—the kind humans can and do have. So he rewrites philosophers’ previous idea of truth while still giving it preeminent value.

[...]

In announcing these truths he contributes to what he thinks is a prolonged, ineluctable process by which our modern scientific will to truth finally faces the truth about values—the last and hardest topic for it to face. As these truths are exposed, our culture, and the rest of the world through it, is confronted with a great spiritual crisis and challenge: How can and will we go on to value once we have uncovered these truths about our valuing? How can we value, now for the first time, honestly (i.e., while facing the truth about what we’re doing)?

[...]

It is extremely difficult to do so because this truth tends to undermine our [values] [...] insofar as they involve a framing claim that these things (that are valued) are really, independently good. For the truth, Nietzsche holds, is that all values are dependent on valuings—are “perspectival.”

[...]

Recognizing Nietzsche’s idea of values as signs is the key to much of his thought about them. Seeing a value as a sign, we see why he insists that it’s not only humans that value. Animals are clearly responsive to signs in their perceptual discernments. So a predator may employ a certain smell as a sign of prey. And we can see ways that plants are responsive to signs as well. Nietzsche holds that willing (or aiming) is something that all organisms do. It depends not at all on consciousness.

[...]

Our human values, as worded, are distinctive in being held in common, as norms. They are accepted because this is “how one values” in the community to which one belongs. They thus serve a “herding” function, which strengthens the group but at the expense of members’ individuality.

John Richardson, Nietzsche's Values, Preface

Thoughts on Robin Hanson and David Deutsch on predicting the future

David Deutsch has an influential book that contains statements like the following:

The future of civilization is unknowable, because the knowledge that is going to affect it has yet to be created.

Unfortunately, the closest he comes to explaining this claim is:

The ability of scientific theories to predict the future depends on the reach of their explanations, but no explanation has enough reach to predict the content of its own successors – or their effects, or those of other ideas that have not yet been thought of. Just as no one in 1900 could have foreseen the consequences of innovations made during the twentieth century – including whole new fields such as nuclear physics, computer science and biotechnology – so our own future will be shaped by knowledge that we do not yet have.

In the same book Deutsch writes:

The philosopher Roger Bacon (1214–94) [...] foresaw the invention of microscopes, telescopes, self-powered vehicles and flying machines – and that mathematics would be a key to future scientific discoveries.

Later, he predicts:

Illness and old age are going to be cured soon – certainly within the next few lifetimes – and technology will also be able to prevent deaths through homicide or accidents by creating backups of the states of brains, which could be uploaded into new, blank brains in identical bodies if a person should die. Once that technology exists, people will consider it considerably more foolish not to make frequent backups of themselves than they do today in regard to their computers. If nothing else, evolution alone will ensure that, because those who do not back themselves up will gradually die out. So there can be only one outcome: effective immortality for the whole human population, with the present generation being one of the last that will have short lives.

So—it seems like we should take this unknowability claim with a big pinch of salt. A more plausible slogan would be:

The future of civilization is very hard to know, partly because the knowledge that is going to affect it has yet to be created.

Anyway, with this context I expected that Deutsch would be quite critical of Robin Hanson's approach to prediction. In fact, during their recent hour-long discussion, Deutsch did not identify any particular case where he thought Hanson's approach was wildly off. After discussing several cases (including the cost of solar power, demographic projections, and the grabby aliens model), he said:

DD: You haven't yet given an example of something where I would disagree with you that it's worth investigating.

And then, towards the end of the discussion:

DD: Everything you are doing is legitimate and indeed morally required, and it's a bit of a scandal that more people aren't doing it. But I think the same is true of all fundamental theories, branches of knowledge.

In closing, Deutsch summarised his position:

DD: In short, the place where [probability] is dangerous is where the thing you are predicting depends on the future growth of knowledge. You've given examples where it still works even then, e.g. you've mentioned the idea where stock prices will be a random walk [even when knowledge accumulates].

He continues:

DD: But there are cases where it's very misleading. So for example, [if one says] that all long-lived civilisations in the past have failed, and therefore ours will—that's illegitimate. Because it's making an assumption that the frequency is the probability. And here I have a substantive theory that says why it isn't. Namely that our civilisation is different from all the others.

But then:

But it doesn't matter—even if I didn't know that theory, I would still say it was illegitimate to extrapolate the future of our civilisation based on past civilisations. Because all of them depended on the future growth of knowledge. And if you look in detail about how they failed, they all failed in different ways, but one thing you can say about it is that in all cases, more knowledge would have saved them.

He says he would say it's illegitimate to extrapolate even if he didn't have a theory as to why, but he does not explain why that would be so; instead he just restates the theory he actually has.

Notably, his central "principle of optimism" ("all evils are caused by lack of knowledge") is a claim about the future of our civilisation based on... extrapolation from past civilisations.

My suspicion is that Deutsch doesn't have a crisp way to distinguish (legitimate) prediction from (illegitimate) prophecy, and that often he just labels as "prophecy" the predictions that he does not want to seriously engage. At least, this is what I think I've repeatedly seen going on during discussions of existential risk that threaten his principle of optimism.

At the level of theory, Deutsch concedes:

DD: I admit that I think the connection between risk and what you might call probability, the reason why risks can be approximated by probabilities, and also the reason why frequencies, in certain situations, can be approximated by probabilities, is an unsolved problem. And I think it's a very important problem and if I wasn't working on other things I would be working on that.

Deutsch is right to emphasise that if you're going to extrapolate trends or make reference-class comparisons, it's worth trying to articulate the grounds on which you've selected your reference classes, and the reasons the trend might not continue. In his language: predictions are always underwritten by explanations, implicit or otherwise, and if you make them explicit you may be able to improve on them. That said, it's surprising how often a prior in favour of simple extrapolations can perform better than more complicated models (see e.g. COVID-19, spring 2020).

Deutsch is also right to emphasise that probability estimates can easily be thrown wildly off due to unknown unknowns or other kinds of model error. This is not controversial, but it is often forgotten, and this mistake can be expensive.

{{RH: Why can't I look at trend lines for costs of other technologies and predict that solar power costs will continue to fall? Sure I can't be certain they will but I can do better than chance, right?}}

DD: I think it's reasonable if your best explanations imply that.

DD: {{When you're saying that solar power is similar to other technologies, you're making assumptions about a very complex and detailed thing.}} You mean not only that the graphs are the same, you mean that you expect solar power technology to be dependent on making factories which use materials of a certain kind which aren't going to be throttled by a hostile foreign power and so on.

DD: It is simply wrong to use probability in this way and it would be better to make the assumptions explicit.

I note that Deutsch does not offer an argument for the "it is simply wrong to use probability in this way" claim, in much the way that Mervyn King and John Kay fail to in their book. In particular, he does not discuss or argue against the Bayesian notion of assigning subjective probabilities to beliefs.

In a talk titled Knowledge Creation and Its Risks, Deutsch says:

Outcomes can't be analysed in terms of probability unless we have specific explanatory models that predict that something is or can be approximated as a random process, and predicts the probabilities. Otherwise one is fooling oneself, picking arbitrary numbers as probabilities and arbitrary numbers as utilities and then claiming authority for the result by misdirection, away from the baseless assumptions.

For example, when we were building the Hadron collider, should we not switch it on just in case it destroys the universe? Well either the theory that it will destroy the universe is true, or the theory that it's safe is true. The theories don't have probabilities. The real probability is zero or one, it's just unknown. And the issue must be decided by explanation, not game theory. And the explanation that it was more dangerous to use the collider than to scrap it, and forgo the resulting knowledge, was a bad explanation, because it could be applied to any fundamental research.

He's right that—objectively—either the collider will destroy the universe or it won't. But it seems fine for the Bayesian to say: in cases like this, with evidence like this, I expect my beliefs about the objective world to be correct X% of the time.
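The Bayesian reply can be made concrete with a toy calibration check (my own illustration, not an example from the dialogue): each individual event either happens or it doesn't ("the real probability is zero or one"), yet a forecaster who assigns 70% credence to many independent events of a similar kind is calibrated if about 70% of them come true.

```python
import random

random.seed(0)
n = 10_000

# Simulate n events, each of which in fact occurs with probability 0.7.
# A forecaster who assigns 70% credence to each is well calibrated if
# her hit rate lands near 0.70, even though every single event resolves
# to a definite yes or no.
outcomes = [random.random() < 0.7 for _ in range(n)]
hit_rate = sum(outcomes) / n
print(f"hit rate: {hit_rate:.2f}")  # close to 0.70
```

None of this settles the philosophical dispute, but it shows what the Bayesian means by a credence doing real predictive work.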

Towards the end of the dialogue, Deutsch accidentally states a prediction in rather Bayesian-sounding terms:

DD: Conditional on our species not surviving this century, I think it is overwhelmingly likely that the reason is one we have not thought of yet.

He notices the mistake, and corrects himself:

DD: As a side remark, you've caught me in an illegitimate use of probability. When I said that conditional on our species being destroyed it's overwhelmingly likely to be [due to a reason we have not thought of]... I shouldn't have said that. This just shows how deeply this mistaken notion of ideas having probability has permeated our culture. Even though I hate it, I can't help using it.

At the end of the dialogue, I was still left waiting for Deutsch's explanation of why assigning subjective probabilities is not a sensible thing to do[^1]. Perhaps that will not be forthcoming. I would gladly settle, instead, for his object-level discussion of the Vulnerable World Hypothesis [^2].


Edit (2021-12-29): Tyler Cowen comments on his interview with David Deutsch:

Deutsch convinced me there's often a lot of hot air behind Popper. He didn't argue very well on Popper's behalf. Deutsch is way smarter than I am but he seemed to me in some fundamental ways a dogmatist, and not really able to defend Popper very well. He's made up his mind and you get a particular kind of emphatic statement, but I thought that at the philosophical level his defences were weak. [...] There's this odd feature of Popperianism. It somehow attracts a lot of dogmatists. I don't know why.


[^1]: Edit (2021-12-27): since writing this, Joseph Walker pointed me to a 2016 paper by Deutsch, which contains an extended discussion of the Popperian conception of scientific explanation which Deutsch favours, and contrasts it with the Bayesian conception. I've not yet had a chance to give it a proper read. I will update this footnote/post when I do.

[^2]: For what it's worth: my current take is that Deutsch is right to worry about widespread pessimism about technology, but his reaction is too sweeping. I worry that the slogan he presents as the principle of optimism is used by technologists to wave away legitimate concerns about catastrophic and existential risk. In many areas (e.g. nuclear power)—and probably on average—I guess Deutsch is right that we need more techno-optimism. But in some areas, I suspect we need discouragement and regulation, with a view to pulling off some strategy of differential technological development.

Report on human augmentation from the UK Ministry of Defence (May 2021)

Our potential adversaries will not be governed by the same ethical and legal considerations that we are, and they are already developing human augmentation capabilities. Our key challenge will be establishing advantage in this field without compromising the values and freedoms that underpin our way of life.

[...]

We cannot wait for the ethics of human augmentation to be decided for us, we must be part of the conversation now. The ethical implications are significant but not insurmountable; early and regular engagement will be essential to remain at the forefront of this field. Ethical perspectives on human augmentation will change and this could happen quickly. There may be a moral obligation to augment people, particularly in cases where it promotes well-being or protects us from novel threats.

[...]

The need to use human augmentation may ultimately be dictated by national interest. Countries may need to develop and use human augmentation or risk surrendering influence, prosperity and security to those who will. National regulations dictating the pace and scope of scientific research reflect societal views, particularly in democracies that are more sensitive to public opinion. The future of human augmentation should not, however, be decided by ethicists or public opinion, although both will be important voices; rather, governments will need to develop a clear policy position that maximises the use of human augmentation in support of prosperity, safety and security, without undermining our values.

[...]

Governance in Western liberal societies and international institutions is already unable to keep pace with technological change and adoption of human augmentation will exacerbate this trend. National and international governance will be challenged by the myriad of implications of adopting human augmentation technologies.

[...]

Cultural and ethical considerations will inform the extent to which opportunities are seized, but human augmentation threats will be forced upon us irrespective of our own normative standpoint. We must understand and address such threats or otherwise risk creating a strategic vulnerability.

[...]

Human augmentation will play a key role in reducing the risk of cognitive overload as warfare becomes faster, more complex and more congested. Bioinformatics are likely to play a key role in identifying commanders and staff with the right cognitive and adaptive potential for command and control roles. Brain interfaces linked to machine learning algorithms have the potential to rapidly accelerate the speed and quality of decision-making.

[...]

The notion of moral enhancement may require using human augmentation in the future. Our moral psychologies evolved when our actions only affected our immediate environment, but recent advances in technology mean that actions can have almost immediate global consequences. Our moral tendencies to look after our kin and immediate future may no longer be fit for the modern, interconnected world.

[...]

Ethics will be a critical aspect when considering whether to adopt human augmentation, but national interest will also inform, and may even fundamentally reshape, the moral calculation. There is likely to be a fine balance between upholding the ethics that underpin our way of life and avoiding ceding an unassailable national advantage to our adversaries.

[...]

According to the transhumanistic thinking model, the human is an incomplete creature that can be shaped in the desired direction by making responsible use of science, technology and other rational means.

https://www.gov.uk/government/publications/human-augmentation-the-dawn-of-a-new-paradigm

Eliezer Yudkowsky: Darwin discovered God

In a way, Darwin discovered God—a God that failed to match the preconceptions of theology, and so passed unheralded. If Darwin had discovered that life was created by an intelligent agent—a bodiless mind that loves us, and will smite us with lightning if we dare say otherwise—people would have said "My gosh! That's God!"

But instead Darwin discovered a strange alien God—not comfortably "ineffable", but really genuinely different from us. Evolution is not a God, but if it were, it wouldn't be Jehovah. It would be H. P. Lovecraft's Azathoth, the blind idiot God burbling chaotically at the center of everything, surrounded by the thin monotonous piping of flutes.

https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god

a hydrothermal vent

Robin Hanson: This is the Dream Time

In the distant future, our descendants will probably have spread out across space, and redesigned their minds and bodies to explode Cambrian-style into a vast space of possible creatures. If they are free enough to choose where to go and what to become, our distant descendants will fragment into diverse local economies and cultures.

Given a similar freedom of fertility, most of our distant descendants will also live near a subsistence level. Per-capita wealth has only been rising lately because income has grown faster than population. But if income only doubled every century, in a million years that would be a factor of 10^3000, which seems impossible to achieve with only the 10^70 atoms of our galaxy available by then. Yes we have seen a remarkable demographic transition, wherein richer nations have fewer kids, but we already see contrarian subgroups like Hutterites, Hmongs, or Mormons that grow much faster. So unless strong central controls prevent it, over the long run such groups will easily grow faster than the economy, making per person income drop to near subsistence levels. Even so, they will be basically happy in such a world.

[...]

When our distant descendants think about our era, however, differences will loom larger. Yes they will see that we were more like them in knowing more things, and in having less contact with a wild nature. But our brief period of very rapid growth and discovery and our globally integrated economy and culture will be quite foreign to them. Yet even these differences will pale relative to one huge difference: our lives are far more dominated by consequential delusions: wildly false beliefs and nonadaptive values that matter. While our descendants may explore delusion-dominated virtual realities, they will well understand that such things cannot be real, and don’t much influence history. In contrast, we live in the brief but important “dreamtime” when delusions drove history. Our descendants will remember our era as the one where the human capacity to sincerely believe crazy non-adaptive things, and act on those beliefs, was dialled to the max.

[...]

These factors combine to make our era the most consistently and consequentially deluded and unadaptive of any era ever. When they remember us, our distant descendants will shake their heads at the demographic transition, where we each took far less than full advantage of the reproductive opportunities our wealth offered. They will note how we instead spent our wealth to buy products we saw in ads that talked mostly about the sort of folks who buy them. They will lament our obsession with super-stimuli that highjacked our evolved heuristics to give us taste without nutrition. They will note we spent vast sums on things that didn’t actually help on the margin, such as on medicine that didn’t make us healthier, or education that didn’t make us more productive.

[...]

Perhaps most important, our descendants may remember how history hung by a precarious thread on a few crucial coordination choices that our highly integrated rapidly changing world did or might have allowed us to achieve, and the strange delusions that influenced such choices. These choices might have been about global warming, rampaging robots, nuclear weapons, bioterror, etc. Our delusions may have led us to do something quite wonderful, or quite horrible, that permanently changed the options available to our descendants. This would be the most lasting legacy of this, our explosively growing dream time, when what was once adaptive behavior with mostly harmless delusions became strange and dreamy unadaptive behavior, before adaptation again reasserted a clear-headed relation between behavior and reality.

Our dreamtime will be a time of legend, a favorite setting for grand fiction, when low-delusion heroes and the strange rich clowns around them could most plausibly have changed the course of history. Perhaps most dramatic will be tragedies about dreamtime advocates who could foresee and were horrified by the coming slow stable adaptive eons, and tried passionately, but unsuccessfully, to prevent them.

https://www.overcomingbias.com/2009/09/this-is-the-dream-time.html
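As an aside, the growth arithmetic in the quote is easy to verify: doubling once per century for a million years is 10,000 doublings, a factor of roughly 10^3000, which dwarfs the ~10^70 atoms Hanson cites. A quick sketch (the numbers are taken from the quote):

```python
from math import log10

# One doubling per century, sustained for a million years.
doublings = 1_000_000 // 100          # 10,000 doublings
exponent = doublings * log10(2)       # log10 of the total growth factor

print(round(exponent))                # 3010, i.e. a factor of ~10^3000
print(exponent > 70)                  # True: far exceeds ~10^70 atoms
```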

Comments on "This is the Dream Time"

Eliezer Yudkowsky (12 years ago)

Perhaps most dramatic will be tragedies about dreamtime advocates who could foresee and were horrified by the coming slow stable adaptive eons, and tried passionately, but unsuccessfully, to prevent them.

Yeah. I guess I don't ultimately understand the psychology that can write that and not fight fanatically to the last breath to prevent the dark vision from coming to pass.

How awful would things have to be before you would fight to stop it? Before you would do more than sigh in resignation? If no one were ever happy or sad, if no one ever again told a story or bothered to imagine that things could have been different, would that be awful enough?

Are the people who try and change the future, people who you are not comfortable affiliating yourself with? Is it not the "role" that you play in your vision of your life? Or is it really that the will to protect is so rare in a human being?


Robin Hanson (12 years ago)

This vision really isn’t that dark for me. It may not be as bright as the unicorns and fairies that fill dream-time visions, but within the range of what seems actually feasible, I’d call it at least 90% of the way from immediate extinction to the very best possible.


Carl Shulman (12 years ago)

I see a worrying pattern here. Robin thinks the hyper-Malthusian scenario is amazingly great and that efforts to globally coordinate to prevent it (and the huge deadweight losses of burning the commons, as well as vast lost opportunities for existing beings) will very probably fail. Others, such as James Hughes and Eliezer and myself, see the Malthusian competitive scenario as disastrous and also think that humans or posthumans will invest extensive efforts (including the social control tech enabled by AI/brain emulations) to avoid the associated losses in favor of a cooperative/singleton scenario, with highish likelihood of success.

It almost seems as though we are modeling the motives of future beings with the option of working to produce global coordination simply by generalizing from our own valuations of the Malthusian scenario.

Tyler Cowen on moral principles and the margins at which we should give them up

Cardiff Garcia: Do you think that most or all public intellectuals should write a treatise like [Stubborn Attachments]?

Tyler Cowen: Only if they want to. But I think they all ought to want to. And if you don't want to, then how can you believe anything? This is foundationalist Tyler coming out again. So you hear all kinds of claims about utilitarianism, about inequality being important or about meritocracy being important, but people never address the question: at what margin am I willing to give up this principle?

That's a great defect in current political discussion. People have a lot of arguments for why their margin is good but they rarely have any arguments for why they stop at that margin. You wanna redistribute? Well maybe, but why don't we redistribute even more? And if the people who oppose the redistribution you favour are evil, why aren't you evil for not proposing even more?

https://thevalmy.com/26 // https://www.ft.com/content/4803dd39-9c14-3a4c-90e4-c9630999660f

State credences as fractions to avoid suggesting false precision

I often find it helpful to think in probabilities and adopt a Bayesian mindset.

As part of that, I sometimes state my subjective probabilities in conversation.

Sometimes people respond skeptically, asking "how can you possibly know that?".

There are several reasons for this. Sometimes, the reason is that they've understood my credence to be much more precise than it is.

Two statements:

(1) I think there's roughly a 70% chance X will happen.
(2) I think there's roughly a 2/3 chance X will happen.

When I say (1), I usually intend to communicate a fairly imprecise credence. That is to say, I mean something equivalent to (2), i.e. I think there's somewhere between a 60% and 73% chance X will happen. But, despite the word "roughly", I find people—especially people who aren't familiar with the Bayesian mindset—often hear me as expressing a much narrower range, e.g. "68-72% chance".

If I state the probability as a fraction rather than as a percentage, I am forced to specify the denominator, and this carries a clearer implication about the precision of my credence. It becomes more natural to think that, after a small positive update, I would round things off and state the same credence, rather than making a minor adjustment (e.g. "I now think there's roughly a 71% chance") [^1].
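The rounding behaviour can be sketched in a few lines using Python's standard library. The function name and the cap on the denominator (halves and thirds only) are my own arbitrary choices, purely for illustration:

```python
from fractions import Fraction

def coarse_credence(p, max_denominator=3):
    """Round a probability to the nearest halves-or-thirds fraction."""
    return Fraction(p).limit_denominator(max_denominator)

# A small positive update (70% -> 73%) leaves the stated credence unchanged:
print(coarse_credence(0.70))  # 2/3
print(coarse_credence(0.73))  # 2/3
```

The coarse fraction absorbs small updates, which is exactly the implication about precision that the percentage form fails to carry.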

I'm writing this down because it's a simple improvement I ought to have internalised by now. And also, because I'm starting to think this comms issue may have confused at least one eminent economist, and perhaps a lot of people in finance.

[^1]: I've seen this effect in personal conversations with n ≈ 10 people who weren't familiar with the Bayesian mindset. I think there's about a 2/3 chance that it's real. I would have thought that Tetlock or some "science communication" people would have done studies on this, but I can't quickly find them.

Ross Douthat speculates on religious responses to transformative technology

In a world that remains stable and wealthy and scientifically proficient, someone or someones may figure out how to achieve substantial life extension, artificial wombs, genetic selection for unusual strength or speed or appearance, the breeding of human-animal hybrids, and other scenarios that have belonged for years to the realm of “just around the corner” science fiction.

[...]

Human social institutions would change dramatically, to put it mildly, if babies could be grown in vats and people lived to be 125. Our political and economic debates would be scrambled if the rich suddenly began availing themselves of technologies and treatments that seemed to call a shared human nature into question.

[...]

There is a possible future where it becomes clear that the real bottleneck—the real source of temporary technological decadence—wasn’t technological proficiency so much as a dearth of societal ambition and centralized incentives and unsqueamishness about moral considerations. In this scenario, it would be Chinese scientists, subsidized and encouraged by an ambitious government and unencumbered by residual Christian qualms about certain forms of experimentation, who take the great leap from today’s CRISPR work to tomorrow’s more-than-human supermen—and suddenly the world would enter a scenario out of the original Star Trek timeline and its predicted Eugenics Wars, with a coterie of Khan Noonien Singhs in power in Beijing and the rest of the world trying to decide whether to adapt, surrender, or resist.

[...]

Indeed, you wouldn’t necessarily need the huge AI or biotech leap: even a refinement of existing virtual reality, one that draws more people more fully into the violence and pornography of unreal playgrounds, would seem to demand a religious response, even a dramatic one. Perhaps not quite a jihad… but at the very least an effort to tame and humanize the new technologies of simulation that escapes current culture war categories sufficiently to usher in a different—at long last—religious era than the one the baby boomers made.

[...]

Two things that certain modern prejudices assume can’t be joined together: scientific progress and religious revival. It isn’t just that some technological advance could, as suggested above, act in creative tension with religion by provoking a moral crusade or jihad in response. It’s also that scientific and religious experiments proceed from a similar desire for knowing, a similar belief that the universe is patterned and intelligible and that its secrets might somehow be unlocked. Which is why in periods of real intellectual ferment and development, there is often a general surge of experimentation that extends across multiple ways of seeking knowledge, from the scientific and experimental, to the theological and mystical, to the gray zones and disputed territories in between. Thus, the assumption, common to rationalists today, that religion represents a form of unreason that science has to vanquish on its way to new ages of discovery, is as mistaken as the religious reflex that regards the scientific mind-set as an inevitable threat to the pious simplicities of faith.

[...]

Much as the relationship between science and religion can be adversarial, there can also be a mysterious alchemy between the two forms of human exploration.

The Decadent Society, Chapter 10: Renaissance

Very Big If True, therefore probably wrong and/or crazy (part 1)

So you think you've had an extremely important insight which is widely underrated. In that case, a sensible early reaction is to pause, and to ask:

  1. Am I missing something?
  2. Am I crazy?

Often, (1) will turn out to be true. You may be able to work this out for yourself, or you may need to do it through research and conversation.

Less often, (2) will be your problem. That's beyond the scope of this blog post.

Sometimes, though, you will actually be on to something—probably not exactly right, but at least on the right track.

One promising sign is that you have a good story to tell about:

  1. What "edge"—or stroke of good fortune—might explain why I've had this insight when others haven't?

For example, if you are Leo Szilard, you're unusually likely to have "big if true" type insights about nuclear physics. You're a trained physicist, and one of your peers just annoyed you.

If you think your insight is worth pursuing, your early steps should be taken in the spirit of "testing". You should make every effort to expose your insight to scrutiny, and only gradually scale up the bets you place on it. If you are misguided, you want someone to show you why, as soon as possible. It is critical to hold onto the "why is this wrong?" attitude, rather than absorbing the idea into your identity [1].

Holden Karnofsky is an interesting example of someone doing this well, somewhat in public, right now.

Who else?

I haven't finished this blog post. But I'm committed to publishing something every day, so here goes. Let's call it "part 1".

[1] It's also critical to ask: is this insight an information hazard? Most insights have mixed consequences when they become widely known—some may have very strongly negative consequences, on net. The example of Szilard comes to mind again.

Tyler Cowen on Straussian truths and rational choice ethics

The truths of literature and what you might call “the Straussian truths of the great books”—what you get from Homer or Plato—are at least as important as rational choice ethics. But the people who do rational choice ethics don’t think that. If the two perspectives aren’t integrated, it leads to absurdities—problems like fanaticism, the Repugnant Conclusion, and so on. Right now, though, rational choice ethics is the best we have—the problems of, e.g., Kantian ethics seem much, much worse.

If rational choice ethics were integrated with the “Straussian truths of the great books,” would it lead to different decisions? Maybe not—maybe it would lead to the same decisions with a different attitude. We might come to see rational choice ethics as an imperfect construct, a flawed bubble of meaning that we created for ourselves, and shouldn’t expect to keep working in unusual circumstances.

Tyler Cowen, interviewed by Nick Beckstead (2014)

Tyler Cowen on our local cone of value, and the name of his book

How do you weigh the interests of humans versus animals or creatures that have very little to do with human beings? I think there’s no answer to that. The moral arguments of Stubborn Attachments — they’re all within a cone of sustainable growth for some set of beings. And comparing across beings, I don’t think anyone has good moral theories for that.

https://80000hours.org/podcast/episodes/tyler-cowen-stubborn-attachments/

If you think one of the next things growth will do is make us fundamentally different through something like genetic engineering or drugs… we're not then animals, but we could then be fundamentally different beings that are outside of the moral cone I'm used to operating in, and then I'm back to not knowing how to evaluate it. There's some common moral cone that has the humans, not the hamsters; future growth could push what are now humans outside that cone, and then I think I'm back to a kind of incommensurability. Meantime, I think in terms of expected value: if it doesn't happen, we'll just be better off; if it does, we're not sure, so full steam ahead is where I'm at. But I do think about this quite a bit.

https://thevalmy.com/27

Russ Roberts: Why did you call this book Stubborn Attachments?

Tyler Cowen: The idea that we as humans have stubborn attachments to other people, to ideas and to schemes and to our own world, and then trying to create a framework that can make sense of those, and tell people it's rational and they ought to double down on their best stubborn attachments, and that that's what makes life meaningful and creates this cornucopia in which moral and ethical philosophy can actually make sense and give us some answers.

https://pca.st/bHUr

Plato was good at wrestling

The name "Plato" means "broad-shouldered"[^1].

Diogenes Laertius claims that "Plato" was a nickname, given by his wrestling coach. He probably made that up[^2], but all sources seem to agree that Plato excelled at physical exercise and was well known for his wrestling ability.

Ruth Chang on parity, agency and rational identity

What is it to be a rational agent? The orthodox answer to this question can be summarized by a slogan: Rationality is a matter of recognizing and responding to reasons. But is the orthodoxy correct? In this paper, I explore an alternative way of thinking about what it is to be a rational agent according to which a central activity of rational agency is the creation of reasons. I explain how the idea of metaphysical grounding can help make sense of the idea that as rational agents we can, quite literally, create reasons. I end by suggesting a reason to take this alternative view of rational agency seriously. The orthodoxy faces a challenge: how do rational agents make choices within ‘well-formed choice situations’? By allowing that we have the power to create reasons, we have a satisfying and attractive solution to this question.

https://static1.squarespace.com/static/5c4d055b365f02de29e99730/t/5f57600575dd3715b178fb3f/1599561735709/whatrationalagentFINAL.pdf

When a hard choice is substantively hard, the right thing to say, I think, is that the alternatives are comparable, but related by some relation beyond ‘better than’, ‘worse than’, and ‘equally good’ – I dub this relation ‘on a par’.

[...]

‘On a par’ and ‘equally good’ are different relations because they have different formal properties. ‘Equally good’ is reflexive – a is equally as good as a – and transitive – if a is equally as good as b, which is equally as good as c, then a is equally as good as c. ‘On a par’ is irreflexive – a isn’t on a par with itself (rather, it is equally as good as itself) – and nontransitive – if a is on a par with b, which is on a par with c, it doesn’t follow that a is on a par with c. But they are both ways in which items can be compared.
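
The formal contrast Chang draws can be made concrete with a small sketch. This is a toy illustration, not anything from her paper: the particular items and relation instances (desserts, and which pairs count as 'on a par') are stipulated here purely to exhibit the properties she names.

```python
# Toy model of Chang's two comparison relations over three stipulated items.
items = ["mousse", "split", "sundae"]

# 'equally good' (toy stipulation): every item related to itself and the others.
equally_good = {(a, b) for a in items for b in items}

# 'on a par' (toy stipulation): mousse ~ split and split ~ sundae,
# but deliberately NOT mousse ~ sundae, and never a ~ a.
on_a_par = {("mousse", "split"), ("split", "mousse"),
            ("split", "sundae"), ("sundae", "split")}

def reflexive(rel):
    return all((a, a) in rel for a in items)

def irreflexive(rel):
    return all((a, a) not in rel for a in items)

def transitive(rel):
    # For every chain a~b, b~c, require a~c.
    return all((a, c) in rel
               for (a, b) in rel for (b2, c) in rel if b == b2)

print(reflexive(equally_good), transitive(equally_good))  # True True
print(irreflexive(on_a_par), transitive(on_a_par))        # True False
```

The last line is the point of the passage: the stipulated 'on a par' relation holds between mousse and split, and between split and sundae, yet fails between mousse and sundae, so it cannot be 'equally good' in disguise.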

[...]

If someone asked me to say what it is for things to be better or equally good, I’d try to describe what those relations involve by describing features of the evaluative differences they denote. If A is better than B, then the evaluative difference between them favors A. If A and B are equally good, then there is a zero evaluative difference between them. If A and B are on a par, there is a non-zero evaluative difference between them, but that difference doesn’t favor one over the other. One reason it’s hard to wrap our minds around the idea of parity – or non-zero, non-favoring evaluative differences – is that we’re so used to understanding value on the model of the reals. Once you assume that value behaves like mass or length, you’re stuck with the view that one value has got to be more, less or equal to another since mass and length can be measured by real numbers, and real numbers must stand in one of those three relations. One of the upshots of entertaining the possibility of parity is that we begin to question at a really fundamental level understanding value in the same way we understand most nonevaluative properties in the world.

[...]

Besides its inviting us to give up our deeply held, implicit conception of value as akin to mass or length with respect to measurability, the most important difference between ‘on a par’ and ‘equally good’ shows up in what we should do, practically speaking, when faced with such alternatives. If alternatives are equally good with respect to what matters in the choice between them, it’s always permissible to flip a coin between them. Not so when things are on a par.

[...]

[Parity] opens up a new way of understanding rational agency that is a substitute for the usual Enlightenment conception according to which we are essentially creatures who discover and respond to reasons. On that view, our agency is essentially passive – our reasons are ones given to us and not made by us. Our freedom as rational agents consists in the discovery of and appropriate response to reasons given to us and not created by us. Parity allows us to see that our agency may have a role in determining what reasons we have in the first place.

The idea instead is that when our reasons are on a par, we have the normative power to create new, ‘will-based’ reasons in favor of one alternative as opposed to another. Take a toy example. You can have the banana split or the chocolate mousse for dessert. They are on a par with respect to deliciousness, which is what matters in the choice between them. You have the normative power to put your agency behind – to ‘will’ – the chocolateyness of the chocolate mousse to be a reason for you to have it, thereby, perhaps, giving yourself most all-things-considered reasons to choose the chocolate mousse. Your act of agency is what makes it the case that you now have most reason to choose the mousse. This is an active view of rational agency because instead of sitting back and discovering what reasons we have, we can create reasons – when our non-will-based reasons – what I call our ‘given’ reasons – are on a par. It’s in this way, I suggest, that we forge our own identities as, say, chocoholics or people who love extreme sports or care about the environment or work to alleviate poverty or any number of things that help define each of us as distinctive rational agents with particular concerns and projects. This is, I think the most interesting way in which we are– as philosophers like to say – the ‘authors of our lives’.

One way to get an intuitive handle on this alternative view of agency is by considering the way you spend your Saturday afternoons. Say you spend yours interviewing philosophers. Could it be true that you have most reason to spend your Saturdays this way, rather than, say, going for walks, learning the piano, or working in a soup kitchen? Probably not. Could it be true that you have sufficient reason to interview philosophers as well as many other things, and you just arbitrarily plump for interviewing philosophers, where this plumping isn’t an exercise of rational agency but the agential equivalent of flipping a coin? Our choices of how to spend our free time don’t always feel that deeply random. What we do instead, on the view I believe parity makes possible, is put ourselves behind one activity rather than another – we identify with it, we commit to it – for the time being perhaps – we take it on as something we’ll do. When we put our agency behind something, it feels like we have most reason to do what we’re doing. And that’s because we have conferred normativity on that activity. Putting your agency behind spending your Saturdays interviewing philosophers is how you make yourself into the distinctive rational agent that you are – someone curious about things philosophical.

[...]

[Kant and Sartre] were each partly right. Some reasons – the will-based ones – have their source in the will but others – the ones that are on a par when will-based reasons can kick in – are ‘given’ to us just as the Enlightenment view says. And only some choices are a matter of agential fiat – for example, ones where our ‘given’ reasons are on a par. Crucially, the existentialists eschewed any possibility of normativity before the act of agential fiat. When we make ourselves into chocoholics or do-gooders or philosophical explorers, we do so in an already-existing normative landscape. Or so I think. So that’s another way my view differs from existentialism.

https://www.3ammagazine.com/3am/the-existentialist-of-hard-choices/

Rational agents have the normative power to create will-based reasons to be in one choice situation rather than another. By creating a reason to be in one among many eligible choice situations, you create the justification for being in that choice situation rather than the others. And as a rational agent who responds to reasons, you can thereby get yourself into that choice situation since you have most reason to be in it. And as we’ve suggested, when you create a reason for yourself to be in one choice situation among others, you put yourself behind that reason. By putting yourself behind that reason, you make yourself into the kind of person who now has most reason to be in that choice situation rather than any others. In this way, the activity of your will allows you to become one kind of agent rather than another, namely, an agent who faces these choice situations and not those. You are the driver of which choice situations – and consequently which reasons – make up the story of your life. By creating reasons for yourself, you form what I have elsewhere called your ‘rational identity’ (Chang 2009, 2013a).

[...]

Return to you, lounging on your living room couch. There is a range of eligible choice situations you could be in right now. This range is determined by agential values like autonomy, well-being, and meaning in life. In choice situation A, what matters is getting your homework done well, and your choice is between continuing to read or getting yourself a coffee. In choice situation B, what matters is the suffering of others, and your choice is between writing a check to Oxfam or hopping a plane to volunteer your aid. In choice situation C, what matters is having fun, and your choice is between going to a movie or calling up some friends for a party. All three choice situations are eligible to you right now.

[...]

Which choice situation should you be in? The Passivist orthodoxy has only this to say: you have sufficient reasons to be in any of the three, so just choose. By hypothesis, there is no reason to be in one over the others. But the reasons that render the choice situations eligible on the Passivist View are given reasons. As far as your given reasons are concerned, there is no further justification to be had for being in one choice situation over any others. The Activist View, by contrast, allows that you might create a will-based reason to be in situation A, which then justifies your being in that choice situation. By creating a will-based reason to be in situation A, you thereby make yourself into the sort of person for whom it is true that he has most reason to be in situation A. Your friend, similarly situated, might create a will-based reason for herself to be in situation C. She thereby makes it true of herself that she has most reason to be in situation C. Iterated across a lifetime, you may create a rational identity for yourself as a nerd, and your friend, a party animal. The Activist View gives rational agents the power to craft their own identities as individuals who justifiably face certain sets of choice situations rather than others.

The path we cut through life, among the myriad choice situations rationally open to us, is justified by the will-based reasons we create. Those who champion effective altruism have cut one such path. Those who spend their hours on Wall Street, making as much money as they can in order to live the high life, have cut another. It is only by allowing that there is more to rational agency than recognizing and responding to reasons that we can make sense of how we can be justified in crafting ourselves into the distinctive rational agents we are. Central to being a rational agent is creating reasons for ourselves to be in one choice situation rather than another. By doing so, we can determine for ourselves the reasons we have.

https://static1.squarespace.com/static/5c4d055b365f02de29e99730/t/5f57600575dd3715b178fb3f/1599561735709/whatrationalagentFINAL.pdf

Holden Karnofsky on visualising utopia

We should believe that a glorious future for humanity is possible, and that losing it is a special kind of tragedy.

When every attempt to describe that glorious future sounds unappealing, it's tempting to write off the whole exercise and turn one's attention to nearer-term and/or less ambitious goals.

We may not be able to describe it satisfyingly now, or to agree on it now, and we may have to get there one step at a time - but it is a real possibility, and we should care a lot about things that threaten to cut off that possibility.

[...]

Personally, I don't consider myself able to imagine a utopia very effectively. But I do feel convinced at a gut level that with time and incremental steps, we can build one. I think this particular "faith in the unseen" is ultimately rational and correct.

https://www.cold-takes.com/visualizing-utopia/

Richard Rorty on Proust and Hegel

For quite a while after I read Hegel, I thought that the two greatest achievements of the species to which I belonged were The Phenomenology of Spirit and Remembrance of Things Past (the book which took the place of the wild orchids once I left Flatbrookville for Chicago). Proust's ability to weave intellectual and social snobbery together with the hawthorns around Combray, his grandmother's selfless love, Odette's orchidaceous embraces of Swann and Jupien's of Charlus, and with everything else he encountered – to give each of these its due without feeling the need to bundle them together with the help of a religious faith or a philosophical theory - seemed to me as astonishing as Hegel's ability to throw himself successively into empiricism, Greek tragedy, Stoicism, Christianity and Newtonian physics, and to emerge from each, ready and eager for something completely different. It was the cheerful commitment to irreducible temporality which Hegel and Proust shared – the specifically anti-Platonic element in their work – that seemed so wonderful. They both seemed able to weave everything they encountered into a narrative without asking that that narrative have a moral, and without asking how that narrative would appear under the aspect of eternity.

Trotsky and the Wild Orchids (1992) https://cdclv.unlv.edu/pragmatism/rorty_orchids.html

Richard Rorty on his path from Plato to Hegel to Dewey

About 20 years or so after I decided that the young Hegel's willingness to stop trying for eternity, and just be the child of his time, was the appropriate response to disillusionment with Plato, I found myself being led back to Dewey. Dewey now seemed to me a philosopher who had learned all that Hegel had to teach about how to eschew certainty and eternity, while immunizing himself against pantheism by taking Darwin seriously.

[...]

I decided to write a book about what intellectual life might be like if one could manage to give up the Platonic attempt to hold reality and justice in a single vision. That book - Contingency, Irony and Solidarity – argues that there is no need to weave one's personal equivalent of Trotsky and one's personal equivalent of my wild orchids together. Rather, one should try to abjure the temptation to tie in one's moral responsibilities to other people with one's relation to whatever idiosyncratic things or persons one loves with all one's heart and soul and mind (or, if you like, the things or persons one is obsessed with). The two will, for some people, coincide – as they do in those lucky Christians for whom the love of God and of other human beings are inseparable, or revolutionaries who are moved by nothing save the thought of social justice. But they need not coincide, and one should not try too hard to make them do so. So for example, Jean-Paul Sartre seemed to me right when he denounced Kant's self-deceptive quest for certainty, but wrong when he denounced Proust as a useless bourgeois wimp, a man whose life and writings were equally irrelevant to the only thing that really mattered, the struggle to overthrow capitalism.

Proust's life and work were, in fact, irrelevant to that struggle. But that is a silly reason to despise Proust. It is as wrong-headed as Savonarola's contempt for the works of art he called 'vanities'. Singlemindedness of this Sartrean or Savonarolan sort is the quest for purity of heart – the attempt to will one thing – gone rancid. It is the attempt to see yourself as an incarnation of something larger than yourself (the Movement, Reason, the Good, the Holy) rather than accepting your finitude. The latter means, among other things, accepting that what matters most to you may well be something that may never matter much to most people. Your equivalent of my orchids may always seem merely weird, merely idiosyncratic, to practically everybody else. But that is no reason to be ashamed of, or downgrade, or try to slough off, your Wordsworthian moments, your lover, your family, your pet, your favourite lines of verse, or your quaint religious faith. There is nothing sacred about universality which makes the shared automatically better than the unshared. There is no automatic privilege of what you can get everybody to agree to (the universal) over what you cannot (the idiosyncratic).

This means that the fact that you have obligations to other people (not to bully them, to join them in overthrowing tyrants, to feed them when they are hungry) does not entail that what you share with other people is more important than anything else. What you share with them, when you are aware of such moral obligations, is not, I argued in Contingency, 'rationality' or 'human nature' or 'the fatherhood of God' or 'a knowledge of the Moral Law', or anything other than ability to sympathize with the pain of others. There is no particular reason to expect that your sensitivity to that pain, and your idiosyncratic loves, are going to fit within one big overall account of how everything hangs together. There is, in short, not much reason to hope for the sort of single vision that I went to college hoping to get.

[...]

If I had not read all those books, I might never have been able to stop looking for what Derrida calls 'a full presence beyond the reach of play', for a luminous, self-justifying, self-sufficient synoptic vision.

By now I am pretty sure that looking for such a presence and such a vision is a bad idea. The main trouble is that you might succeed and your success might let you imagine that you have something more to rely on than the tolerance and decency of your fellow human beings. The democratic community of Dewey's dreams is a community in which nobody imagines that. It is a community in which everybody thinks that it is human solidarity, rather than knowledge of something not merely human, that really matters. The actually existing approximations to such a fully democratic, fully secular community now seem to me the greatest achievements of our species. In comparison, even Hegel's and Proust's books seem optional, orchidaceous extras.

Trotsky and the Wild Orchids (1992) https://cdclv.unlv.edu/pragmatism/rorty_orchids.html

Bernard Williams reviews Nagel on reason

Who, in these discussions, are “we”? Is every claim to the effect that our understandings are relative to “us” equally threatening? When we reflect on what “we” believe, particularly in cultural and ethical matters, we often have in mind (as the relativists do) ourselves as members of modern industrial societies, or of some yet more restricted group, as contrasted with other human beings at other times or places. Such a “we” is, as linguists put it, “contrastive”—it picks out “us” as opposed to others. But “we” can be understood inclusively, to embrace anyone who does, or who might, share in the business of investigating the world. Some philosophers have suggested that in our thought there is always an implied “we” of this inclusive kind; according to them, when cosmologists make claims about what the universe is like “in itself,” they are not abstracting from possible experience altogether, but are implicitly talking about the way things would seem to investigators who were at least enough like us for us to recognize them, in principle, as investigators.

[...]

What is really disturbing [...] about the relativists and subjectivists is [...] their insistence on understanding “us” in such a very local and parochial way. [...] They suggest that there are no shared standards on the basis of which we as human beings can understand each other—that there is no inclusive, but only a contrastive, “we.”

[...]

Nagel’s basic idea is that whatever kind of claim is said to be only locally valid and to be the product of particular social forces—whether it is morality that is being criticized in this way, or history, or science—the relativist or subjectivist who offers this critique will have to make some other claim, which itself has to be understood as not merely local but objectively valid. Moreover, in all the cases that matter, this further claim will have to be of the same type as those that are being criticized: the relativists’ critique of morality must commit them to claims of objective morality, their attempts to show that science consists of local prejudice must appeal to objective science, and so on.

[...]

The basic idea that we see things as we do because of our historical situation has become [...] so deeply embedded in our outlook that it is rather Nagel’s universalistic assumption which may look strange, the idea that, self-evidently, moral judgment must take everyone everywhere as equally its object.

[...]

We should not forget that the style of philosophy to which Kant self-consciously opposed his critique he called dogmatic philosophy, meaning that it took the supposed deliverances of reason at their face value, without asking how they were grounded in the structure of human thought and experience. [...] In the spirit of Kant’s distinction, [Nagel's approach] is dogmatic, because it is not interested enough in explanations. It draws, as it seems to me, arbitrary limits to the reflective questions that philosophy is allowed to ask.

https://www.nybooks.com/articles/1998/11/19/the-end-of-explanation/