Richard Posner on moral entrepreneurs

Moral entrepreneurs typically try to change the boundaries of altruism, whether by broadening them, as in the case of Jesus Christ and Jeremy Bentham, or by narrowing them, as in the case of Hitler (putting to one side his “zoophilia”). They don’t do this with arguments, or at least good ones. Rather, they mix appeals to self-interest with emotional appeals that bypass our rational calculating faculty and stir inarticulable feelings of oneness with or separateness from the people (or it could be land, or animals) that are to constitute, or be ejected from, the community that the moral entrepreneur is trying to create. They teach us to love or hate whom they love or hate.

The Problematics of Moral and Legal Theory, p.42

Elizabeth Costello - The Philosophers and the Animals

'I want to find a way of speaking to fellow human beings that will be cool rather than heated, philosophical rather than polemical, that will bring enlightenment rather than seeking to divide us into the righteous and the sinners, the saved and the damned, the sheep and the goats.'

[...]

'Both reason and seven decades of life experience tell me that reason is neither the being of the universe nor the being of God. On the contrary, reason looks to me suspiciously like the being of human thought; worse than that, like the being of one tendency in human thought. Reason is the being of a certain spectrum of human thinking. And if this is so, if that is what I believe, then why should I bow to reason this afternoon and content myself with embroidering on the discourse of the old philosophers?'

[...]

'Do I in fact have a choice? If I do not subject my discourse to reason, whatever that is, what is left for me but to gibber and emote and knock over my water glass and generally make a monkey of myself?'

[...]

'Might it not be that the phenomenon we are examining here is, rather than the flowering of a faculty that allows access to the secrets of the universe, the specialism of a rather narrow self-regenerating intellectual tradition whose forte is reasoning, in the same way that the forte of chess players is playing chess, which for its own motives it tries to install at the centre of the universe?'

'Yet, although I see that the best way to win acceptance from this learned gathering would be for me to join myself, like a tributary stream running into a great river, to the great Western discourse of man versus beast, of reason versus unreason, something in me resists, foreseeing in that step the concession of the entire battle.'

[...]

'In the olden days the voice of man, raised in reason, was confronted by the roar of the lion, the bellow of the bull.'

[...]

'I often wonder what thinking is, what understanding is. Do we really understand the universe better than animals do? Understanding a thing often looks to me like playing with one of those Rubik cubes. Once you have made all the little bricks snap into place, hey presto, you understand. It makes sense if you live inside a Rubik cube, but if you don't...'

[...]

'It's been such a short visit, I haven't had time to make sense of why you have become so intense about the animal business.'

She watches the wipers wagging back and forth.

'A better explanation,' she says, 'is that I have not told you why, or dare not tell you. When I think of the words, they seem so outrageous that they are best spoken into a pillow or into a hole in the ground, like King Midas.'

'I don't follow. What is it you can't say?'

'It's that I no longer know where I am. I seem to move around perfectly easily among people, to have perfectly normal relations with them. Is it possible, I ask myself, that all of them are participants in a crime of stupefying proportions? Am I fantasizing it all? I must be mad! Yet every day I see the evidences. The very people I suspect produce the evidence, exhibit it, offer it to me. Corpses. Fragments of corpses that they have bought for money.

'It is as if I were to visit friends, and to make some polite remark about the lamp in their living room, and they were to say, "Yes, it's nice, isn't it? Polish-Jewish skin it's made of, we find that's best, the skins of young Polish-Jewish virgins." And then I go to the bathroom and the soap wrapper says, "Treblinka--100% human stearate." Am I dreaming, I say to myself? What kind of house is this?

'Yet I'm not dreaming. I look into your eyes, into Norma's, into the children's, and I see only kindness, human kindness. Calm down, I tell myself, you are making a mountain out of a molehill. This is life. Everyone else comes to terms with it, why can't you? Why can't you?'

She turns on him a tearful face. What does she want, he thinks? Does she want me to answer her question for her?

They are not yet on the expressway. He pulls the car over, switches off the engine, takes his mother in his arms. He inhales the smell of cold cream, of old flesh. 'There, there,' he whispers in her ear. 'There, there. It will soon be over.'

Elizabeth Costello, Lesson 4 - The Lives of Animals

John Dewey on moral principles

The diffused or wide applicability of habits is reflected in the general character of principles: a principle is intellectually what a habit is for direct action. As habits set in grooves dominate activity and swerve it from conditions instead of increasing its adaptability, so principles treated as fixed rules instead of as helpful methods take men away from experience. The more complicated the situation, and the less we really know about it, the more insistent is the orthodox type of moral theory upon the prior existence of some fixed and universal principle or law which is to be directly applied and followed. Ready-made rules available at a moment's notice for settling any kind of moral difficulty and resolving every species of moral doubt have been the chief object of the ambition of moralists. In the much less complicated and less changing matters of bodily health such pretensions are known as quackery. But in morals a hankering for certainty, born of timidity and nourished by love of authoritative prestige, has led to the idea that absence of immutably fixed and universally applicable ready-made principles is equivalent to moral chaos.

[...]

Morals must be a growing science if it is to be a science at all, not merely because all truth has not yet been appropriated by the mind of man, but because life is a moving affair in which old moral truth ceases to apply. Principles are methods of inquiry and forecast which require verification by the event; and the time honored effort to assimilate morals to mathematics is only a way of bolstering up an old dogmatic authority, or putting a new one upon the throne of the old. But the experimental character of moral judgments does not mean complete uncertainty and fluidity. Principles exist as hypotheses with which to experiment. Human history is long. There is a long record of past experimentation in conduct, and there are cumulative verifications which give many principles a well earned prestige. Lightly to disregard them is the height of foolishness. But social situations alter; and it is also foolish not to observe how old principles actually work under new conditions, and not to modify them so that they will be more effectual instruments in judging new cases.

Human Nature and Conduct, Part 3: The Place of Intelligence - VII. The Nature of Principles https://brocku.ca/MeadProject/Dewey/Dewey_1922/Dewey1922_21.html

Careless strawman due to inferential distance

A common area of overconfidence: judgements about what other people are thinking or valuing.

I often notice people concluding that someone they have some reason to admire is either making a naive mistake or does not in fact share their values.

Often, the truth is that person 1 is weighing considerations that person 2 does not recognise, or pursuing a strategy person 2 has not thought of. And person 2 is not sufficiently adjusting for this possibility.

The mistake is especially common when two people are separated by greater inferential distance than they realise.

It's natural to assume that, if someone shares your values and is acting reasonably, it would be fairly easy to understand them. But there are strong reasons to doubt this. Everyone knows lots of things that you do not; everyone's brain is trained differently. Another person's reasoning may not be legible to you after a cursory or even a fairly close inspection—or even after you've asked them to state their reasons, and they have tried. Yet it may still be good.

Careless strawmanning can lead us to badly underrate people whom—by our own lights—we ought to admire, or at least take seriously. I know many people who severely underestimate figures like Tyler Cowen and Peter Thiel, partly due to this effect.

Things get especially bad for people who tend to overindulge in general cynicism about motives.

The principle of charity is a strong antidote. So is the general attitude of "Most of my judgements are mostly wrong—I must keep attending, I must keep listening".

John Vervaeke on rationality, relevance realisation and insight

You have to shift away from a framing of decision theory or quantitative analysis, because the fundamental problem is overcoming an ill-defined problem to generate a well-posed problem—and then you can have a numerical analysis.

Relevance realisation: relevance is a property that is central to all cognition. We face a combinatorially explosive amount of information both outside of ourselves and within long term memory—there are so many possible ways we could connect and access. If you tried to calculate all that you would never finish. But we're doing that right now. Somehow we ignore most of the irrelevant information, we shrink the problem space down so we are very often making the right connections, doing what's appropriate in the situation. And you also have a capacity for correcting that.

I take the phenomenon of insight to be a case where you've done the shrinking of the problem, you've done the framing, but you've done it incorrectly, you've zeroed in on the wrong information. And then you have an "aha", you realise you were treating X as irrelevant when it's not, or Y as relevant when it's not. The process of relevance realisation is dynamic and self-correcting.

The relevance of a proposition is constantly varying even though its logical structure is constant.

We have to drop to a bioeconomic level, pay attention to the cost of computation—not just metabolic but also economic, the opportunity cost imposed by the environment. The brain is always trying to evolve how it constrains the problem space. It does this by a process—we argue—analogous to evolution. Variation then selective pressure.

Richard Rorty reviews Truth and Truthfulness by Bernard Williams

Nietzsche said that:

we simply lack any organ for knowledge, for “truth”—we “know” (or believe or imagine) just as much as may be useful in the interests of the human herd.

If you cite this sort of passage from Nietzsche (or similar ones in William James or John Dewey) in order to argue that what we call ‘the search for objective truth’ is not a matter of getting your beliefs to correspond better and better to the way things really are, but of attaining intersubjective agreement, or of attempting to cope better with the world round about us, you are likely to find yourself described as a danger to the health of society: philosophers sympathetic to this line of thought now find themselves called Postmodernists, and are viewed with the same hostility as Spinozists were three hundred years ago. If you agree with Dewey that the search for truth is just a particular species of the search for happiness, you will be accused of asserting something so counter-intuitive that only a lack of intellectual responsibility can account for your behaviour.

Most non-philosophers would regard the choice between correspondence-to-reality and pragmatist ways of describing the search for truth as a scholastic quibble of the kind that only a professor of philosophy could be foolish enough to get excited about.

[...]

Those who grow passionate on one or the other side of arcane and seemingly pointless disputes are struggling with the question of what self-image it would be best for human beings to have. So it is with the dispute about truth that has been going on among the philosophy professors ever since the days of Nietzsche and James. That dispute boils down to the question of whether, in our pursuit of truth, we must answer only to our fellow human beings, or also to something non-human, such as the Way Things Really Are In Themselves.

Nietzsche thought the latter notion was a surrogate for God, and that we would be stronger, freer, better human beings if we could bring ourselves to dispense with all such surrogates: to stop wanting to have ‘reality’ or ‘truth’ on our side.

[...]

[Williams] has derided what he calls ‘the rationalistic theory of rationality’: the claim that rationality consists in obedience to eternal, ahistorical standards. His most widely read book, Ethics and the Limits of Philosophy, mocked Kantian approaches to moral philosophy.

Such remarks will convince many people that Williams has long since gone over to the dark side, and is hardly the right person to mount a defence of truth against the bad guys. Having conceded so much to the opposition, he has to work hard to secure a middle-of-the-road position – to avoid drifting either to the Platonist right or to the pragmatist left.

[...]

[Williams] counts me among the ‘moderate deniers’ – by which he means, I think, that I share many more views with him than with Foucault. But he insists that we moderates ‘as much as the more radical deniers need to take seriously the idea that to the extent that we lose a sense of the value of truth, we shall certainly lose something, and may well lose everything.’

Williams argues that it is essential to the defence of liberalism to believe that the virtue he capitalises as ‘Sincerity’ has intrinsic rather than merely instrumental value. He defends this claim in the course of telling a ‘genealogical story’, one that attempts to ‘give a decent pedigree to truth and truthfulness’. We need such a story, he believes, since the notion of truth might be thought tainted by its associations with Platonism.

[...]

[Williams] makes it ‘a sufficient condition for something (for instance, trustworthiness) to have an intrinsic value that, first, it is necessary (or nearly necessary) for basic human purposes and needs that human beings should treat it as an intrinsic good, and, second, that they can coherently treat it as an intrinsic good.’

[...]

He wants to retain the conviction, common among analytic philosophers who distrust pragmatism, that the quest for truth is not the same thing as the quest for justification.

[...]

As he rightly suggests, the only answer the pragmatist can give to this question is that the procedures we use for justifying beliefs to one another are among the things that we try to justify to one another. We used to think that Scripture was a good way of settling astronomical questions, and pontifical pronouncements a good way of resolving moral dilemmas, but we argued ourselves out of both convictions. But suppose we now ask: were the arguments we offered for changing our approach to these matters good arguments, or were they just a form of brainwashing? At this point, pragmatists think, our spade is turned. For we have, as Williams himself says in the passage I quoted above, no way to compare our representations as a whole with the way things are in themselves.

Williams, however, seems to think that we philosophy professors have special knowledge and techniques that enable us, despite this inability, to show that the procedures we now think to be truth-acquiring actually are so. ‘The real problems about methods of inquiry, and which of them are truth acquiring . . . belong to the theory of knowledge and metaphysics.’ These disciplines, he assures us, provide answers to ‘the question, for a given class of propositions, of how the ways of ҄finding out whether they are true are related to what it is for them to be true’.

Williams would seem to be claiming that these metaphysicians and epistemologists stand on neutral ground when deciding between various ways of reaching agreement. They can stand outside history, look with an impartial eye at the Reformation, the Scientific Revolution and the Enlightenment, and then, by applying their own special, specifically philosophical, truth-acquiring methods, underwrite our belief that Europe’s chances of acquiring truth were increased by those events. They can do all this, presumably, without falling back into what Williams scorns as ‘the rationalistic theory of rationality’.

Williams seems to believe that analytic philosophers have scrubbed metaphysics and epistemology clean of Platonism, and are now in a position to explain what makes various classes of propositions true. If there really were such explanations, then our spade would not be turned where the pragmatists think it is. But of course we who are labelled ‘deniers of truth’ do not think there are. We think the sort of metaphysics and epistemology currently practised by analytic philosophers is just as fantastical and futile as Plato’s Theory of Forms and Locke’s notion of simple ideas.

As far as I can see, Williams’s criticism of ‘the indistinguishability argument’ stands or falls with the claim that analytic philosophers really can do the wonderful things he tells us they can – that they are not just hard-working public relations agents for contemporary institutions and practices, but independent experts whose endorsement of our present ways of justifying beliefs is based on a superior knowledge of what it is for various propositions to be true. Williams would have had a hard time convincing Nietzsche, Dewey or the later Wittgenstein that they had any such knowledge.

The historical portion shows Williams at his best – not arguing with other philosophers, but rather, in the manner of Isaiah Berlin, helping us understand the changes in the human self-image that have produced our present institutions, intuitions and problems.

[...]

He concedes to Foucault that ‘the “force of reason” can hardly be separated altogether from the power of persuasion, and, as the ancient Greeks well knew, the power of persuasion, however benignly or rationally exercised, is still a species of power.’ Williams’s appreciation of this Nietzschean point makes him wary of the Habermasian idea of ‘the force of the better argument’, and leads him to conclude the chapter by saying ‘It is not foolish to believe that any social and political order which effectively uses power, and which sustains a culture that means something to the people who live in it, must involve opacity, mystification and large-scale deception.’

[...]

Williams has to work hard here to concede just enough to the opposition, but not too much. He needs carefully to distinguish between justified Nietzschean and Foucauldian suspicions about the supporting stories, and unjustified contempt for the Enlightenment’s political hopes. In making this distinction, he takes on the same complicated and delicate assignment previously attempted by Dewey, Weber and many others. He wants to show us how to combine Nietzschean intellectual honesty and maturity with political liberalism – to keep on striving for liberty, equality and fraternity in a totally disenchanted, completely de-Platonised intellectual world.

The prospect of such a world would have appalled Kant, whose defence of the French Revolution was closely linked to his ‘rationalistic theory of rationality’. Kant is the philosopher to whom such contemporary liberals as Rawls and Habermas ask us to remain faithful. Williams, by contrast, turns his back on Kant. So did Dewey. The similarity between Dewey’s and Williams’s conceptions of the desirable self-image for heirs of the Enlightenment is, in fact, very great, so I am all the more puzzled by his hostility to pragmatism in the first half of his book.

https://www.lrb.co.uk/the-paper/v24/n21/richard-rorty/to-the-sunlit-uplands

Nietzsche on love of truth, life, and perhaps even cultivating the species

Is it any wonder that we finally grow suspicious, lose patience, turn round impatiently? That we learn from this Sphinx how to pose questions of our own? Who is actually asking us the questions here? What is it in us that really wants to 'get at the truth'?

It is true that we paused for a long time to question the origin of this will, until finally we came to a complete stop at an even more basic question. We asked about the value of this will. Given that we want truth: why do we not prefer untruth? And uncertainty? Even ignorance?

—Beyond Good and Evil, On the Prejudices of Philosophers.

There are some things we now know too well, we knowing ones: oh, how we nowadays learn as artists to forget well, to be good at not knowing! And as for our future, one will hardly find us again on the paths of those Egyptian youths who make temples unsafe at night, embrace statues, and want by all means to unveil, uncover, and put into a bright light whatever is kept concealed for good reasons. No, we have grown sick of this bad taste, this will to truth, to 'truth at any price', this youthful madness in the love of truth: we are too experienced, too serious, too jovial, too burned, too deep for that . . . We no longer believe that truth remains truth when one pulls off the veil; we have lived too much to believe this. Today we consider it a matter of decency not to wish to see everything naked, to be present everywhere, to understand and 'know' everything.

—The Gay Science, Preface

We do not object to a judgement just because it is false; this is probably what is strangest about our new language. The question is rather to what extent the judgement furthers life, preserves life, preserves the species, perhaps even cultivates the species; and we are in principle inclined to claim that judgements that are the most false (among which are the synthetic a priori judgements)* are the most indispensable to us, that man could not live without accepting logical fictions, without measuring reality by the purely invented world of the unconditional, self-referential, without a continual falsification of the world by means of the number—that to give up false judgements would be to give up life, to deny life. Admitting untruth as a condition of life: that means to resist familiar values in a dangerous way; and a philosophy that dares this has already placed itself beyond good and evil.

—Beyond Good and Evil, On the Prejudices of Philosophers.

Ash Milton on meritocratic elite selection

The middle class thinks in terms of money. Their personal status is not secure. You don't know quite what car you're going to get, what school you'll be able to afford.

If you have your entire society operating with a middle class brain you'll have a society that feels itself under threat all the time. A strong social climber mentality but not really on the ladders that matter—mainly on signalling ladders. It'll be a society that does not think in terms of whole-of-society advancement, but rather in terms of personal advancement.

[...]

An actually good elite system is based on privilege. You have some people who don't have to worry about their personal standing. And they are able therefore to use that position to worry about society. They are trained to worry on behalf of society.

https://palladiummag.com/2021/09/18/palladium-podcast-64-the-cultivation-of-elites/

Tetlock vs King & Kay on probabilistic reasoning

A cartoon dialog:

Philip Tetlock: "Vague verbiage is a big problem."

Mervyn King & John Kay: "Over-confidence and false-precision is a big problem."

Philip Tetlock: "Calibration training and explicitly quantifying credences makes decisions better on average."

Mervyn King & John Kay: "Explicitly quantifying credences makes decisions worse on average."

Is there more to it than this?

King and Kay sometimes speak as though there is a deep theoretical disagreement. But I don't see it.

I have a hard time conceiving of beliefs that are not accompanied by implicit credences. So when someone says "you can't put a subjective probability on that belief", my first thought is: "do I have a choice?". I can choose whether to explicitly state a point estimate or confidence interval, but if I decide against, my mind will still attach a confidence level to the belief.

In the book "Radical Uncertainty", King and Kay discuss Barack Obama's decision to authorise the operation that killed Bin Laden:

We do not know whether Obama walked into the fateful meeting with a prior probability in his mind: we hope not. He sat and listened to conflicting accounts and evidence until he felt he had enough information – knowing that he could expect only limited and imperfect information – to make a decision. That is how good decisions are made in a world of radical uncertainty, as decision-makers wrestle with the question ‘What is going on here?’

My first reaction is: of course Obama had prior probabilities in his mind. If he didn't, that'd be a state of total ignorance, which isn't something we should hope for.

King and Kay's later emphasis on "What is going on here?" makes me think that what they really mean is that they don't want Obama to come into the meeting blinded by overconfidence (whether that overconfidence stems from a quantitative model, or—I would add—from a non-quantitative reference narrative).

When it comes down to it, I think King and Kay should basically just be read as stressing the danger of what they call "The Viniar Problem":

The mistake of believing you have more knowledge than you do about the real world from the application of conclusions from artificial models

They say this problem "runs deep". They think that The Viniar Problem is often caused or made worse by attempts to think in terms of maximising expected value:

To pretend to optimise in a world of radical uncertainty, it is necessary to make simplifying assumptions about the real world. If these assumptions are wrong—as in a world of radical uncertainty they are almost certain to be—optimisation yields the wrong results, just as someone searching for their keys under the streetlamp because that is where the light is best makes the error of substituting a well-defined but irrelevant problem for the less well-defined problem he actually faces.

Bayesian decision theory tells us what it means to make an ideal decision. But it does not tell us what decision procedures humans should use in a given situation. It certainly does not tell us we should go around thinking about maximising expected value most of the time. And it's quite compatible with the view that, in practice, we always need to employ rules of thumb and irreducibly non-quantitative judgement calls at some level.

It's true that expected value reasoning is difficult and easy to mess up. But King and Kay sometimes seem to say, contra Annie Duke, that people should never attempt to "think in bets" when it comes to large worlds (viz. complicated real-world situations).

This claim is too strong. Calibration training seems to work, and superforecasters are impressive, even if—per Tim Harford—it'd be better to call them "less terrible forecasters".

So sure—we are not cognitive angels, but serious concern about The Viniar Problem does not entail never putting a probability on a highly uncertain event.

Later, King and Kay write:

We cannot act in anticipation of every unlikely possibility. So we select the unlikely events to monitor. Not by some metarational calculation which considers all these remote possibilities and calculates their relative importance, but by using our judgement and experience.

And my thought is: yes, of course.

A good Bayesian never thinks they have the final answer: they continually ask: "what is going on here?", "why is this wrong?", "what does my belief predict?", "how should I update on this?". They usually entertain several plausible perspectives, sometimes perspectives that are in direct tension with each other. And yes, they make meta-rational weighting decisions based on judgement, experience, rules of thumb, animal spirits—not conscious calculation.
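
As a concrete gloss on "how should I update on this?", here is a minimal sketch of the underlying arithmetic. The function name and all the numbers are my own invented illustration, not anything King and Kay or Tetlock propose:

```python
# A one-line Bayes update: posterior odds from a prior and the
# likelihood of the evidence under each hypothesis.
# All numbers below are invented purely for illustration.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# A 30% prior, with evidence three times likelier if the hypothesis
# is true than if it is false, lands at roughly 56%:
posterior = bayes_update(0.30, 0.60, 0.20)
print(posterior)  # approximately 0.56
```

The point is only that the mechanical step is trivial; the judgement calls live in choosing the hypothesis, the prior, and the likelihoods.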

All this seems compatible with the practice of putting numbers on highly uncertain events.

In a podcast interview, Joseph Walker asked John Kay to comment on Graham Allison's famous prediction in Destined for War?. Looking at the historical record, Allison found that in 12 of 16 cases over the past 500 years in which a rising power challenged an established power, the result was war. On that basis he says that war between the US and China is "more likely than not". Kay's comment:

I think that's a reasonable way of framing it. What people certainly should not do is say that the probability of war between these two countries is 0.75, i.e. 12 / 16. We distinguish between likelihood, which is essentially an ordinal variable, and probability, which is cardinal.

Wait... so I'm allowed to say "more likely than not", but not "greater than 50%" or "greater than 1 / 2"?
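
For what it's worth, a Bayesian who shared Kay's worry about naively equating 12/16 with 0.75 could still produce a cardinal number that hedges against the small sample. A minimal sketch, using Laplace's rule of succession (my own illustration; neither Allison nor Kay performs this calculation):

```python
# Laplace's rule of succession: a simple hedge against reading a
# small historical sample as an exact frequency.
# Illustrative only -- not a claim about the actual probability of war.

def laplace_estimate(successes: int, trials: int) -> float:
    """Posterior mean of a uniform Beta(1, 1) prior updated on the record."""
    return (successes + 1) / (trials + 2)

p_war = laplace_estimate(12, 16)
print(round(p_war, 3))  # 0.722, a little below the naive 12/16 = 0.75
```

The output is still "more likely than not", but the sample size now does some explicit work, which seems closer in spirit to Kay's caution than an outright ban on numbers.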

And... what is this distinction between "ordinal" likelihood and "cardinal" probability? I searched their book for the words "ordinal" and "cardinal" and found... zero mentions. And the passages that include "likelihood" were not illuminating. The closest thing I could find (but not make sense of):

Discussion of uncertainty involves several different ideas. Frequency – I believe a fair coin falls heads 50% of the time, because theory and repeated observation support this claim. Confidence – I am pretty sure the Yucatán event caused the extinction of the dinosaurs, because I have reviewed the evidence and the views of respected sources. And likelihood – it is not likely that James Joyce and Lenin met, because one was an Irish novelist and the other a Russian revolutionary. My knowledge of the world suggests that, especially before the global elite jetted together into Davos, the paths of two individuals of very disparate nationalities, backgrounds and aspirations would not cross.

In the context of frequencies drawn from a stationary distribution, probability has a clear and objective meaning. When expressing confidence in their judgement, people often talk about probabilities but it is not clear how the numbers they provide relate to the frequentist probabilities identified by Fermat and Pascal. When they ask whether Joyce met Lenin, the use of numerical probability is nonsensical.

The Bayesian would say that expressing 50% confidence in a belief is equivalent to saying "in this kind of domain, when I analyse the reasoning and evidence I've seen, beliefs like this one will be true about half of the time".

A skeptical response might insist on radical particularism (i.e. every belief is formed in a different domain, so it's impossible to build up a relevant track record). I think the performance of superforecasters disproves this.
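
The notion of a track record above can be made concrete. Here is a minimal calibration-check sketch; the function name and the data format are my own assumptions, not from Tetlock's training materials:

```python
# A minimal calibration check: bucket stated credences and compare each
# bucket's average forecast with its observed hit rate. A well-calibrated
# forecaster's 50% claims come true about half the time, and so on.

from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: iterable of (stated_probability, outcome_bool) pairs.

    Returns {bucket: (mean_forecast, hit_rate, count)}, bucketed to 0.1.
    """
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[round(p, 1)].append((p, outcome))
    table = {}
    for key, items in sorted(buckets.items()):
        mean_p = sum(p for p, _ in items) / len(items)
        hit_rate = sum(o for _, o in items) / len(items)
        table[key] = (mean_p, hit_rate, len(items))
    return table

# Invented example data: four 80% forecasts (three came true) and
# two 50% forecasts (one came true).
records = [(0.8, True), (0.8, True), (0.8, False), (0.8, True),
           (0.5, True), (0.5, False)]
print(calibration_table(records))
```

With real forecasting records in place of the toy data, comparing the two columns is exactly the "relevant track record" the radical particularist denies is possible.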

Overall, my current take is: King and Kay offer a useful warning about the perils of probabilistic reasoning in practice, based on their decades of experience. But the discussion strikes me as hyperbolic, confusing and confused. Am I missing something?

Keynes on Economics and the Limits of Decision Theory

We should not conclude from this that everything depends on waves of irrational psychology. On the contrary, the state of long-term expectation is often steady, and, even when it is not, the other factors exert their compensating effects. We are merely reminding ourselves that human decisions affecting the future, whether personal or political or economic, cannot depend on strict mathematical expectation, since the basis for making such calculations does not exist; and that it is our innate urge to activity which makes the wheels go round, our rational selves choosing between the alternatives as best we are able, calculating where we can, but often falling back for our motive on whim or sentiment or chance.

[...]

Even apart from the instability due to speculation, there is the instability due to the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectations, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as the result of animal spirits—a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.

The General Theory of Employment, Interest and Money: Chapter 12

Trial and error is great until it kills you

Preserving the institutions that correct errors is more important than getting it right first time.

—David Deutsch

Aus der Kriegsschule des Lebens—was mich nicht umbringt, macht mich stärker.

—Friedrich Nietzsche

David Deutsch, channelling Popper, is right to stress the importance of error correction. But I really hope it is not the only way we can learn. Because sometimes we face "one shot" problems, where we need to get it right first time.

As individuals, if we make a fatal mistake, it's bad for us, but it's not the end of humanity. As a culture and as a species, we "learn" from the mistakes of individuals; our norms and genomes evolve. And by this mechanism, our descendants become less likely—individually—to make fatal mistakes when faced with "one shot" problems. Centrally: individuals develop an ability to detect and avoid situations that involve risk of ruin.

On the most disturbing read, the Vulnerable World Hypothesis involves a claim that we are approaching one or more "one shot" problems, at the species level. If we err, we wipe ourselves out—we don't get a chance to try again.

If we are on track to develop technologies that generate "one shot" extinction risks, it seems clear that "trial and error" isn't a sustainable strategy. We probably need to develop our ability—as a species—to detect and steer away from situations that involve risk of ruin. And it'd be nice to do this by design—not by species-level selection.

Robin Hanson on the future as reality

The future is not the realization of our hopes and dreams, a warning to mend our ways, an adventure to inspire us, nor a romance to touch our hearts. The future is just another place in space-time. Its residents, like us, find their world mundane and morally ambiguous.

[...]

New habits and attitudes result less than you think from moral progress, and more from people adapting to new situations. So many of your descendants’ strange habits and attitudes are likely to violate your concepts of moral progress; what they do may often seem wrong. Also, you likely won’t be able to easily categorize many future ways as either good or evil; they will instead just seem weird. After all, your world hardly fits the morality tales your distant ancestors told; to them you’d just seem weird. Complex realities frustrate simple summaries, and don’t fit simple morality tales.

The Age of Em: Introduction

Nietzsche's mountain

Those who can breathe the air of my writings know that it is an air of the heights, a strong air. One must be made for it. Otherwise there is no small danger one may catch cold in it. The ice is near, the solitude tremendous . . . Philosophy, as I have so far understood and lived it, means living voluntarily among ice and high mountains—seeking out everything strange and questionable in existence, everything so far placed under a ban by morality.

—Nietzsche, Preface to Ecce Homo

Mountains and summer house in the Westfjords, Iceland

Nietzsche wasn't climbing Parfit's mountain

Andrew Huddleston offers a good treatment of Parfit on Nietzsche.

My summary:

  • Parfit really wants convergence because he is committed to moral intuitionism in the tradition of Sidgwick. On this picture, epistemic peers in ideal conditions should converge on a cluster of "self-evident" normative axioms—on pain of intractable disagreement about whose "self-evident" intuitions to trust.
    • Compare mathematics, geometry and logic, where there does seem to be a common core of widely shared "self-evident" intuitions that can be taken as axioms (even though non-Western thought has some variations).
  • Parfit takes Nietzsche seriously, treating him as an epistemic peer on a par with, e.g., Kant. So he needs either to dissolve merely apparent disagreement, or to explain Nietzsche's disagreement in terms of non-ideal epistemic conditions or mistakes that Nietzsche would have recognised as such, had they been pointed out.
  • Parfit tries to paint Nietzsche as a mixture of (a) more in agreement than he seems and (b) making a couple of basic mistakes. To do this, he strawmans Nietzsche rather badly, making heavy use of unpublished journal fragments (an approach which strikes me as odd... to the point of underhand?).
  • Parfit doesn't give a plausible account of Nietzsche's normative views (anti-egalitarianism; suffering sometimes non-instrumentally good) or his meta-axiological views (Huddleston thinks they're underdetermined by Nietzsche's writings, but Nietzsche definitely didn't hold the "normativity requires God" thesis which Parfit attributes to him).

Nick Bostrom on the influence of moral prophets

It is not too soon to call for practitioners to express a commitment to safety, including endorsing the common good principle and promising to ramp up safety if and when the prospect of machine superintelligence begins to look more imminent. Pious words are not sufficient and will not by themselves make a dangerous technology safe: but where the mouth goeth, the mind might gradually follow.

Superintelligence, Chapter 15: Crunch Time

And out of his mouth goeth a sharp sword, that with it he should smite the nations: and he shall rule them with a rod of iron: and he treadeth the winepress of the fierceness and wrath of Almighty God. And he hath on his vesture and on his thigh a name written, KING OF KINGS, AND LORD OF LORDS.

—Revelation 19:15

Nick Bostrom on crucial considerations

Strategic analysis is especially needful when we are radically uncertain not just about some detail of some peripheral matter but about the cardinal qualities of the central things. For many key parameters, we are radically uncertain even about their sign—that is, we know not which direction of change would be desirable and which undesirable. Our ignorance might not be irremediable. The field has been little prospected, and glimmering strategic insights could still be awaiting their unearthing just a few feet beneath the surface.

What we mean by “strategic analysis” here is a search for crucial considerations: ideas or arguments with the potential to change our views not merely about the fine-structure of implementation but about the general topology of desirability. Even a single missed crucial consideration could vitiate our most valiant efforts or render them as actively harmful as those of a soldier who is fighting on the wrong side. The search for crucial considerations (which must explore normative as well as descriptive issues) will often require crisscrossing the boundaries between different academic disciplines and other fields of knowledge. As there is no established methodology for how to go about this kind of research, difficult original thinking is necessary.

Superintelligence, Chapter 15: Crunch Time

Holden Karnofsky on bounded commensurability as a way to get "ahead of the curve" on moral values

At Open Philanthropy, we like to consider very hard-core theoretical arguments, try to pull the insight from them, and then do our compromising after that.

And so, there is a case to be made that if you’re trying to do something to help people and you’re choosing between different things you might spend money on to help people, you need to be able to give a consistent conversion ratio between any two things.

So let’s say you might spend money distributing bed nets to fight malaria. You might spend money [on deworming, i.e.] getting children treated for intestinal parasites. And you might think that the bed nets are twice as valuable as the dewormings. Or you might think they’re five times as valuable or half as valuable or ⅕ or 100 times as valuable or 1/100. But there has to be some consistent number for valuing the two.

And there is an argument that if you’re not doing it that way, it’s kind of a tell that you’re being a feel-good donor, that you’re making yourself feel good by doing a little bit of everything, instead of focusing your giving on others, on being other-centered, focusing on the impact of your actions on others, [where in theory it seems] that you should have these consistent ratios.

So with that backdrop in mind, we’re sitting here trying to spend money to do as much good as possible. And someone will come to us with an argument that says, hey, there are so many animals being horribly mistreated on factory farms and you can help them so cheaply that even if you value animals at 1 percent as valuable as humans to help, that implies you should put all your money into helping animals.

On the other hand, if you value [animals] less than that, let’s say you value them a millionth as much, you should put none of your money into helping animals and just completely ignore what’s going on on factory farms, even though a small amount of your budget could be transformative.

So that’s a weird state to be in. And then, there’s an argument that goes [...] if you can do things that can help all of the future generations, for example, by reducing the odds that humanity goes extinct, then you’re helping even more people. And that could be some ridiculous cosmic number, like a trillion, trillion, trillion, trillion, trillion lives or something like that. And it leaves you in this really weird conundrum, where you’re kind of choosing between being all in on one thing and all in on another thing.

And Open Philanthropy just doesn’t want to be the kind of organization that does that, that lands there. And so we divide our giving into different buckets. And each bucket will kind of take a different worldview or will act on a different ethical framework. So there is a bucket of money that is kind of deliberately acting as though it takes the farm animal point really seriously, as though it believes what a lot of animal advocates believe, which is that we’ll look back someday and say, this was a huge moral error. We should have cared much more about animals than we do. Suffering is suffering. And this whole way we treat this enormous amount of animals on factory farms is an enormously bigger deal than anyone today is acting like it is. And then there’ll be another bucket of money that says: "animals? That’s not what we’re doing. We’re trying to help humans."

And so you have these two buckets of money that have different philosophies and are following them down different paths. And that just stops us from being the kind of organization that is stuck with one framework, stuck with one kind of activity.

[...]

If you start to try to put numbers side by side, you do get to this point where you say, yeah, if you value a chicken 1 percent as much as a human, you really are doing a lot more good by funding these corporate campaigns than even by funding the [anti-malarial] bed nets. And [bed nets are] better than most things you can do to help humans. Well, then, the question is, OK, but do I value chickens 1 percent as much as humans? 0.1 percent? 0.01 percent? How do you know that?

And one answer is we don’t. We have absolutely no idea. The entire question of what is it that we’re going to think 100,000 years from now about how we should have been treating chickens in this time, that’s just a hard thing to know. I sometimes call this the problem of applied ethics, where I’m sitting here, trying to decide how to spend money or how to spend scarce resources. And if I follow the moral norms of my time, based on history, it looks like a really good chance that future people will look back on me as a moral monster.

But one way of thinking about it is just to say, well, if we have no idea, maybe there’s a decent chance that we’ll actually decide we had this all wrong, and we should care about chickens just as much as humans. Or maybe we should care about them more because humans have more psychological defense mechanisms for dealing with pain. We may have slower internal clocks. A minute to us might feel like several minutes to a chicken.

So if you have no idea where things are going, then you may want to account for that uncertainty, and you may want to hedge your bets and say, if we have a chance to help absurd numbers of chickens, maybe we will look back and say, actually, that was an incredibly important thing to be doing.

EZRA KLEIN: [...] So I’m vegan. Except for some lab-grown chicken meat, I’ve not eaten chicken in 10, 15 years now — quite a long time. And yet, even I sit here, when you’re saying, should we value a chicken 1 percent as much as a human, I’m like: "ooh, I don’t like that".

To your point about what our ethical frameworks of the time do and that possibly an Open Philanthropy comparative advantage is being willing to consider things that we are taught even to feel a little bit repulsive considering—how do you think about those moments? How do you think about the backlash that can come? How do you think about when maybe the mores of a time have something to tell you within them, that maybe you shouldn’t be worrying about chicken when there are this many people starving across the world? How do you think about that set of questions?

HOLDEN KARNOFSKY: I think it’s a tough balancing act because on one hand, I believe there are approaches to ethics that do have a decent chance of getting you a more principled answer that’s more likely to hold up a long time from now. But at the same time, I agree with you that even though following the norms of your time is certainly not a safe thing to do and has led to a lot of horrible things in the past, I’m definitely nervous to do things that are too out of line with what the rest of the world is doing and thinking.

And so we compromise. And that comes back to the idea of worldview diversification. So I think if Open Philanthropy were to declare, here’s the value on chickens versus humans, and therefore, all the money is going to farm animal welfare, I would not like that. That would make me uncomfortable. And we haven’t done that. And on the other hand, let’s say you can spend 10 percent of your budget and be the largest funder of farm animal welfare in the world and be completely transformative.

And in that world where we look back, that potential hypothetical future world where we look back and said, gosh, we had this all wrong — we should have really cared about chickens — you were the biggest funder, are you going to leave that opportunity on the table? And that’s where worldview diversification comes in, where it says, we should take opportunities to do enormous amounts of good, according to a plausible ethical framework. And that’s not the same thing as being a fanatic and saying, I figured it all out. I’ve done the math. I know what’s up. Because that’s not something I think.

[...]

There can be this vibe coming out of when you read stuff in the effective altruist circles that kind of feels like [...] it’s trying to be as weird as possible. It’s being completely hard-core, uncompromising, wanting to use one consistent ethical framework wherever the heck it takes you. That’s not really something I believe in. It’s not something that Open Philanthropy or most of the people that I interact with as effective altruists tend to believe in.

And so, what I believe in doing and what I like to do is to really deeply understand theoretical frameworks that can offer insight, that can open my mind, that I think give me the best shot I’m ever going to have at being ahead of the curve on ethics, at being someone whose decisions look good in hindsight instead of just following the norms of my time, which might look horrible and monstrous in hindsight. But I have limits to everything. Most of the people I know have limits to everything, and I do think that is how effective altruists usually behave in practice and certainly how I think they should.

[...]

I also just want to endorse the meta principle of just saying, it’s OK to have a limit. It’s OK to stop. It’s a reflective equilibrium game. So what I try to do is I try to entertain these rigorous philosophical frameworks. And sometimes it leads to me really changing my mind about something by really reflecting on, hey, if I did have to have a number on caring about animals versus caring about humans, what would it be?

And just thinking about that, I’ve just kind of come around to thinking, I don’t know what the number is, but I know that the way animals are treated on factory farms is just inexcusable. And it’s just brought my attention to that. So I land on a lot of things that I end up being glad I thought about. And I think it helps widen my thinking, open my mind, make me more able to have unconventional thoughts. But it’s also OK to just draw a line [...] and say, that’s too much. I’m not convinced. I’m not going there. And that’s something I do every day.

https://www.nytimes.com/2021/10/05/podcasts/transcript-ezra-klein-interviews-holden-karnofsky.html