Nietzsche on love of truth, life, and perhaps even cultivating the species
Is it any wonder that we finally grow suspicious, lose patience, turn round impatiently? That we learn from this Sphinx how to pose questions of our own? Who is actually asking us the questions here? What is it in us that really wants to ‘get at the truth’?
It is true that we paused for a long time to question the origin of this will, until finally we came to a complete stop at an even more basic question. We asked about the value of this will. Given that we want truth: why do we not prefer untruth? And uncertainty? Even ignorance?
—Beyond Good and Evil, On the Prejudices of Philosophers.
There are some things we now know too well, we knowing ones: oh, how we nowadays learn as artists to forget well, to be good at not knowing! And as for our future, one will hardly find us again on the paths of those Egyptian youths who make temples unsafe at night, embrace statues, and want by all means to unveil, uncover, and put into a bright light whatever is kept concealed for good reasons. No, we have grown sick of this bad taste, this will to truth, to ‘truth at any price’, this youthful madness in the love of truth: we are too experienced, too serious, too jovial, too burned, too deep for that . . . We no longer believe that truth remains truth when one pulls off the veil; we have lived too much to believe this. Today we consider it a matter of decency not to wish to see everything naked, to be present everywhere, to understand and ‘know’ everything.
—The Gay Science, Preface
We do not object to a judgement just because it is false; this is probably what is strangest about our new language. The question is rather to what extent the judgement furthers life, preserves life, preserves the species, perhaps even cultivates the species; and we are in principle inclined to claim that judgements that are the most false (among which are the synthetic a priori judgements) are the most indispensable to us, that man could not live without accepting logical fictions, without measuring reality by the purely invented world of the unconditional, self-referential, without a continual falsification of the world by means of numbers; that to give up false judgements would be to give up life, to deny life. Admitting untruth as a condition of life: that means to resist familiar values in a dangerous way; and a philosophy that dares this has already placed itself beyond good and evil.
—Beyond Good and Evil, On the Prejudices of Philosophers.
Tyler Cowen on Builders vs Nervous Nellies
One of my favourite @tylercowen quotes from my 2020 conversation with him: https://t.co/FvfR3avhx4
— Joseph Walker (@JosephNWalker) October 25, 2021
Thanks to @peterhartree for reminding me of it! pic.twitter.com/FAvZV8Fez3
And the tweet made it to Tuesday assorted links.
Ash Milton on meritocratic elite selection
The middle class thinks in terms of money. Their personal status is not secure. You don’t know quite what car you’re going to get, what school you’ll be able to afford.
If you have your entire society operating with a middle class brain you’ll have a society that feels itself under threat all the time. A strong social climber mentality but not really on the ladders that matter—mainly on signalling ladders. It’ll be a society that does not think in terms of whole of society advancement, but rather in terms of personal advancement.
[…]
An actually good elite system is based on privilege. You have some people who don’t have to worry about their personal standing. And they are able therefore to use that position to worry about society. They are trained to worry on behalf of society.
https://palladiummag.com/2021/09/18/palladium-podcast-64-the-cultivation-of-elites/
Tetlock vs King & Kay on probabilistic reasoning
A cartoon dialogue:
Philip Tetlock: “Vague verbiage is a big problem.”
Mervyn King & John Kay: “Over-confidence and false-precision is a big problem.”
Philip Tetlock: “Calibration training and explicitly quantifying credences makes decisions better on average.”
Mervyn King & John Kay: “Explicitly quantifying credences makes decisions worse on average.”
Is there more to it than this?
King and Kay sometimes speak as though there is a deep theoretical disagreement. But I don’t see it.
I have a hard time conceiving of beliefs that are not accompanied by implicit credences. So when someone says “you can’t put a subjective probability on that belief”, my first thought is: “do I have a choice?”. I can choose whether to explicitly state a point estimate or confidence interval, but if I decide against, my mind will still attach a confidence level to the belief.
In the book “Radical Uncertainty”, King and Kay discuss Barack Obama’s decision to authorise the operation that killed Bin Laden:
We do not know whether Obama walked into the fateful meeting with a prior probability in his mind: we hope not. He sat and listened to conflicting accounts and evidence until he felt he had enough information — knowing that he could expect only limited and imperfect information — to make a decision. That is how good decisions are made in a world of radical uncertainty, as decision-makers wrestle with the question ‘What is going on here?’
My first reaction is: of course Obama had prior probabilities in his mind. If he didn’t, that’d be a state of total ignorance, which isn’t something we should hope for.
King and Kay’s later emphasis on “What is going on here?” makes me think that what they really mean is that they don’t want Obama to come into the meeting blinded by over-confidence (whether that over-confidence stems from a quantitative model, or—I would add—from a non-quantitative reference narrative).
When it comes down to it, I think King and Kay should basically just be read as stressing the danger of what they call “The Viniar Problem”:
The mistake of believing you have more knowledge than you do about the real world from the application of conclusions from artificial models
They say this problem “runs deep”. They think that The Viniar Problem is often caused or made worse by attempts to think in terms of maximising expected value:
To pretend to optimise in a world of radical uncertainty, it is necessary to make simplifying assumptions about the real world. If these assumptions are wrong—as in a world of radical uncertainty they are almost certain to be—optimisation yields the wrong results, just as someone searching for their keys under the streetlamp because that is where the light is best makes the error of substituting a well-defined but irrelevant problem for the less well-defined problem he actually faces.
Bayesian decision theory tells us what it means to make an ideal decision. But it does not tell us what decision procedures humans should use in a given situation. It certainly does not tell us we should go around thinking about maximising expected value most of the time. And it’s quite compatible with the view that, in practice, we always need to employ rules of thumb and irreducibly non-quantitative judgement calls at some level.
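To make the worry concrete, here is a minimal sketch of how an expected-value ranking can flip when a single modelling assumption shifts. The payoffs and probabilities are entirely invented for illustration:

```python
# Illustrative only: invented payoffs and probabilities.
# Two actions compared by expected value under a simple two-state model.

def expected_value(payoffs, probs):
    """Weighted average of payoffs by their assumed probabilities."""
    return sum(p * v for p, v in zip(probs, payoffs))

payoffs_a = [100, -90]  # action A: big gain in the good state, big loss otherwise
payoffs_b = [40, 10]    # action B: modest gain either way

for p_good in (0.7, 0.6):  # two guesses at the probability of the good state
    probs = [p_good, 1 - p_good]
    ev_a = expected_value(payoffs_a, probs)
    ev_b = expected_value(payoffs_b, probs)
    best = "A" if ev_a > ev_b else "B"
    print(f"p(good)={p_good:.1f}: EV(A)={ev_a:.0f}, EV(B)={ev_b:.0f} -> choose {best}")
```

A ten-point shift in one assumed probability reverses the recommendation. When that probability is itself a guess, presenting the output as “the” optimum is The Viniar Problem in miniature.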
It’s true that expected value reasoning is difficult and easy to mess up. But King and Kay sometimes seem to say, contra Annie Duke, that people should never attempt to “think in bets” when it comes to large worlds (viz. complicated real-world situations).
This claim is too strong. Calibration training seems to work, and superforecasters are impressive, even if—per Tim Harford—it’d be better to call them “less terrible forecasters”.
So sure, we are not cognitive angels. But serious concern about The Viniar Problem does not entail that we should never put a probability on a highly uncertain event.
Later, King and Kay write:
We cannot act in anticipation of every unlikely possibility. So we select the unlikely events to monitor. Not by some metarational calculation which considers all these remote possibilities and calculates their relative importance, but by using our judgement and experience.
And my thought is: yes, of course.
A good Bayesian never thinks they have the final answer: they continually ask: “what is going on here?”, “why is this wrong?”, “what does my belief predict?”, “how should I update on this?”. They usually entertain several plausible perspectives, sometimes perspectives that are in direct tension with each other. And yes, they make meta-rational weighting decisions based on judgement, experience, rule of thumb, animal spirits—not conscious calculation.
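For concreteness, the “how should I update on this?” step is just Bayes’ rule. A toy sketch, with invented numbers:

```python
# Toy Bayes update with invented numbers.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

credence = 0.30                        # prior credence in the hypothesis
credence = update(credence, 0.9, 0.2)  # evidence 4.5x likelier if hypothesis is true
print(f"posterior credence: {credence:.2f}")  # ~0.66
```

The arithmetic is explicit, but the inputs are not: the prior and the likelihoods remain judgement calls.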
All this seems compatible with the practice of putting numbers on highly uncertain events.
In a podcast interview, Joseph Walker asked John Kay to comment on Graham Allison’s famous prediction in Destined for War. Looking at the historical record, Allison found that in 12 / 16 cases over the past 500 years, when a rising power challenged an established power, the result was war. On that basis, he says that war between the US and China is “more likely than not”. Kay’s comment:
I think that’s a reasonable way of framing it. What people certainly should not do is say that the probability of war between these two countries is 0.75, i.e. 12 / 16. We distinguish between likelihood, which is essentially an ordinal variable, and probability, which is cardinal.
Wait… so I’m allowed to say “more likely than not”, but not “greater than 50%” or “greater than 1 / 2”?
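For what it’s worth, a Bayesian has a standard way to use Allison’s 12 / 16 without committing to a sharp 0.75: treat the historical record as draws from an uncertain rate and report a range rather than a point. A minimal sketch, assuming a uniform Beta(1, 1) prior and, heroically, that the sixteen historical cases are exchangeable with the US–China case:

```python
# Minimal sketch: a Beta-Binomial treatment of Allison's 12/16 base rate.
# Assumes a uniform Beta(1, 1) prior and that the historical cases are
# exchangeable with the present one; both assumptions are contestable.
from scipy.stats import beta

wars, cases = 12, 16
posterior = beta(1 + wars, 1 + cases - wars)  # Beta(13, 5)

print(f"posterior mean: {posterior.mean():.2f}")  # ~0.72
lo, hi = posterior.ppf(0.05), posterior.ppf(0.95)
print(f"90% credible interval: ({lo:.2f}, {hi:.2f})")  # roughly (0.5, 0.9)
```

On this treatment the honest summary is “probably more likely than not, with wide error bars”, which does not seem worse than the ordinal phrasing Kay permits.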
And… what is this distinction between “ordinal” likelihood and “cardinal” probability? I searched their book for the words “ordinal” and “cardinal” and found… zero mentions. And the passages that include “likelihood” were not illuminating. The closest thing I could find (but could not make sense of):
Discussion of uncertainty involves several different ideas. Frequency — I believe a fair coin falls heads 50% of the time, because theory and repeated observation support this claim. Confidence — I am pretty sure the Yucatán event caused the extinction of the dinosaurs, because I have reviewed the evidence and the views of respected sources. And likelihood — it is not likely that James Joyce and Lenin met, because one was an Irish novelist and the other a Russian revolutionary. My knowledge of the world suggests that, especially before the global elite jetted together into Davos, the paths of two individuals of very disparate nationalities, backgrounds and aspirations would not cross.
In the context of frequencies drawn from a stationary distribution, probability has a clear and objective meaning. When expressing confidence in their judgement, people often talk about probabilities but it is not clear how the numbers they provide relate to the frequentist probabilities identified by Fermat and Pascal. When they ask whether Joyce met Lenin, the use of numerical probability is nonsensical.
The Bayesian would say that expressing 50% confidence in a belief is equivalent to saying “in this kind of domain, when I analyse the reasoning and evidence I’ve seen, beliefs like this one will be true about half of the time”.
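That claim is checkable, at least in principle. A minimal sketch of the check, over an invented track record: bucket past forecasts by stated credence and compare each bucket’s stated probability with the realised frequency.

```python
# Minimal calibration check over an invented forecast track record.
# Each record is (stated probability, whether the claim turned out true).
from collections import defaultdict

track_record = [
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.8, True), (0.8, True), (0.8, False), (0.8, True),
]

buckets = defaultdict(list)
for stated_p, outcome in track_record:
    buckets[stated_p].append(outcome)

for stated_p in sorted(buckets):
    outcomes = buckets[stated_p]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated_p:.0%}: true {observed:.0%} of {len(outcomes)} claims")
```

A forecaster is well calibrated when the stated and observed numbers match. This is what calibration training and the forecasting tournaments measure.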
A sceptical response might insist on radical particularism (i.e. every belief is formed in a different domain, so it’s impossible to build up a relevant track record). I think the performance of superforecasters disproves this.
Overall, my current take is: King and Kay offer a useful warning about the perils of probabilistic reasoning in practice, based on their decades of experience. But the discussion strikes me as hyperbolic, confusing and confused. Am I missing something?
writing rationality decision theory applied epistemology bayesianism philip tetlock mervyn king john kay
Keynes on Economics and the Limits of Decision Theory
We should not conclude from this that everything depends on waves of irrational psychology. On the contrary, the state of long-term expectation is often steady, and, even when it is not, the other factors exert their compensating effects. We are merely reminding ourselves that human decisions affecting the future, whether personal or political or economic, cannot depend on strict mathematical expectation, since the basis for making such calculations does not exist; and that it is our innate urge to activity which makes the wheels go round, our rational selves choosing between the alternatives as best we are able, calculating where we can, but often falling back for our motive on whim or sentiment or chance.
[…]
Even apart from the instability due to speculation, there is the instability due to the characteristic of human nature that a large proportion of our positive activities depend on spontaneous optimism rather than mathematical expectations, whether moral or hedonistic or economic. Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as the result of animal spirits—a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.
The General Theory of Employment, Interest and Money: Chapter 12
quote john maynard keynes rationality decision theory radical uncertainty applied epistemology
Trial and error is great until it kills you
Preserving the institutions that correct errors is more important than getting it right first time.
—David Deutsch
Out of life’s school of war: what does not kill me makes me stronger.
—Friedrich Nietzsche
David Deutsch, channelling Popper, is right to stress the importance of error correction. But I really hope it is not the only way we can learn. Because sometimes we face “one shot” problems, where we need to get it right first time.
As individuals, if we make a fatal mistake, it’s bad for us, but it’s not the end of humanity. As a culture and as a species, we “learn” from the mistakes of individuals; our norms and genomes evolve. And by this mechanism, our descendants become less likely—individually—to make fatal mistakes when faced with “one shot” problems. Centrally: individuals develop an ability to detect and avoid situations that involve risk of ruin.
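A back-of-the-envelope illustration of why ruin risks are different in kind (the 1% per-trial figure is invented): any constant chance of a fatal error compounds across trials, so learning by surviving one’s errors almost surely fails eventually.

```python
# Invented numbers: survival probability under repeated exposure to ruin risk.
p_ruin = 0.01  # assumed 1% chance of a fatal mistake per trial

for trials in (10, 100, 1000):
    survival = (1 - p_ruin) ** trials
    print(f"{trials:4d} trials: {survival:.1%} chance of never hitting ruin")
```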
On the most disturbing reading, the Vulnerable World Hypothesis involves a claim that we are approaching one or more “one shot” problems, at the species level. If we err, we wipe ourselves out—we don’t get a chance to try again.
If we are on track to develop technologies that generate “one shot” extinction risks, it seems clear that “trial and error” isn’t a sustainable strategy. We probably need to develop our ability—as a species—to detect and steer away from situations that involve risk of ruin. And it’d be nice to do this by design—not by species-level selection.