Tetlock vs King & Kay on probabilistic reasoning

A cartoon dialog:

Philip Tetlock: “Vague verbiage is a big problem.”

Mervyn King & John Kay: “Over-confidence and false precision are a big problem.”

Philip Tetlock: “Calibration training and explicitly quantifying credences make decisions better on average.”

Mervyn King & John Kay: “Explicitly quantifying credences makes decisions worse on average.”

Is there more to it than this?

King and Kay sometimes speak as though there is a deep theoretical disagreement. But I don’t see it.

I have a hard time conceiving of beliefs that are not accompanied by implicit credences. So when someone says “you can’t put a subjective probability on that belief”, my first thought is: “do I have a choice?”. I can choose whether to explicitly state a point estimate or confidence interval, but if I decide against, my mind will still attach a confidence level to the belief.

In the book “Radical Uncertainty”, King and Kay discuss Barack Obama’s decision to authorise the operation that killed Bin Laden:

We do not know whether Obama walked into the fateful meeting with a prior probability in his mind: we hope not. He sat and listened to conflicting accounts and evidence until he felt he had enough information — knowing that he could expect only limited and imperfect information — to make a decision. That is how good decisions are made in a world of radical uncertainty, as decision-makers wrestle with the question ‘What is going on here?’

My first reaction is: of course Obama had prior probabilities in his mind. If he didn’t, that’d be a state of total ignorance, which isn’t something we should hope for.

King and Kay’s later emphasis on “What is going on here?” makes me think that what they really mean is they don’t want Obama to come into the meeting blinded by overconfidence (whether that overconfidence stems from a quantitative model, or—I would add—from a non-quantitative reference narrative).

When it comes down to it, I think King and Kay should basically just be read as stressing the danger of what they call “The Viniar Problem”:

The mistake of believing you have more knowledge than you do about the real world from the application of conclusions from artificial models

They say this problem “runs deep”. They think that The Viniar Problem is often caused or made worse by attempts to think in terms of maximising expected value:

To pretend to optimise in a world of radical uncertainty, it is necessary to make simplifying assumptions about the real world. If these assumptions are wrong—as in a world of radical uncertainty they are almost certain to be—optimisation yields the wrong results, just as someone searching for their keys under the streetlamp because that is where the light is best makes the error of substituting a well-defined but irrelevant problem for the less well-defined problem he actually faces.

Bayesian decision theory tells us what it means to make an ideal decision. But it does not tell us what decision procedures humans should use in a given situation. It certainly does not tell us we should go around thinking about maximising expected value most of the time. And it’s quite compatible with the view that, in practice, we always need to employ rules of thumb and irreducibly non-quantitative judgement calls at some level.

It’s true that expected value reasoning is difficult and easy to mess up. But King and Kay sometimes seem to say, contra Annie Duke, that people should never attempt to “think in bets” when it comes to large worlds (viz. complicated real-world situations).

This claim is too strong. Calibration training seems to work, and superforecasters are impressive, even if—per Tim Harford—it’d be better to call them “less terrible forecasters”.

So sure—we are not cognitive angels, but serious concern about The Viniar Problem does not entail “never put a probability on a highly uncertain event”.
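For what it’s worth, the yardstick behind “superforecasters are impressive” is usually the Brier score: the mean squared difference between the probabilities a forecaster stated and what actually happened, where lower is better. A minimal sketch with made-up numbers (my illustration, not anything from Tetlock’s data):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilistic forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: a decisive, well-calibrated forecaster vs. someone who always says 50%.
outcomes = [1, 0, 1, 1, 0]                  # what actually happened
forecaster = [0.8, 0.3, 0.7, 0.9, 0.2]      # stated probabilities for each event
coin_flipper = [0.5] * 5

print(brier_score(forecaster, outcomes))    # 0.054: rewarded for being right and decisive
print(brier_score(coin_flipper, outcomes))  # 0.25: the score for permanent "maximal uncertainty"
```

The instructive line is the second one: refusing ever to commit beyond 50% is itself a scorable forecasting policy, and not an especially good one.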

Later, King and Kay write:

We cannot act in anticipation of every unlikely possibility. So we select the unlikely events to monitor. Not by some metarational calculation which considers all these remote possibilities and calculates their relative importance, but by using our judgement and experience.

And my thought is: yes, of course.

A good Bayesian never thinks they have the final answer: they continually ask “what is going on here?”, “why is this wrong?”, “what does my belief predict?”, “how should I update on this?”. They usually entertain several plausible perspectives, sometimes perspectives that are in direct tension with each other. And yes, they make meta-rational weighting decisions based on judgement, experience, rules of thumb, animal spirits—not conscious calculation.

All this seems compatible with the practice of putting numbers on highly uncertain events.

In a podcast interview, Joseph Walker asked John Kay to comment on Graham Allison’s famous prediction in “Destined for War?”. Looking at the historical record, Allison found that in 12 of 16 cases over the past 500 years, when a rising power challenged an established power, the result was war. On that basis he says that war between the US and China is “more likely than not”. Kay’s comment:

I think that’s a reasonable way of framing it. What people certainly should not do is say that the probability of war between these two countries is 0.75, i.e. 12 / 16. We distinguish between likelihood, which is essentially an ordinal variable, and probability, which is cardinal.

Wait… so I’m allowed to say “more likely than not”, but not “greater than 50%” or “greater than 1 / 2”?
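To make my confusion concrete: here is one hedged sketch of how a Bayesian might treat the 12 / 16 base rate without simply declaring the probability to be 0.75. Treat the historical cases as data, put a flat prior on the underlying rate of war, and look at the posterior. This is my illustration, not anything Allison or Kay propose, and it brackets every question about whether the reference class applies:

```python
import random

random.seed(0)
prior_a, prior_b = 1, 1   # flat Beta(1, 1) prior on the underlying rate of war
wars, cases = 12, 16      # Allison's base rate: 12 wars in 16 power transitions

# The posterior is Beta(1 + 12, 1 + 4); sample it to summarise the uncertainty around "0.75".
samples = [random.betavariate(prior_a + wars, prior_b + cases - wars)
           for _ in range(100_000)]

posterior_mean = sum(samples) / len(samples)
p_more_likely_than_not = sum(s > 0.5 for s in samples) / len(samples)

print(f"posterior mean: {posterior_mean:.2f}")          # ~0.72, not exactly 12/16
print(f"P(rate > 0.5):  {p_more_likely_than_not:.2f}")  # comfortably above one half
```

On this reading, refusing to equate the probability with the raw 12 / 16 is fine, but the output is still a number, and “more likely than not” just is the claim that the number is greater than 1 / 2.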

And… what is this distinction between “ordinal” likelihood and “cardinal” probability? I searched their book for the words “ordinal” and “cardinal” and found… zero mentions. And the passages that include “likelihood” were not illuminating. The closest thing I could find (but could not make sense of):

Discussion of uncertainty involves several different ideas. Frequency — I believe a fair coin falls heads 50% of the time, because theory and repeated observation support this claim. Confidence — I am pretty sure the Yucatán event caused the extinction of the dinosaurs, because I have reviewed the evidence and the views of respected sources. And likelihood — it is not likely that James Joyce and Lenin met, because one was an Irish novelist and the other a Russian revolutionary. My knowledge of the world suggests that, especially before the global elite jetted together into Davos, the paths of two individuals of very disparate nationalities, backgrounds and aspirations would not cross.

In the context of frequencies drawn from a stationary distribution, probability has a clear and objective meaning. When expressing confidence in their judgement, people often talk about probabilities but it is not clear how the numbers they provide relate to the frequentist probabilities identified by Fermat and Pascal. When they ask whether Joyce met Lenin, the use of numerical probability is nonsensical.

The Bayesian would say that expressing 50% confidence in a belief is equivalent to saying “in this kind of domain, when I analyse the reasoning and evidence I’ve seen, beliefs like this one will be true about half of the time”.
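As a sketch of what that track-record reading might look like in practice (entirely made-up records, not anyone’s real data): bucket past beliefs by the confidence you assigned them, then compare each bucket’s stated confidence with the fraction that turned out true.

```python
from collections import defaultdict

# Hypothetical track record: (stated confidence, whether the belief turned out true).
track_record = [
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.8, True), (0.8, True), (0.8, False), (0.8, True),
    (0.95, True), (0.95, True), (0.95, True), (0.95, False),
]

# Bucket beliefs by stated confidence and compare it with the observed hit rate.
buckets = defaultdict(list)
for confidence, was_true in track_record:
    buckets[confidence].append(was_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> true {hit_rate:.0%} of the time (n={len(outcomes)})")
```

Calibration training, in Tetlock’s sense, is roughly the practice of getting this kind of feedback until the stated and observed columns stop diverging.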

A skeptical response might insist on radical particularism (i.e. every belief is formed in a different domain, so it’s impossible to build up a relevant track record). I think the performance of superforecasters disproves this.

Overall, my current take is: King and Kay offer a useful warning about the perils of probabilistic reasoning in practice, based on their decades of experience. But the discussion strikes me as hyperbolic, confusing and confused. Am I missing something?

Tags: writing, rationality, decision theory, applied epistemology, bayesianism, philip tetlock, mervyn king, john kay