Maximising whims
A wise man on Twitter once wrote:
I think a place [Effective Altruism] sometimes goes awry is in its emphasis on search for optimality. I strongly doubt that any strategy is sufficiently likely to be “best” that we should label it as such.
I agree.
I think of 80,000 Hours as trying to help people see more of the best opportunities available to them, and then draw a rough circle around some 2-20% of the most promising.
Most of the value comes from drawing that circle. It is hard to draw a great circle, but if you haven't thought about it much, you can probably draw a better one than you otherwise would (e.g. by thinking about scale, leverage, and neglectedness).
How to choose between the opportunities within your circle? We can offer some rules of thumb, but none that turn the decision into a straightforward expected value calculation.
You can try some Fermi estimates, but at a certain point these are just Tarot cards.
Fundamentally, you must summon the resolve to act despite the uncertainty. Keynes:
It is our innate urge to activity which makes the wheels go round, our rational selves choosing between the alternatives as best we are able, calculating where we can, but often falling back for our motive on whim or sentiment or chance.
I think of it as a three-step process. Generate ideas, filter as best you can, then… pick on a whim.
80,000 Hours is saying: we can help you generate ideas and filter.
Some wish they could avoid the whim part. But I don’t think that’s an option.
Another wise man:
Many extremely valuable things will never seem credibly optimal. […] [Effective altruism] seems to want to know what people are doing, and why they’re doing it. But those people mostly didn’t understand what or why until after the fact. Darwin thought he was on the Beagle to do geology, Turing working on an esoteric problem in mathematical logic…
“Smart person follows their curiosity and/or tries to solve some problem they’ve bumped into, happens upon a valuable breakthrough” might be a common pattern. If it is, 80,000 Hours can submit this kind of activity as an option to consider in your circle. We can say it might be one of your best options, but I don’t think we can say it is your best option.
It seems like EA sometimes comes across as offering an unrealistic degree of precision, probably due to its use of maximising language. "Find a much better option"—the realistic aim—might be a clearer slogan than "find the best option"—the regulative ideal.