Joshua Greene’s naturalism
Joshua Greene sees morality as part of the natural world 1. It emerges from evolution, because certain kinds of cooperation are adaptive.
In his breakdown:
- Morality helps individuals cooperate 2. Codes of morality operate within groups, but vary between groups.
- “Metamorality” helps groups cooperate. For groups to cooperate, they need to find a common currency, even if they have quite different codes of morality.
On this picture, both morality and metamorality stem from a practical problem—how can creatures like us flourish, get along, and have many descendants?
What Greene calls “modern ethics” is mostly concerned with the question: how can we improve our metamorality?
Greene, following Jeremy Bentham, thinks that the best candidate for a common currency to support metamorality is: quality of experience. If you repeatedly ask people why they care about something (“5 whys” style), it often comes down to the quality of their experience—crudely: whether it amounts to pleasure or suffering, an experience they would gladly choose or strive to avoid.
Naturalism and impartiality
Where does the ideal of impartiality fit into this picture?
Moral philosophers and people associated with the effective altruism movement often characterise impartiality as:
Giving equal weight to everyone’s interests.
This is often cashed out in negative terms, as:
Trying to avoid giving undue weight to particular interests based on (putatively) non-morally-relevant factors such as race, gender, proximity in space and time, nationality, species membership, or substrate.
Usually, such lists are motivated by a positive claim about what kinds of things are morally-relevant, such as capacity for wellbeing, or sentience.
Let’s bracket the question of what our positive claim, and our list of morally-irrelevant factors, should look like. Let’s instead zoom in on the “undue weight” part.
At the level of within-group morality, one can tell a story about how norms of (relatively) equal treatment and fairness could emerge as stable equilibria, grounded in (relatively) equal distributions of power in forager societies. 3
At the level of between-group metamorality, we can tell a similar story: groups will be reluctant to cooperate if they think their interests are unduly neglected (though they may in fact accept a good deal of unfair treatment, if their other options—e.g. conflict—seem worse).
So how do we cash out “undue weight” in naturalistic terms? Well, the obvious candidate is: non-adaptive.
In moral philosophy and effective altruism, the “impartiality story” is not told as though it’s based on the evolutionary logic of adaptive bargains and long-run equilibria. The story is not “we should be impartial in order to solve our cooperation problems and maximise reproductive fitness”. Rather, the idea is “we should impartially promote the good because… well… that’s the right thing to do”.
This seems like a case where your metaethics, or the story you tell about what we’re up to when we do moral philosophy, rather matters.
Bernard Williams, discussing the limits to impartiality, approvingly quotes Max Stirner:
The tiger who attacks me is in the right, and so am I when I strike him down. I defend against him not my right, but myself.
Williams imagines that humanity is threatened by an alien civilisation, and suggests that in such a situation, it would not be appropriate to ask:
- What outcome would maximise value according to our best impartial axiology?
For Williams as for Greene, our impartial axiology is ultimately a tool that’s supposed to serve us, not an external imperative that we are supposed to serve. Thoughts to the contrary are “a relic of a world not yet fully disenchanted”, i.e. the artefact of a non-secular worldview 4.
Rather, he thinks we should ask:
- What outcome would be best for us (or according to us)?
The choice between these two questions isn’t just academic: we face situations where we must decide which of them to ask now, soon, and over the long run:
- (now) Western liberalism vs Political Islam; West vs China; etc
- (soon?) Digital Minds
- (long-run) Alien civilisations
Naturalism vs non-naturalism; pragmatism and normativity
For the naturalist, moral and metamoral questions are quite empirical. In the current environment, which ideals actually work?
(The normativity within “actually work” boils down to “adaptive fitness”, whether we recognise this or not.)
As Bostrom & Schulman reminded us recently 5, we can reasonably reject the insistence of many philosophers that there’s a fundamental distinction between descriptive and normative claims. After reflecting on Pragmatism last autumn, I became more confident that this distinction is not, in fact, fundamental. Rather, things are blurry: there’s no such thing as a purely descriptive claim, because when we begin our enquiry we’re always already bound up in the normative project of being humans trying to get by in the world, with various aims and agendas baked in 6.
On the naturalistic perspective, we started out trying to solve a practical problem—how can we get along with groups with which, on the surface, we have little in common—and ended up mistakenly thinking of ourselves as doing something else (seeking the (meta)moral truth, then following it). If that’s our self-image, we won’t be so concerned with empirics: we’ll just try whatever our culture and our moral philosophers come up with, and if it “works” in the naturalistic sense, all good. Otherwise, we’ll get wiped out, perhaps gradually, perhaps quickly.
The non-naturalist impartialist thinks there is an external, non-human standard, so they are likely to be more interested in revisionary or revolutionary maximisation—maximising whatever they think is valuable, independently of humans—and in chafing against the constraints imposed by being the kinds of creatures we find ourselves to be.
On the naturalistic perspective, we’ll be more concerned with thinking about what flavours of impartiality are going to work well for us over the long run. We’d be more inclined to, like the pragmatist, keep coming back to the question: what problem are we actually trying to solve here?
See Moral Tribes or his interview with Sean Carroll.↩︎
“A set of psychological adaptations that allow otherwise selfish individuals to reap the benefits of cooperation.” Moral Tribes p.23↩︎
Derek Parfit, and other non-naturalists, would disagree, but despite some hunting, I’ve not found arguments for non-naturalism that strike me as persuasive.↩︎
Nietzsche was also very clear on this.↩︎