Conservative progressivism
Peter Singer famously argues that it’s difficult to come up with criteria to explain why killing babies is wrong without those criteria also entailing that killing many kinds of animals is wrong.
A friend expressed scepticism about this argument by saying:
1. I just think that killing newborn babies is wrong. It’s obvious.
2. Saying this is no more dogmatic than Singer’s choice of criteria for justifying moral concern.
I somewhat bungled my reply, so I’m writing a better version here.
My friend claims that proposition (1) is self-justifying, i.e. it needs no further justification. He did not explain how he thinks he knows this.
One might think that nearly everyone has a strong intuition that killing babies is wrong, so the need to supply further justification is weak, or null. That’s true now in the West, but historically false.1
When we say a belief is self-justifying, we run into trouble if others disagree. Maybe we can persuade them by illustration or example or something, but often we’ll reach an impasse where the only option is a biff on the nose.
Consider a more controversial claim:
- Killing pigs is wrong.
There, it seems to me, we do want to ask “why?”
And then we get into all the regular questions about criteria for moral consideration.
And then we might think: ok, do those criteria apply to babies?
And then we get to the thing that Singer noticed: that it’s hard to explain why we should not kill babies without citing criteria that also apply to many animals.
A natural thought, of course, is just: “well, babies are humans!” But what, exactly, makes humans worthy of special treatment? And: how special, exactly?
Impartial utilitarians deny that humans should get special treatment just because they’re humans. Instead they’ll appeal to things like consciousness and sentience and self-awareness and richness of experience and social relations and future potential and preferences and so on (and they’ll usually claim these are most developed in humans compared to other animals). They usually conclude that humans often deserve priority over other animals, but they deserve it because of these traits, not just because they are human. To privilege humans just because they are human is “speciesism”, a vice akin to racism.
I take the impartial utilitarian view seriously, but moderate it with two commitments:2
- (a) Conservatism: I give greater value to things that already exist (over potential replacements), simply because they already exist.
- (b) Loyalty: I owe allegiance to the groups of which I’m a part.
I can say some things in support of these claims, but with Singer I would probably reach an impasse. He would probably agree that (a) and (b) have pragmatic value, but deny that the world is made better by having more of (a) and (b), assuming all else equal. Our disagreements might come down to metaethics, specifically to moral epistemology. The impasse is deeper down.
So that’s the sense in which my friend is right: ultimately these things come down to principles we judge as more plausible than others, and our ability to justify those plausibility judgements to others may be limited. The basis of our moral judgements is never entirely selfless, but partly an expression of who and what we are. And we are not all the same. So sometimes we biff each other on the nose.
1. I’d guess that >10 billion people have lived in societies where infanticide was acceptable.↩︎
2. I don’t think these commitments are strong enough to avoid the view that a technologically mature society should convert most matter into utilitronium. But they may be strong enough to say that humans or human descendants should be granted at least a small fraction of the cosmic endowment to flourish by their own lights, however inefficiently…↩︎