Conservative progressivism

Peter Singer famously argues that it's difficult to come up with criteria to explain why killing babies is wrong without those criteria also entailing that killing many kinds of animals is wrong.

A friend expressed scepticism about this argument by saying:

(1) I just think that killing newborn babies is wrong. It's obvious. (2) Saying this is no more dogmatic than Singer's choice of criteria for justifying moral concern.

I somewhat bungled my reply, so I'm writing a better version here.

My friend claims that proposition (1) is self-justifying, i.e. it needs no further justification. He did not explain how he thinks he knows this.

One might think that nearly everyone has a strong intuition that killing babies is wrong, so the need to supply further justification is weak, or null. That's true now in the West, but historically false.[^1]

When we say a belief is self-justifying, we run into trouble if others disagree. Maybe we can persuade them by illustration or example or something, but often we'll reach an impasse where the only option is a biff on the nose.

Consider a more controversial claim:

(3) Killing pigs is wrong.

There, it seems to me, we do want to ask "why?"

And then we get into all the regular questions about criteria for moral consideration.

And then we might think: ok, do those criteria apply to babies?

And then we get to the thing that Singer noticed: that it's hard to explain why we should not kill babies without citing criteria that also apply to many animals.

A natural thought, of course, is just: "well, babies are humans!" But what, exactly, makes humans worthy of special treatment? And: how special, exactly?

Impartial utilitarians deny that humans should get special treatment just because they're humans. Instead they'll appeal to things like consciousness and sentience and self-awareness and richness of experience and social relations and future potential and preferences and so on (and they'll usually claim these are most developed in humans compared to other animals). They usually conclude that humans often deserve priority over other animals, but they deserve it because of these traits, not just because they are human. To privilege humans just because they are human is "speciesism", a vice akin to racism.

I take the impartial utilitarian view seriously, but moderate it with two commitments [^2]:

a. Conservatism: I give greater value to things that already exist (over potential replacements), simply because they already exist.

b. Loyalty: you owe allegiance to the groups of which you're a part.

I can say some things in support of these claims, but with Singer I would probably reach an impasse. He would probably agree that (a) and (b) have pragmatic value, but deny that the world is made better by having more of (a) and (b), assuming all else equal. Our disagreements might come down to metaethics, specifically to moral epistemology. The impasse is deeper down.

So that's the sense in which my friend is right: ultimately these things come down to principles we judge as more plausible than others, and our ability to justify our plausibility judgements to others may be limited. The basis of our moral judgements is never entirely selfless, but partly an expression of who and what we are. And we are not all the same. So sometimes we biff each other on the nose.

[^1]: I'd guess that >10 billion people have lived in societies where infanticide was acceptable.

[^2]: I don't think these commitments are strong enough to avoid the view that a technologically mature society should convert most matter into utilitronium. But they may be strong enough to say that humans or human descendants should be granted at least a small fraction of the cosmic endowment to flourish by their own lights, however inefficiently...

On a similar path, but.

Over the past few years, Joe Carlsmith has published several blog posts that nicely articulate views I had also arrived at, for similar reasons, before he published them [^1]. My own thinking has certainly been influenced by him, but on non-naturalist realism, deep atheism and AI existential risk, and a few other topics in AI and metaethics, I was definitely there-ish before he published. But: I had not written up these views in anything approaching the quality of his blog posts. I'd have found it hard to do so, even with great effort.

What should I make of the fact that one of the best contemporary philosophers is on a similar path on some topics? On the one hand, this is gratifying and encouraging: it's some evidence that (a) my views are correct and (b) I "have what it takes" to develop my own, somewhat novel views on important topics at the vanguard.

On the other hand, it makes me think "Joe has it covered, and will do a better job than me". This pushes on my long-running concern that spending time on moral philosophy and futurism—which I am constantly drawn to—is mostly self-indulgence on my part; that going "all in" on this stuff would mean falling short of my "be useful" aspiration. If I went "all in", I think it's 90%+ likely that I'd top out as "good", but not "world class". And: on the face of it, the returns to being merely "good" are pretty low.

Much better, plausibly, to keep the philosophy as a passionate side-project. It feeds into my work as an "ethical influencer", which is one way of thinking about the main impact of my career so far. Plausibly this role—perhaps mixed with some more "actually do the thing" periods—is my sweet spot in the global portfolio.

[^1]: To be clear: Joe also has a lot of fantastic posts which have contained many many "fresh to me" ideas and insights. I read everything he writes.

Holden Karnofsky: the fast takeoff scenario is a key motivation for preemptive AI safety measures

Holden Karnofsky: One of the reasons I’m so interested in AI safety standards is because kind of no matter what risk you’re worried about, I think you hopefully should be able to get on board with the idea that you should measure the risk, and not unwittingly deploy AI systems that are carrying a tonne of the risk, before you’ve at least made a deliberate informed decision to do so. And I think if we do that, we can anticipate a lot of different risks and stop them from coming at us too fast. “Too fast” is the central theme for me.

You know, a common story in some corners of this discourse is this idea of an AI that’s this kind of simple computer program, and it rewrites its own source code, and that’s where all the action is. I don’t think that’s exactly the picture I have in mind, although there’s some similarities.

The kind of thing I’m picturing is maybe more like a months or years time period from getting sort of near-human-level AI systems — and what that means is definitely debatable and gets messy — but near-human-level AI systems to just very powerful ones that are advancing science and technology really fast. And then in science and technology — at least on certain fronts that are the less bottlenecked fronts — you get a huge jump. So I think my view is at least somewhat more moderate than Eliezer’s, and at least has somewhat different dynamics.

But I think both points of view are talking about this rapid change. I think without the rapid change, a) things are a lot less scary generally, and b) I think it is harder to justify a lot of the stuff that AI-concerned people do to try and get out ahead of the problem and think about things in advance. Because I think a lot of people sort of complain with this discourse that it’s really hard to know the future, and all this stuff we’re talking about about what future AI systems are going to do and what we have to do about it today, it’s very hard to get that right. It’s very hard to anticipate what things will be like in an unfamiliar future.

When people complain about that stuff, I’m just very sympathetic. I think that’s right. And if I thought that we had the option to adapt to everything as it happens, I think I would in many ways be tempted to just work on other problems, and in fact adapt to things as they happen and we see what’s happening and see what’s most needed. And so I think a lot of the case for planning things out in advance — trying to tell stories of what might happen, trying to figure out what kind of regime we’re going to want and put the pieces in place today, trying to figure out what kind of research challenges are going to be hard and do them today — I think a lot of the case for that stuff being so important does rely on this theory that things could move a lot faster than anyone is expecting.

I am in fact very sympathetic to people who would rather just adapt to things as they go. I think that’s usually the right way to do things. And I think many attempts to anticipate future problems are things I’m just not that interested in, because of this issue. But I think AI is a place where we have to take the explosive progress thing seriously enough that we should be doing our best to prepare for it.

Rob Wiblin: Yeah. I guess if you have this explosive growth, then the very strange things that we might be trying to prepare for might be happening in 2027, or incredibly soon.

Holden Karnofsky: Something like that, yeah. It’s imaginable, right? And it’s all extremely uncertain because we don’t know. In my head, a lot of it is like there’s a set of properties that an AI system could have: roughly being able to do roughly everything humans are able to do to advance science and technology, or at least able to advance AI research. We don’t know when we’ll have that. One possibility is we’re like 30 years away from that. But once we get near that, things will move incredibly fast. And that’s a world we could be in. We could also be in a world where we’re only a few years from that, and then everything’s going to get much crazier than anyone thinks, much faster than anyone thinks.

https://80000hours.org/podcast/episodes/holden-karnofsky-how-ai-could-take-over-the-world/

See also: HK on PASTA.

Are LLMs reasoning or reciting?

The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior.

https://arxiv.org/abs/2307.02477
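
To make the "counterfactual variant" idea concrete, here's a minimal sketch, under my own assumptions, of the kind of paired default/counterfactual evaluation the abstract describes, using base-10 vs base-9 addition as the task pair. The prompts and the `query_model` callable are hypothetical placeholders, not the paper's actual setup.

```python
# Minimal sketch of a default-vs-counterfactual task pair (base-10 vs base-9
# addition). `query_model` is a hypothetical stand-in for whatever LM is being
# evaluated; the prompts are illustrative, not the paper's exact ones.

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base."""
    digits = []
    while True:
        n, r = divmod(n, base)
        digits.append(str(r))
        if n == 0:
            break
    return "".join(reversed(digits))

def addition_task(a: int, b: int, base: int) -> tuple[str, str]:
    """Return (prompt, expected_answer) for a + b, expressed in `base`."""
    prompt = (
        f"You are doing addition in base-{base}. "
        f"What is {to_base(a, base)} + {to_base(b, base)}? Answer with digits only."
    )
    return prompt, to_base(a + b, base)

def accuracy(base: int, pairs, query_model) -> float:
    """Exact-match accuracy on addition problems posed in `base`."""
    hits = 0
    for a, b in pairs:
        prompt, expected = addition_task(a, b, base)
        if query_model(prompt).strip() == expected:
            hits += 1
    return hits / len(pairs)

# The paper's question, in this toy form: does accuracy stay high in the
# familiar base-10 condition but drop in base-9, even though the underlying
# procedure (add the digits, carry) is identical in both?
```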

Explosive growth from AI automation: A review of the arguments

We examine whether substantial AI automation could accelerate global economic growth by about an order of magnitude, akin to the economic growth effects of the Industrial Revolution. We identify three primary drivers for such growth: 1) the scalability of an AI labor force restoring a regime of increasing returns to scale, 2) the rapid expansion of an AI labor force, and 3) a massive increase in output from rapid automation occurring over a brief period of time. Against this backdrop, we evaluate nine counterarguments, including regulatory hurdles, production bottlenecks, alignment issues, and the pace of automation. We tentatively assess these arguments, finding most are unlikely deciders. We conclude that explosive growth seems plausible with AI capable of broadly substituting for human labor, but high confidence in this claim seems currently unwarranted. Key questions remain about the intensity of regulatory responses to AI, physical bottlenecks in production, the economic value of superhuman abilities, and the rate at which AI automation could occur.

https://arxiv.org/pdf/2309.11690.pdf

See also: Sam Hammond's critical discussion.

And note that even those most bullish on explosive growth typically put its probability at only around 1/3 before 2100.
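
One way to see driver (1) in the abstract, a scalable AI labour force restoring increasing returns: if the inputs you can accumulate by reinvesting output have combined returns above one, growth rates rise over time instead of settling down. Here's a deliberately crude toy model of that mechanism (my own illustration with made-up numbers, not the paper's model).

```python
# Toy model: the whole economy is one reinvestable stock X (capital, plus AI
# labour and AI research effort where those are accumulable), with output
# Y = X**phi and reinvestment X += s * Y. All parameter values are made up.

SAVINGS = 0.3

def growth_path(phi: float, steps: int = 40) -> list[float]:
    """Year-on-year output growth rates for a given returns exponent phi."""
    stock, prev_output, rates = 1.0, None, []
    for _ in range(steps):
        output = stock ** phi
        if prev_output is not None:
            rates.append(output / prev_output - 1)
        prev_output = output
        stock += SAVINGS * output
    return rates

# phi < 1: only capital accumulates (human labour fixed), so diminishing
#          returns bite and growth rates fall over time.
# phi = 1: AI labour accumulates alongside capital, sustaining steady growth.
# phi > 1: accumulable AI labour plus AI-driven R&D give increasing returns,
#          so growth rates rise over time ("explosive" growth).
for phi in (0.4, 1.0, 1.1):
    rates = growth_path(phi)
    print(f"phi={phi}: early growth {rates[0]:.1%}, late growth {rates[-1]:.1%}")
```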

Tyler Cowen on Malthus

Whether or not you obsess over the particulars of overpopulation, Malthus’s theory is more broadly one of human pressures on the environment, and the lack of suitable equilibrating mechanisms at anything other than extremely high human costs.

The simplest version of Malthus is an account of how the world runs when all essential factors do not grow at the same rate, and in particular those growth rates diverge in a roughly consistent and sustained manner. At some point one of those factors becomes too scarce and the system crashes, leading to a plunge in living standards and possibly a population crash as well. In this sense Malthus is presenting a general rather than a special case, as it would seem that roughly equal rates of growth for the essential factors is the unusual setting, not the default setting.

[...]

For Malthus it could be said that the idea of equilibrium triumphs over that of progress.

[...]

It is also striking that Malthus was a major influence upon both Charles Darwin and Alfred Russel Wallace and their path-breaking theories of evolution. The Malthusian picture of populations varying, and groups of people popping in and out of existence, helped them both formulate their theories of natural selection. Malthus thus helped to drive the very existence of modern evolutionary biology.

via https://econgoat.ai/
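
A quick toy simulation of the divergent-growth-rates point (my gloss, not Cowen's; all numbers are arbitrary): when population compounds geometrically while food grows arithmetically, per-capita resources fall toward subsistence and stay pinned there.

```python
# Toy Malthusian dynamics: population compounds geometrically while the food
# supply grows by a fixed amount each period. All numbers are arbitrary.

POP_GROWTH = 0.03     # population growth per year when food is ample
FOOD_GROWTH = 2.0     # food supply grows by a fixed amount per year
SUBSISTENCE = 1.0     # food per person needed to avoid a crash

population, food = 100.0, 150.0
for year in range(1, 61):
    if food / population >= SUBSISTENCE:
        population *= 1 + POP_GROWTH        # prosperity: population compounds
    else:
        population = food / SUBSISTENCE     # scarcity: population knocked back
    food += FOOD_GROWTH
    if year % 10 == 0:
        print(f"year {year}: population {population:6.1f}, "
              f"food per person {food / population:.2f}")
# Per-capita food drifts down toward subsistence and stays pinned there:
# population ends up tracking the slower-growing factor.
```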

Mill on Bentham and Coleridge

Tyler Cowen recommends Mill's essays on Bentham and Coleridge as among the best essays ever written, a great introduction to Mill's thought, and "the most sophisticated perspective on a form of neo-Benthamism today, namely the effective altruism as a movement".

I found the key ideas familiar (partly because Tyler is constantly recommending them), but I was glad to read them from the man himself.

According to Mill, Bentham's chief contribution was to exemplify and spread the idea that we should demand detailed, systematic reasoning in political philosophy. The principle of utility was not original to Bentham, but his attempt to systematically apply it to evaluate existing institutions, and to generate proposals for reform, was singular. Bentham's strength was not in his conclusions, but his approach:

The questioning spirit, the disposition to demand the why of everything, that had gained so much ground and was producing such important consequences in these times was due to Bentham more than to any other source. [...] In this age and this country, Bentham has been the great questioner of things established.

[...]

He was not a great philosopher, but was a great reformer in philosophy. He brought into philosophy something it greatly needed, for lack of which it was at a stand-still. It was not his doctrines that did this, but his way of arriving at them.

Getting back to politics: Mill takes a dim view of Bentham's actual assessments and proposals. He sees Bentham as unusually narrow in thought and sensibility, and remarkably uninterested in the philosophy and political thought of others (he "failed in deriving light from other minds"). One of Bentham's biggest mistakes, according to Mill:

Man is never recognised by him as a being capable of pursuing spiritual perfection as an end, of desiring for its own sake the conformity of his own character to his standard of excellence, without hope of good or fear of evil from any source but his own inward consciousness.

[...]

He only faintly recognises, as a fact in human nature, the pursuit of any other ideal goal for its own sake:

• the sense of honour and personal dignity—that feeling of personal exaltation and degradation that acts independently of other people’s opinion or even in defiance of it;

• the love of beauty, the passion of the artist;

• the love of order, of congruity, of consistency in all things, and conformity to their end;

• the love of power, not in the limited form of power over other human beings, but abstract power, the power of making our volitions effective;

• the love of action, the thirst for movement and activity, a force with almost as much influence in human life as its opposite, the love of ease.

[...]

Man, that most complex being, is a very simple one in Bentham’s eyes.

My Hansonian side raises an eyebrow: was Bentham more right than Mill on this point?[^1]

[^1]: The big insight of evolutionary theory is that very simple algorithms can generate very complex systems. It's impressive that Bentham saw this possibility, decades before Darwin.

The "it's mostly signalling" model is compatible with the claim that people do, in fact, have motives like those Mill lists above; it does not make them unreal, or factors we can ignore in our political philosophy. And at the normative level, there's nothing to stop us cultivating and doubling down on our dispositions to pursue excellence, even while recognising that those dispositions are rooted in status competition. We can choose to see the motives we have as noble, even if we think the forces that shaped them are not. But—Bentham would ask—how, exactly, can we justify this choice? Why not some other motives?

Conservatives have an easier time here than progressives, because they are willing to reject the question. Elsewhere, Mill tries to justify claims about "higher pleasures" with mostly teleological arguments. These would not satisfy Bentham—teleological arguments appeal to contingent facts about the kind of beings we happen to be, which would strike Bentham as too unprincipled, too contingent, too lacking in selflessness. The pursuit of "higher pleasures" which shapes Mill's progressive ambitions is, ultimately, based on a conservative commitment to a local ideal of high culture and human excellence.


In Mill's reading, Coleridge agrees with Bentham that political philosophers must employ careful reasoning to justify their positions, and laments the tendency of conservatives to overlook this. By contrast, he thinks that progressives tend to overestimate their powers of reason and understanding, and should recognise that the conservative inclination to trust tradition over explicit reasoning has merit. Reformers should recognise that existing traditions have merits that they do not understand, having been exposed to selection pressures that we can think of as a form of historical and collective reason. Reformers should also recognise, of course, that the reforms they propose will have consequences they cannot foresee.

So—one of the most fundamental disagreements between conservatives and progressives is about how to weigh tradition (historical reason) against explicit reason.

So—yay to Bentham's demand for careful, systematic reasoning in philosophy, but boo to those who forget that, often, tradition is smarter than you are.


There's another narrowness to Bentham's method: reason is about what we have in common. The demand of reason is both an opportunity and a threat, and Coleridge, Mill and the German Romantics all want to resist this demand at some margins.


There's a lot more in both essays, but I'm out of time. I'll close with one of Mill's opening remarks:

Theoretical philosophy, which to superficial people appears so remote from the business of life and the outward interests of men, is in reality the thing on earth that most influences them, and in the long run outweighs every other influence except the ones it must itself obey.

I agree—with emphasis on the last seven words.

Joseph Heath on Kantian evolutionary naturalism (rationality, pragmatism and deontic constraints)

One way to approach the puzzle of deontic constraint is to ask whether rational action necessarily has a consequentialist structure, or whether it can incorporate nonconsequential considerations.

[...]

Unfortunately, many theorists (philosophers and social scientists) have been misled into believing that the technical apparatus of rational choice theory, introduced in order to handle the complications of probabilistic reasoning, is also one that prohibits the introduction of nonconsequential considerations into the agent’s practical deliberations. In other words, it is sometimes thought that decision theorists are necessarily committed to consequentialism, or that consequentialism is simply the expression of Bayesian reasoning, when applied to practical affairs. Deontic constraint, or rule-following behavior, according to this view, is either not mathematically tractable, or else violates some elementary canon of logical consistency.

There is absolutely no reason that a rational choice theorist cannot incorporate deontic constraints—or any other type of rule-following behavior—into a formal model of rational action as utility-maximization (although, in so doing, it would perhaps be prudent to shift away from the vocabulary of utility-maximization toward that of value-maximization, given the close connection in many people’s minds between utility theory and consequentialism). The commitment to consequentialism on the part of many rational choice theorists is the result of a straightforward oversight that arose in the transition from decision theory (which deals with rational choice in nonsocial contexts) to game theory (which deals with social interaction). Early decision theorists adopted a consequentialist vocabulary, but did so in a way that made consequentialism trivially true, and thus theoretically innocuous.

Since I am inclined to put rules on the “preference” rather than the “belief” side of the preference-belief distinction, what really needs to be shown is that the preference through which an agent’s commitment to a rule is expressed may also be rational. In order to do so, it is necessary to challenge the prevailing noncognitivism about preferences, or the view that desires are somewhat less susceptible to rational reevaluation than beliefs.

[...]

My goal is to take what I consider to be some of the best thinking done in the past couple of decades in epistemology and philosophy of language, and show how it “fits” with some of the most important work being done in evolutionary theory, in order to reveal the deep internal connection between rationality and rule-following. One of the major forces aiding and abetting the noncognitive conception of preference, for well over three centuries, has been a commitment to representationalism in the philosophy of mind (i.e., the view that “representation” constitutes a central explanatory concept when it comes to understanding the contentfulness of our mental states).

The alternative strategy, which has recently been developed with considerable sophistication by pragmatist theorists like Robert Brandom, is to start with a set of concepts that are tailor-made for the explanation of human action, and then extend these to explain belief and representation. This is based on the plausible intuition that human action in the world is more fundamental than human thought about the world.

[...]

This analysis serves as the basis for my defense of what I call “the transcendental necessity of morality.”

Reading the philosophical literature, it has come to my attention that “Kantian evolutionary naturalism” is not a particularly well-represented position in the debates over the foundations of human morality. This is a deficiency I hope to remedy. The basic Kantian claim, with respect to moral motivation, is that there is an internal connection between following the rules of morality and being a rational agent.

[...]

I would like to defend the rationality of deontic constraints at the level of action, but am not committed to defending “deontology” as a theory of justification.

[...]

There is also an inclination among moral philosophers to draw a sharp distinction between “moral” and what are called “conventional” obligations, such as rules of etiquette, or “social norms” more generally. I reject this distinction, not because I think morality is conventional, but rather because I follow Emile Durkheim in thinking that all social norms (or “conventions” in this way of speaking) have an implicitly moral dimension.

Sam Altman on The Merge

I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.

https://blog.samaltman.com/the-merge

Vitalik Buterin on superintelligence: merge or die

Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems.

[...]

Unless we create a world government powerful enough to detect and stop every small group of people hacking on individual GPUs with laptops, someone is going to create a superintelligent AI eventually - one that can think a thousand times faster than we can - and no combination of humans using tools with their hands is going to be able to hold its own against that. And so we need to take this idea of human-computer cooperation much deeper and further.

A first natural step is brain-computer interfaces. Brain-computer interfaces can give humans much more direct access to more-and-more powerful forms of computation and cognition, reducing the two-way communication loop between man and machine from seconds to milliseconds. This would also greatly reduce the "mental effort" cost to getting a computer to help you gather facts, give suggestions or execute on a plan.

Later stages of such a roadmap admittedly get weird. In addition to brain-computer interfaces, there are various paths to improving our brains directly through innovations in biology. An eventual further step, which merges both paths, may involve uploading our minds to run on computers directly.

[...]

If we want a future that is both superintelligent and "human", one where human beings are not just pets, but actually retain meaningful agency over the world, then it feels like something like this is the most natural option. There are also good arguments why this could be a safer AI alignment path: by involving human feedback at each step of decision-making, we reduce the incentive to offload high-level planning responsibility to the AI itself, and thereby reduce the chance that the AI does something totally unaligned with humanity's values on its own.

This strikes me as the least implausible option.

https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html

Scott Alexander on conservatism and the self-preservation-against-optimality urge

I think that it might be reasonable to have continuation of your own culture as a terminal goal, even if you know your culture is “worse” in some way than what would replace it. There’s a transhumanist joke – “Instead of protecting human values, why not reprogram humans to like hydrogen? After all, there’s a lot of hydrogen.” There’s way more hydrogen than beautiful art, or star-crossed romances, or exciting adventures. A human who likes beautiful art, star-crossed romances, and exciting adventures is in some sense “worse” than a human who likes hydrogen, since it would be much harder for her to achieve her goals and she would probably be much less happy. But knowing this does not make me any happier about the idea of being reprogrammed in favor of hydrogen-related goals. My own value system might not be objectively the best, or even very good, but it’s my value system and I want to keep it and you can’t take it away from me. I am an individualist and I think of this on an individual level, but I could also see having this self-preservation-against-optimality urge for my community and its values.

(I’ve sometimes heard this called Lovecraftian parochialism, based on H.P. Lovecraft’s philosophy that the universe is vast and incomprehensible and anti-human, and you’ve got to draw the line between Self and Other somewhere, so you might as well draw the line at 1920s Providence, Rhode Island, and call everywhere else from Boston all the way to the unspeakable abyss-city of Y’ha-nthlei just different degrees of horribleness.)

https://slatestarcodex.com/2016/07/25/how-the-west-was-won/

Scott Alexander on liberalism and universal culture

On liberalism

Liberalism is a technology for preventing civil war. It was forged in the fires of Hell – the horrors of the endless seventeenth century religious wars. For a hundred years, Europe tore itself apart in some of the most brutal ways imaginable – until finally, from the burning wreckage, we drew forth this amazing piece of alien machinery. A machine that, when tuned just right, let people live together peacefully without doing the “kill people for being Protestant” thing. Popular historical strategies for dealing with differences have included: brutally enforced conformity, brutally efficient genocide, and making sure to keep the alien machine tuned really really carefully.

https://slatestarcodex.com/2017/06/21/against-murderism/

On "Western" culture vs universal culture

I am pretty sure there was, at one point, such a thing as western civilization. I think it included things like dancing around maypoles and copying Latin manuscripts. At some point Thor might have been involved. That civilization is dead. It summoned an alien entity from beyond the void which devoured its summoner and is proceeding to eat the rest of the world.

[...]

“Western medicine” is just medicine that works. It happens to be western because the West had a technological head start, and so discovered most of the medicine that works first. But there’s nothing culturally western about it; there’s nothing Christian or Greco-Roman about using penicillin to deal with a bacterial infection. Indeed, “western medicine” replaced the traditional medicine of Europe – Hippocrates’ four humors – before it started threatening the traditional medicines of China or India. So-called “western medicine” is an inhuman perfect construct from beyond the void, summoned by Westerners, which ate traditional Western medicine first and is now proceeding to eat the rest of the world.

[...]

If western medicine is just medicine that works, soda pop is just refreshment that works.

[...]

Sushi has spread almost as rapidly as Coke. But in what sense has sushi been “westernized”? Yes, Europe has adopted sushi. But so have China, India, and Africa. Sushi is [just] another refreshment that works.

Here’s what I think is going on. Maybe every culture is the gradual accumulation of useful environmental adaptations combined with random memetic drift. But this is usually a slow process with plenty of room for everybody to adjust and local peculiarities to seep in. The Industrial Revolution caused such rapid change that the process became qualitatively different, a frantic search for better adaptations to an environment that was itself changing almost as fast as people could understand it.

The Industrial Revolution also changed the way culture was spatially distributed. When the fastest mode of transportation is the horse, and the postal system is frequently ambushed by Huns, almost all culture is local culture. England develops a culture, France develops a culture, Spain develops a culture. Geographic, language, and political barriers keep these from intermixing too much. Add rapid communication – even at the level of a good postal service – and the equation begins to change. In the 17th century, philosophers were remarking (in Latin, the universal language!) about how Descartes from France had more in common with Leibniz from Germany than either of them did with the average Frenchman or German. Nowadays I certainly have more in common with SSC readers in Finland than I do with my next-door neighbor whom I’ve never met.

Improved trade and communication networks created a rapid flow of ideas from one big commercial center to another. Things that worked – western medicine, Coca-Cola, egalitarian gender norms, sushi – spread along the trade networks and started outcompeting things that didn’t. It happened in the west first, but not in any kind of a black-and-white way. Places were inducted into the universal culture in proportion to their participation in global trade; Shanghai was infected before West Kerry; Dubai is further gone than Alabama. The great financial capitals became a single cultural region in the same way that “England” or “France” had been a cultural region in the olden times, gradually converging on more and more ideas that worked in their new economic situation.

Let me say again that this universal culture, though it started in the West, was western only in the most cosmetic ways. If China or the Caliphate had industrialized first, they would have been the ones who developed it, and it would have been much the same. The new sodas and medicines and gender norms invented in Beijing or Baghdad would have spread throughout the world, and they would have looked very familiar. The best way to industrialize is the best way to industrialize.

Universal culture is the collection of the most competitive ideas and products. Coca-Cola spreads because it tastes better than whatever people were drinking before. Egalitarian gender norms spread because they’re more popular and likeable than their predecessors. If there was something that outcompeted Coca-Cola, then that would be the official soda of universal culture and Coca-Cola would be consigned to the scrapheap of history. 

The only reason universal culture doesn’t outcompete everything else instantly and achieve fixation around the globe is barriers to communication. Some of those barriers are natural – Tibet survived universalization for a long time because nobody could get to it. Sometimes the barrier is time – universal culture can’t assimilate every little hill and valley instantly. Other times there are no natural barriers, and then your choice is to either accept assimilation into universal culture, or put up some form of censorship.

https://slatestarcodex.com/2016/07/25/how-the-west-was-won/

Tyler Cowen on Malthus on vice

So one way to read Malthus is this: if a society is going to have any prosperity at all, the people in that society either will be morally quite bad, or they have to be morally very, very good, good enough to exercise that moral restraint. Alternatively, you can read Malthus as seeing two primary goals for people: food and sex. His accomplishment was to show that, taken collectively, those two goals could not easily be obtainable simultaneously in a satisfactory fashion. In late Freudian terms, you could say that eros/sex amounts to the death drive, but again painted on a collective canvas and driven by economic mechanisms.

Malthus also hinted at birth control as an important social and economic force, especially later in 1817, putting him ahead of many other thinkers of his time. Birth control was widely practiced for centuries through a variety of means, and Malthus unfortunately was not very specific. He did call it “unnatural,” and the mainstream theology of his Anglican church condemned it, as did many other churches. But what did he really think? Was this unnatural practice so much worse than the other alternatives of misery and vice that his model was putting forward? Or did Malthus simply fail to see that birth control could be so effective and widespread as it is today? It doesn’t seem we are ever going to know.

From Malthus’s tripartite grouping of vice, moral restraint, and misery, two things should be clear immediately. The first is why Keynes found Malthus so interesting, namely that homosexual passions are one (partial) way out of the Malthusian trap. The second is that there is a Straussian reading of Malthus, namely that he thought moral restraint, while wonderful, was limited in its applicability. So maybe then vice wasn’t so bad after all? Is it not better than war and starvation?

I don’t buy the Straussian reading as a description of what Malthus really meant. But he knew it was there, and he knew he was forcing you to think about just how bad you thought vice really was. Malthus for instance is quite willing to reference prostitution as one possible means to keep down population. He talks about “men,” and “a numerous class of females,” but he worries that those practices “lower in the most marked manner the dignity of human nature.” It degrades the female character and amongst “those unfortunate females with which all great towns abound, more real distress and aggravated misery are perhaps to be found, than in any other department of human life.”

How bad are those vices relative to starvation and population triage? Well, the modern world has debated that question and mostly we have opted for vice. You thus can see that the prosperity of the modern world does not refute Malthus. We faced the Malthusian dilemma and opted for one of his options, namely vice. It’s just that a lot of us don’t find those vices as morally abhorrent as Malthus did. You could say we invented another technology that (maybe) does not suffer from diminishing returns, namely improving the dignity and the living conditions of those who practice vice. Contemporary college dorms seem pretty comfortable, and they have plenty of birth control, and of course lots of vice in the Malthusian sense. While those undergraduates might experience high rates of depression and also sexual violation, that life of vice still seems far better than life near the subsistence point. I am not sure what Malthus would think of college dorm sexual norms (and living standards!), but his broader failing was that he did not foresee the sanitization and partial moral neutering of what he considered to be vice.

Mapping the debate about desirable futures

We can map talk about desirable futures along several axes. Here are a few:

  1. Axiology: partial or impartial (human prejudice vs view from nowhere).

  2. Metaethics: naturalism vs non-naturalism (orthogonality thesis; alignment problem).

  3. Evolution: fatalism or agency (inevitable vs contingent).

  4. Rationality: ecological vs axiological (maxipok or maximise across the multiverse).

The philosophical questions above inform the more empirical debates about emerging technologies, such as:

  1. Optimal rate of change: slow vs fast (or mixed).

  2. Competition vs governance.

  3. Convergence.

What are some other important axes? What are the most plausible combinations? Where do key thinkers land on these?

I find it surprisingly hard to name more than a handful of people who have written on all of the above in public.

But I'll have a go at placing people on these axes in a forthcoming post.

For now I'll just note that there's too much complexity here. Ultimately we need to distill our views down into some rough rules of thumb and faint guiding stars, then just chart a path through the froth of uncertainty (with wonder, vigour and Yes-saying).

As part of this, we need a vibe. e/acc is naive. Safety-ism lacks charisma. "It's time to build" is good, but tainted by association with Marc's recent screed.

As usual I'm back to Tyler—"be a builder":

Tyler Cowen: Uncertainty should not paralyze you. Try to do your best, pursue maximum expected value, and just avoid the moral nervousness. Be a little Straussian about it. Like here’s a rule, on average it’s a good rule, we’re all gonna follow it. Bravo, go on to the next thing. Be a builder.

Joe Walker: Get on with it?

Tyler Cowen: Yes. Because ultimately the nervous Nellies, they’re not philosophically sophisticated, they’re overindulging in their own neuroticism when you get right down to it. So it’s not like there’s some brute ‘let’s be a builder’ view and then in contrast there’s some deeper wisdom that the real philosophers pursue. It’s: you be a builder or you’re a nervous Nelly. Take your pick. I say be a builder.

Also: be a two-thirds utilitarian.

And: be a Yes-sayer.

Sometimes I think that "get on with it" is the push I need too. Why am I constantly pulled back to philosophy?