Joe Carlsmith on expected utility maximisation

There's nothing special about small probabilities—they're just bigger conditional probabilities in disguise.

[...]

Suppose the situation is: 1000 people are drowning. A is a certainty of saving one of them, chosen at random. B is a 1% chance of saving all of them.

Thus, for each person, A gives them a .1% chance of living; whereas B gives them a 1% chance. So every single person wants you to choose B. Thus: if you’re not choosing B, what are you doing, and why are you calling it “helping people”? Are you, maybe, trying to “be someone who saved someone’s life,” at the cost of making everyone 10x less likely to live? F*** that.
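To spell out the arithmetic (a quick check of my own, not from Carlsmith's post): under A each person's survival chance is 1/1000, under B it is 1/100, and B also saves ten times as many lives in expectation.

```python
# Back-of-the-envelope check of the rescue example. The numbers come straight
# from the quoted setup; the script is just my illustration.
n = 1000          # people drowning

# Option A: one person, chosen uniformly at random, is saved for certain.
p_a = 1.0 / n     # each person's chance of being the one saved

# Option B: a 1% chance that everyone is saved.
p_b = 0.01        # each person's chance of living

print(f"A: {p_a:.1%} per person, {n * p_a:g} life saved in expectation")
print(f"B: {p_b:.1%} per person, {n * p_b:g} lives saved in expectation")
# A: 0.1% per person, 1 life saved in expectation
# B: 1.0% per person, 10 lives saved in expectation
```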

[...]

If, in the face of a predictable loss, it’s hard to remember that e.g. you value saving a thousand lives a thousand times more than saving one, then you can remember, via coin-flips, that you value saving two twice as much as saving one, saving four twice as much as saving two, and so on.
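The chaining can be made concrete in code (my sketch of the coin-flip device, not Carlsmith's): if a fair coin flip on saving 2n is worth exactly saving n for sure, then ten flips connect saving one life to a roughly 0.1% shot at saving about a thousand.

```python
# The coin-flip ladder (my illustration, not from the post). Premise: you
# value saving 2n lives twice as much as saving n, so "save 2n on heads,
# nobody on tails" is worth exactly "save n for sure" in expectation.
lives, prob = 1, 1.0
for flips in range(1, 11):
    lives *= 2    # double the stakes...
    prob /= 2     # ...at half the probability
    print(f"{flips:2d} flips: {prob:.4%} chance of saving {lives} "
          f"(expected lives: {lives * prob:g})")
# After 10 flips: a roughly 0.1% chance of saving 1024 lives, with expected
# value still 1 -- the same structure as the 0.1%-vs-1% rescue case above.
```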

[...]

There's a vibe [...] that's fairly core to my own relationship with EUM: namely, something about understanding your choices as always “taking a stance,” such that having values and beliefs is not some sort of optional thing you can do sometimes, when the world makes it convenient, but rather a thing that you are always doing, with every movement of your mind and body. And with this vibe in mind, I think, it’s easier to get past a conception of EUM as some sort of “tool” you can use to make decisions, when you’re lucky enough to have a probability assignment and a utility function lying around — but which loses relevance otherwise. EUM is not about “probabilities and utilities first, decisions second”; nor, even, need it be about “decisions first, probabilities and utilities second,” as the “but it’s not action-guiding!” objectors sometimes assume. Rather, it’s about a certain kind of harmony in your overall pattern of decisions — one that can be achieved by getting your probabilities and utilities together first, and then figuring out your decisions, but which can also be achieved by making sure your decision-making satisfies certain attractive conditions, and letting the probabilities and utilities flow from there. And in this latter mode, faced with a choice between e.g. X with certainty, vs. Y if heads (and nothing otherwise), one need not look for some independently specifiable unit of value to tally up and check whether Y has at least twice as much of it as X. Rather, to choose Y-if-heads, here, just is to decide that Y, to you, is at least twice as valuable as X.
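In symbols (my gloss; Carlsmith keeps this in prose): choosing the gamble over the sure thing under EUM just is the comparative judgment about value.

```latex
% Preferring "Y if heads, nothing otherwise" to "X for certain" under EUM:
\[ \tfrac{1}{2}\,u(Y) + \tfrac{1}{2}\,u(\text{nothing}) \;\ge\; u(X). \]
% With u(nothing) normalized to 0, this is exactly
\[ u(Y) \;\ge\; 2\,u(X), \]
% i.e. the choice itself fixes that Y is, to you, at least twice as valuable as X.
```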

I emphasize this partly because if – as I did – you turn towards the theorems I’ll discuss hoping to answer questions like “would blah resources be better devoted to existential risk reduction or anti-malarial bednets?”, it’s important to be clear about what sort of answers to expect. There is, in fact, greater clarity to be had, here. But it won’t live your life for you (and certainly, it won’t tell you to accept some particular ethic – e.g., utilitarianism). Ultimately, you need to look directly at the stakes – at the malaria, at the size and value of the future – and at the rest of the situation, however shrouded in uncertainty. Are the stakes high enough? Is success plausible enough? In some brute and basic sense, you just have to decide.

https://handsandcities.com/2022/03/16/on-expected-utility-part-1-skyscrapers-and-madmen/
https://handsandcities.com/2022/03/18/on-expected-utility-part-2-why-it-can-be-ok-to-predictably-lose/

Dominic Cummings: given they don't take nuclear weapons seriously, never assume they're taking X seriously

For many years I’ve said that a Golden Rule of politics is that, given our leaders don’t take nuclear weapons seriously, never assume they’re taking X seriously and there is a team deployed on X with the incentives and skills to succeed.

People think this is an overstated metaphor but I always meant it literally.

Having explored the nuclear enterprise with deep state officials 2019-20, I can only stress just how extremely literally I mean this Golden Rule.

https://dominiccummings.substack.com/p/people-ideas-machines-ii-catastrophic

Francis Bacon on pragmatism

The roads to human power and to human knowledge lie close together, and are nearly the same; nevertheless, on account of the pernicious and inveterate habit of dwelling on abstractions, it is safer to begin and raise the sciences from those foundations which have relation to practice and let the active part be as the seal which prints and determines the contemplative counterpart.

https://en.wikisource.org/wiki/Novum_Organum/Book_II_(Spedding)

Adam Waytz on The Illusion of Explanatory Depth

If you asked one hundred people on the street if they understand how a refrigerator works, most would respond, yes, they do. But ask them to then produce a detailed, step-by-step explanation of how exactly a refrigerator works and you would likely hear silence or stammering. This powerful but inaccurate feeling of knowing is what Leonid Rozenblit and Frank Keil in 2002 termed, the illusion of explanatory depth (IOED), stating, “Most people feel they understand the world with far greater detail, coherence, and depth than they really do.”

Rozenblit and Keil initially demonstrated the IOED through multi-phase studies. In a first phase, they asked participants to rate how well they understood artifacts such as a sewing machine, crossbow, or cell phone. In a second phase, they asked participants to write a detailed explanation of how each artifact works, and afterwards asked them to re-rate how well they understand each one. Study after study showed that ratings of self-knowledge dropped dramatically from phase one to phase two, after participants were faced with their inability to explain how the artifact in question operates. Of course, the IOED extends well beyond artifacts, to how we think about scientific fields, mental illnesses, economic markets and virtually anything we are capable of (mis)understanding.

https://www.edge.org/response-detail/27117

Cass Sunstein and Richard Thaler on libertarian paternalism

Our central empirical claim here has been that in many domains, people’s preferences are labile and ill-formed, and do not predate social and legal contexts. For this reason, starting points and default rules are likely to be quite sticky. Building on empirical work involving rationality and preference formation, we have sketched and defended libertarian paternalism – an approach that preserves freedom of choice but that encourages both private and public institutions to steer people in directions that will promote their own welfare.

Some kind of paternalism, we believe, is likely whenever such institutions set out default plans or options. Unfortunately, many current social outcomes are both random and inadvertent, in the sense that they are a product of default rules whose behavior-shaping effects have never been a product of serious reflection. In these circumstances, the goal should be to avoid arbitrary or harmful consequences and to produce contexts that are likely to promote people’s welfare, suitably defined.

Cass Sunstein, Richard Thaler (2006) 'Preferences, Paternalism, and Liberty' http://journals.cambridge.org/abstract_S135824610605911X

Holden Karnofsky on utility legions

Utilitarianism allows "utility monsters" and "utility legions." A large enough benefit to a single person (utility monster), or a benefit of any size to a sufficiently large set of persons (utility legion), can outweigh all other ethical considerations. Utility monsters seem (as far as I can tell) to be a mostly theoretical source of difficulty, but I think the idea of a "utility legion" - a large set of persons such that the opportunity to benefit them outweighs all other moral considerations - is the root cause of most of what's controversial and interesting about utilitarianism today, at least in the context of effective altruism.
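The aggregation arithmetic behind a utility legion is simple (my gloss, not Karnofsky's notation): for any per-person benefit and any rival consideration, some finite number of beneficiaries tips the scale.

```latex
% A benefit of size \(\epsilon > 0\) to each of N people, against a rival
% consideration of total value V:
\[ N \cdot \epsilon \;>\; V \quad\text{whenever}\quad N > V / \epsilon, \]
% so for any fixed V, a large enough legion outweighs it.
```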

https://forum.effectivealtruism.org/posts/iupkbiubpzDDGRpka/other-centered-ethics-and-harsanyi-s-aggregation-theorem#The_scope_of_utilitarianism

Tyler Cowen on historicism and taking sides

I've become more historicist as I've got older. We in the west are embedded in a society that we should not pull apart and reassemble. We're embedded in some form of common sense morality; there's a history behind us. A lot of things we can't change readily, but we can make different alterations at the margin.

I don't have answers to the large-scale Parfitian or Rawlsian or Nozickian moral questions. I don't think there are absolutes, and even if there are I don't think there are many things we can treat as absolutes in real-world decision making.

[...]

I'm not sure there is really a morality across species that are very different and cannot trade with each other. It may be that in some unpleasant way we just have to take sides. And to take the side of a vision of the world that is not just nature but is also humans building... I don't think I can justify it morally but that is the side I will take. Because the alternative is we all go extinct pretty rapidly. I mean you can be a very conscientious vegan but if you look closely at different parts of your life they're actually all pretty morally unacceptable, where you live, the various supply chains you interact with.

I just don't think there's a utilitarian scale where you can add up the insects on one side and the humans on the other. And so I'm on the side of the humans and the other animals we trade with.

https://marginalrevolution.com/marginalrevolution/2022/02/jessica-flanigan-interviews-me-and-i-interview-her-back.html

Fallow period

It is winter. Cold and wet outside, but your apartment is warm. You take your morning walk, do your 20 press-ups, and show up at your desk—to write and think, per your plan for the week. But your mind is blank. All you really want to do is sit and read. Or gaze at storm clouds, the rain blowing sideways. Or take another walk.

Eliezer Yudkowsky on naturalism

Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited:  Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?

For so many lives and so much loss to turn on a single event, seems disproportionate.  The Divine Plan ought to make more sense than that.  You can believe in a Divine Plan without believing in God—Karl Marx surely did.  You shouldn't have millions of lives depending on a casual choice, an hour's timing, the speed of a microscopic flagellum.  It ought not to be allowed.  It's too disproportionate.  Therefore, if Adolf Hitler had been able to go to high school and become an architect, there would have been someone else to take his role, and World War II would have happened the same as before.

But in the world beyond the reach of God, there isn't any clause in the physical axioms which says "things have to make sense" or "big effects need big causes" or "history runs on reasons too important to be so fragile".  There is no God to impose that order, which is so severely violated by having the lives and deaths of millions depend on one small molecular event.

The point of the thought experiment is to lay out the God-universe and the Nature-universe side by side, so that we can recognize what kind of thinking belongs to the God-universe.  Many who are atheists, still think as if certain things are not allowed.  They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect.  But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler's personality, often in defiance of his generals and advisors.  There is no particular empirical justification that I happen to have heard of, for doubting this.  The main reason to doubt would be refusal to accept that the universe could make so little sense—that horrible things could happen so lightly, for no more reason than a roll of the dice.

But why not?  What prohibits it?

In the God-universe, God prohibits it.  To recognize this is to recognize that we don't live in that universe.  We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else.  Whatever physics says will happen, will happen.  Absolutely anything, good or bad, will happen.  And there is nothing in the laws of physics to lift this rule even for the really extreme cases, where you might expect Nature to be a little more reasonable.

Tyler Cowen on who should get more status

Nick: You’ve often said that most political disputes are really disputes about who gets status. Nominate a few things or people to which we should give more status?

Tyler: Everyone. Everyone pretty much deserves more status (not Hitler, not mass murderers). Most things are underappreciated and they're criticized, and praise motivates people and helps them have a sense of fitting in. Going around and appreciating and expressing your appreciation for what you really value is one of the best things you can do with your life.

https://brownpoliticalreview.org/2019/10/bpr-interviews-tyler-cowen/

Nick Bostrom on singletons

Can we trust evolutionary development to take our species in broadly desirable directions?

[...]

What we shall call the Panglossian view maintains that this past record of success gives us good grounds for thinking that evolution (whether biological, memetic, or technological) will continue to lead in desirable directions. This Panglossian view, however, can be criticized on at least two grounds. First, because we have no reason to think that all this past progress was in any sense inevitable – much of it may, for aught we know, have been due to luck. And second, because even if the past progress were to some extent inevitable, there is no guarantee that the melioristic trend will continue into the indefinite future.

[...]

Evolution made us what we are, but no fundamental principle stands in the way of our developing the capability to intervene in the default course of events in order to steer future evolution towards a destiny more congenial to human values.

Directing our own evolution, however, requires coordination. If the default evolutionary course is dystopian, it would take coordinated paddling to turn the ship of humanity in a more favorable direction. If only some individuals chose the eudaemonic alternative while others pursued fitness‐maximization, then, by assumption, it would be the latter that would prevail. Fitness‐maximizing variants, even if they started out as a minority, would be preferentially selected for at the expense of eudaemonic agents, and a process would be set in motion that would inexorably lead to the minimization or disappearance of eudaemonic qualities, and the non‐eudaemonic agents would be left to run the show.

To this problem there are only two possible solutions: preventing non‐eudaemonic variants from arising in the first place, or modifying the fitness function so that eudaemonic traits become fitness‐maximizing.
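A toy replicator model makes the dynamic concrete (my sketch of standard selection arithmetic, not Bostrom's model): even a small fitness edge lets a tiny fitness-maximizing minority take over, and removing the edge is exactly Bostrom's second solution.

```python
# Toy selection dynamics for the scenario above (my sketch, not Bostrom's).
# Two types: eudaemonic agents (relative fitness 1.0) and fitness-maximizers
# (relative fitness `edge`). Each generation, shares are reweighted by fitness.
def maximizer_share(generations, s=0.01, edge=1.05):
    """Population share of fitness-maximizers after repeated selection."""
    for _ in range(generations):
        s = s * edge / (s * edge + (1 - s) * 1.0)
    return s

for g in (0, 50, 100, 200):
    print(f"generation {g:3d}: {maximizer_share(g):.1%} fitness-maximizers")
# A 1% minority with a 5% edge passes 50% within about 100 generations and
# approaches 100% soon after. With edge=1.0 -- Bostrom's second solution,
# making eudaemonic traits fitness-maximizing -- the share never moves.
```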

[...]

A singleton could be a rather minimalist structure that could operate without significantly disrupting the lives of its inhabitants. And it need not prohibit novelty and experimentation, since it would retain the capacity to intervene at a later stage to protect its constitution if some developments turned malignant. Increased social transparency, such as may result from advances in surveillance technology or lie detection, could facilitate the development of a singleton. Deliberate international political initiatives could also lead to the gradual emergence of a singleton, and such initiatives might be dramatically catalyzed by ‘wild card’ events such as a series of cataclysms that highlighted the disadvantages of a fractured world order. It would be a mistake to judge the plausibility of the ultimate development of a singleton on the basis of ephemeral trends in current international affairs. The basic conditions shaping political realities may change as new technologies come online, and it is worth noting that the long-term historical trend is towards increasing scope of human coordination and political integration. If this trend continues, the logical culmination is a singleton.

[...]

A singleton could take a variety of forms and need not resemble a monolithic culture or a hive mind. Within the singleton there could be room for a wide range of different life forms, including ones that focus on non‐eudaemonic goals. The singleton could ensure the survival and flourishing of the eudaemonic types by restricting the ownership rights of non‐eudaemonic entities, by subsidizing eudaemonic activities, by guaranteeing the enforcement of property rights, by prohibiting the creation of agents with human‐unfriendly values or psychopathic tendencies, or in a number of other ways. Such a singleton could guide evolutionary developments and prevent our cosmic commons from going to waste in a first‐come‐first‐served colonization race.

[...]

We do not know that a dystopian scenario is the default evolutionary outcome. Even if it is, and even if the creation of a singleton is the only way to forestall ultimate catastrophe, it is a separate question what policies it makes sense to promote in the here and now. While creating a singleton would help to reduce certain risks, it may at the same time increase others, such as the risk that an oppressive regime could become global and permanent. If our preliminary study serves to draw attention to some possibly non‐obvious considerations and to stimulate more rigorous analytic work, its purpose will have been achieved.

https://www.nickbostrom.com/fut/evolution.html


It could act merely as a subtle enforcer of certain background conditions that could serve, e.g. to guarantee security or to administer some other minimal governmental tasks. [...] When considering the characteristics of a singleton it would be a mistake to assume that it would necessarily possess the attributes commonly associated with large human bureaucracies – rigidity, lack of imagination, inefficiency, a tendency to micro-manage and to expand its own powers, etc. This would be true of some possible singletons but it might not be true of others.

The concept of a singleton is thus much broader and more abstract than the concept of a world government. A world government (in the ordinary sense of the word) is only one type of singleton among many.

Nevertheless, all the possible singletons share one essential property: the ability to solve global coordination problems. Intelligent species that develop the capability to solve global coordination problems, such as those listed in the next section, may in the long run develop along very different trajectories than species that lack this capacity.

[...]

A major risk with creating a singleton is that it would turn out to be a bad singleton. Smaller units of decision-making, such as states, can also turn bad. But if a singleton goes bad, a whole civilization goes bad. All the eggs are in one basket.

https://www.nickbostrom.com/fut/singleton.html

Anders Sandberg on transhumanism

Anders Sandberg characterises the essence of transhumanism as:

Questioning the necessity and optimality of the current human condition, as well as suggesting that methods to improve it might be both feasible and desirable. If we were living in a fantasy world transhumanists would no doubt argue in favor of using cutting edge magic to improve life.

Sandberg notes that, in contrast, Nick Bostrom sometimes emphasises "the desire to explore the posthuman realm, the states of being that are currently unavailable to us".

But:

In practice there [are] plenty of transhumanists who are not terribly interested in becoming radically posthuman - a few extra centuries in comfort with enhanced minds and bodies is all right with them. They want to personally explore the nicer reaches of the human modes of being and maybe some near posthuman modes. But the common theme is that they do not see the current limitations as desirable, and think (with varying levels of confidence and evidence) that there are or will be ways of overcoming them.

https://marginalrevolution.com/marginalrevolution/2009/06/what-is-transhumanism.html?commentID=157345040


Elsewhere, Sandberg writes:

All things considered, the human condition has many things going for it. Unfortunately, there are some problems. The need to sleep. Hangovers. Pain. Forgetfulness. Bad judgement. Cruelty. Depression. Ageing. Mortality. To name a few.

One approach is to try to accept these limitations and pains. Learning to live with adversity can sometimes be good for a person—it might teach them perseverance or humility . . . Unfortunately, adversity can also numb, harden, or crush us—and surely we should not just accept cruelty or ignorance as a fact of life.

Another approach is to try to fix our limitations and problems. This is the goal of human enhancement: if we are forgetful, maybe we can improve our brains to forget less, for example by taking drugs that increase neural plasticity. To counteract ageing we might use gene therapy to increase production of enzymes that decline with age, remove aged and ill cells, or add fresh stem cells.

Human enhancement is all around us. The morning coffee or tea contains stimulating caffeine that counteracts sleepiness. Vaccines are a form of collective, global immunity against illnesses we may have never encountered. Less invasively, most of us live our lives with smartphones connecting us to a sizeable fraction of humanity and its knowledge. We are never alone, never lost, never bored, able to record anything. Our medieval ancestors would find our (long, healthy, rich) lives superhuman.

Robin Hanson on world government and collective suicide

If your mood changes every month, and if you die in any month where your mood turns to suicide, then to live 83 years you need to have one thousand months in a row where your mood doesn’t turn to suicide. Your ability to do this is aided by the fact that your mind is internally divided; while in many months part of you wants to commit suicide, it is quite rare for a majority coalition of your mind to support such an action.

[...]

When there are powers large enough that their suicide could take down civilization, then the risk of power suicide becomes a risk of civilization suicide. Even if the risk is low in any one year, over the long run this becomes a serious risk.
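The compounding arithmetic (my numbers, purely illustrative) is the same at both scales: survival requires an unbroken run of no-suicide periods, so the probability decays geometrically.

```python
# Compounding survival odds for Hanson's point (my illustrative numbers).
# Surviving N periods means "no suicide" in every single one, so the chance
# is (1 - p) ** N, which heads toward zero for any per-period risk p > 0.
def survival(p, periods):
    return (1 - p) ** periods

print(f"person, 1000 months at 0.1%/month: {survival(0.001, 1000):.0%}")
print(f"power, 10,000 years at 0.1%/year:  {survival(0.001, 10_000):.3%}")
# person, 1000 months at 0.1%/month: 37%
# power, 10,000 years at 0.1%/year:  0.005%
```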

[...]

Alas, central power risks central suicide, either done directly on purpose or as an indirect consequence of other broken thinking. In contrast, in a sufficiently decentralized world when one power commits suicide, its place and resources tend to be taken by other powers who have not committed suicide. Competition and selection is a robust long-term solution to suicide, in a way that centralized governance is not.

This is my tentative best guess for the largest future filter that we face, and that other alien civilizations have faced. The temptation to form central governments and other governance mechanisms is strong, to solve immediate coordination problems, to help powerful interests gain advantages via the capture of such central powers, and to slake the ambition thirst of those who would lead such powers. Over long periods this will seem to have been a wise choice, until suicide ends it all and no one is left to say “I told you so.”

https://www.overcomingbias.com/2018/11/world-government-risks-collective-suicide.html

Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.

https://www.overcomingbias.com/2018/05/two-types-of-future-filters.html

Joe Carlsmith on non-naturalist realism vs anti-realism

Utopia, for [the non-naturalist realist], was always the promise of something more than e.g. joy, love, creativity, understanding — it was the promise of those things, with an extra non-natural sauce on top. A Utopia with no sauce is an empty shell, the ethical analog of a phenomenal zombie. It looks right, but the crucial, invisible ingredient is missing.

How one reacts here is ultimately a question of psychology, not metaphysics. Some robots are envelope-ranking-maximizers, and they won’t change their goals just because the envelope probably doesn’t exist. But I think we should be wary of assuming too quickly that we’re like this.

[...]

[On the anti-realist picture] it’s clearer to me what I’m doing, in deciding whether to help the deer, create Utopia, etc, given the posited metaphysical clarity. I feel somehow more grounded, more like a creature in the real, raw, beautiful world, with my eyes open, taking full responsibility for my actions, and less like I’m somehow playing pretend, seeking guidance from some further realm or set of facts that I secretly suspect does not exist.

[...]

To me, what this currently looks like is a place where the choice about what sort of world to create is in a deep way on us. Just as there is no theistic God to tell us what to do, so there is no further realm of normative facts to tell us, either. We have to choose for ourselves. We have to be, as it were, adults. To stand together, and sometimes alone, amidst the beauty and horror and confusion of the world; to look each other and ourselves in the eye; to try to see as clearly as possible what’s really going on, what our actions really involve and imply and create, what it is we really do, when we do something; and then, to choose.

This isn’t to say we can’t mess up: we can — deeply, terribly, irreparably. But messing up will come down to some failure to understand our actions, ourselves, and each other, and to act on that understanding; some veil between us and the real world we inhabit; not some failure to track, in our decisions, the True Values hidden in some further realm. And when we don’t mess up – when we actually find and build and experience what it is we would seek if we saw with full clarity — what we will get isn’t the world, and therefore something else, the “goodness” or “value” of that world, according to the True Values beyond the world. It will be just the world, just what we and others chose to do, or to try to do, with this extraordinary and strange and fleeting chance, this glimpse of whatever it is that’s going on.

https://handsandcities.com/2021/01/03/the-despair-of-normative-realism-bot/

Bernard Williams on Sidgwick and the ambitions of ethics

My own view is that no ethical theory can render a coherent account of its own relation to practice: it will always run into some version of the fundamental difficulty that the practice of life, and hence also an adequate theory of that practice, will require the recognition of what I have called deep dispositions; but at the same time the abstract and impersonal view that is required if the theory is to be genuinely a theory cannot be satisfactorily understood in relation to the depth and necessity of those dispositions. Thus the theory will remain, in one way or another, in an incoherent relation to practice.

Making Sense of Humanity (1995), "The point of view of the universe: Sidgwick and the Ambitions of Ethics"

John Maynard Keynes on Sidgwick

He never did anything but wonder whether Christianity was true and prove that it wasn't and hope that it was.

[...]

There is no doubt about his moral goodness. And yet it is all so dreadfully depressing – no intimacy, no clear-cut crisp boldness. Oh, I suppose he was intimate but he didn't seem to have anything to be intimate about except his religious doubts. And he really ought to have got over that a little sooner; because he knew that the thing wasn't true perfectly well from the beginning.

Robin Hanson on value drift

Value drift is just a generic problem for humans, ems or AI. It's just what's always happened so far. It's the default for what will happen in the future. If you hate it you're in trouble because it's really really likely. Some people for some reason think that value drift in humans is well bounded, while value drift in machines is not only unbounded but happens quickly, and I don't really see the grounds for that. [...] The main thing is that in the past when value drift happened, change was so slow that you didn't see it in your lifetime, so you didn't worry very much about it. As change gets faster and lifetimes get longer, your life will encompass more value drift. And then whether it's humans or machines or whatever, you will see it, and if you don't like it then you will see something you don't like.

Book talk at The Foresight Institute.

John Richardson on Nietzsche's metaethics

[Nietzsche discusses] three main ways of valuing: the body’s, the moral agent’s, and his own. We could also call these animal, human, and superhuman valuing. Each has its own “semantics” (or “intentionality”); that is, its own way of positing its values as good. So Nietzsche has what might look like three separate and inconsistent metaethical positions but that are really three elements in a unified account of valuing.

[...]

Our drives value simply by using signs to steer by (toward). They “see” or “interpret” their values as good just by using them this way. They don’t posit them as “true” to anything outside them. Instead they judge and adjust these signs as they learn how well they “pay off” in expanding power. As we saw, the drives don’t recognize what they’re doing as they value. They don’t see the “frame” of their valuing around their values; they lack the perspectivist truth. But they refrain from the externalist mistake of thinking their values tasked to match real goods outside.

By contrast Nietzsche thinks that our agential valuing does make that externalist posit. This is one of its main impositions on our drive-valuing. In order to “tame” the latter for social life, the habit of obeying external norms needs to be inculcated. It’s to license this habit of obedience that the conviction is gradually ingrained that there are real values outside one’s valuing that one needs to align it toward.

[...]

The historical character of this posit and the way it is overlaid on a deeper valuing that doesn’t make it suggest the contingency of such externalism. They support Nietzsche’s optimism that human can find a way to grow out of what is only a (deeply settled) bad habit.

[...]

Nietzsche’s frequent expressions of [error theory] are unsurprising given what we’ve just seen: they apply to our agential, moral valuing which does indeed claim its values to be real—which they’re not.

[...]

But this error-theory does not apply to the two other ways of valuing in Nietzsche’s scenario. Bodily valuing makes no truth-claim, and his own valuing does, but a different one that (we’ll see) has a chance to be true. Nietzsche denies that all valuing makes the mistake of positing its values as real. And why indeed would he allow our agential-moral valuing to represent valuing in general? Human is “the sick animal” due precisely to the defective way it values. Nietzsche’s return to “natural” values is his effort to bring our conscious and worded values into healthy alignment with our drive-valuing; this will include undoing that false posit.

[...]

Nietzsche justifies his values by direct appeal to the values we already have. He tries to point out values we have without noticing them. The “ought” is supplied not from outside but by what the person values already. He claims only to offer the means by which that valuing will want to improve itself.

By his perspectivism, Nietzsche gives credit to our existing values as the only determiners of what’s good for us. So his appeal is ultimately to these. But our valuing of these values includes a will and ability to improve them, in the two fundamental respects we noticed in §1.4. We will to improve them as signs for power—a will embedded deeply in us just as living things. We also will to improve our values in how well they face the truth—a will bred into us humans and indeed distinctive of our kind. These deep aims function as second-order or meta-values, criteria by which we will to improve our first-order values.

John Richardson, Nietzsche's Values, Chapter 1