The Long-term Significance of Reducing Global Catastrophic Risks

Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on global catastrophic risks.

One focus area for the Open Philanthropy Project is reducing global catastrophic risks (such as from pandemics, potential risks from advanced artificial intelligence, geoengineering, and geomagnetic storms). A major reason the Open Philanthropy Project is interested in global catastrophic risks is that a sufficiently severe catastrophe could change the long-term trajectory of civilization in an unfavorable direction (potentially including human extinction if a catastrophe is particularly severe and our response is inadequate).

One possible perspective on such risks—which I associate with the Future of Humanity Institute, the Machine Intelligence Research Institute, and some people in the effective altruism community who are interested in the very long-term future—is that (a) the moral value of the very long-term future overwhelms other moral considerations, and (b) given any catastrophe short of an outright extinction event, humanity would eventually recover, leaving its long-term prospects relatively unchanged. On this view, seeking to prevent potential outright extinction events has overwhelmingly greater significance for humanity's ultimate future than seeking to prevent less severe global catastrophes.

In contrast, the Open Philanthropy Project's work on global catastrophic risks focuses on both potential outright extinction events and global catastrophes that, while not threatening direct extinction, could cause deaths amounting to a significant fraction of the world's population or global disruptions far outside the range of historical experience. This post explains why I believe this approach is appropriate even when accepting (a) from the previous paragraph, i.e., when assigning overwhelming moral importance to the question of whether civilization eventually realizes a substantial fraction of its long-run potential. While it focuses on my own views, these views are broadly shared by several others who focus on global catastrophic risk reduction at the Open Philanthropy Project, and have informed the approach we're taking.

In brief:

Therefore, when it comes to risks such as pandemics, nuclear weapons, geoengineering, or geomagnetic storms, there is no clear case for focusing on preventing potential outright extinction events to the exclusion of preventing other global catastrophic risks. This argument seems most debatable in the case of potential risks from advanced artificial intelligence, and we plan to discuss that further in the future.

Basic framework and terms

Consider two possible heuristics that could be used when evaluating efforts to reduce global catastrophic risk in a utilitarian-type moral framework:

Throughout this post, I focus on the latter. I discuss two different schools of thought on what the "maximize long-term potential" heuristic implies about the proper strategy for reducing global catastrophic risk. To characterize these two schools of thought, first consider two levels of risk for a catastrophe:

Some events do not sort neatly into one of these two categories, but that will not be important for the purposes of this discussion.

In making this distinction, I have avoided using the terms "global catastrophic risk" and "existential risk" because they are sometimes used in different ways, and because what counts as a "global catastrophic risk" or an "existential risk" depends on long-term consequences of events that are very hard to predict. For example, the Open Philanthropy Project defines "global catastrophic risks" as "risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction)," whereas some other people who work professionally on existential risk use the term "global catastrophic risk" to refer to events that kill a substantial fraction of the world's population but do not result in extinction, reserving the term "existential risk" for events that would directly result in extinction (or have other, more obvious long-term consequences). However, the more formal definition of "existential risk" is a risk that "threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development." That definition is neutral on the question of whether level 1 events are existential risks. A major purpose of this post is to explore the possibility that level 1 events are existential risks.

Now consider the two schools of thought:

I generally associate the first perspective with FHI, MIRI, and many people in the effective altruism community who are focused on the very long-term. For example, Luke Muehlhauser, former Executive Director of MIRI (and now a GiveWell Research Analyst), has argued:

One reason AI may be the most urgent existential risk is that it's more likely for AI (compared to other sources of catastrophic risk) to be a full-blown existential catastrophe (as opposed to a merely billions dead catastrophe). Humans are smart and adaptable; we are already set up for a species-preserving number of humans to survive (e.g. in underground bunkers with stockpiled food, water, and medicine) major catastrophes from nuclear war, superviruses, supervolcano eruption, and many cases of asteroid impact or nanotechnological ecophagy. Machine superintelligences, however, could intelligently seek out and neutralize humans which they (correctly) recognize as threats to the maximal realization of their goals.

In contrast, this post argues for the "dual focus" approach. (While we are arguing that preventing level 1 events is important for maximizing civilization's long-term potential, we are not arguing for the further claim that only level 1 and level 2 events are important for this purpose. See our related discussion of "flow-through effects.")

There are other possible justifications for a "dual focus" approach. For example:

However, this post focuses on the implications of level 1 events for long-term potential independently of these considerations.

The core of the argument is that there is some (highly uncertain) probability that civilization would not fully recover from a level 1 event, and—with the possible exception of AI—that the probability of level 1 events is much greater than the probability of level 2 events. I would guess that when we multiply these probabilities through, the total risk to humanity from level 1 events is in the same rough ballpark as the total risk from level 2 events (again, bracketing risks from AI). If I and others at the Open Philanthropy Project changed our view on this, we might substantially change the way we're approaching global catastrophic risks.
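To make the structure of this comparison explicit, here is a minimal sketch of the multiplication described above, using purely hypothetical placeholder probabilities (the numbers are illustrative assumptions chosen for arithmetic clarity, not estimates made in this post):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hypothetical illustration only: the placeholder probabilities below are assumptions,
% not estimates from the post.
\[
  \text{long-run risk from level 1 events} \approx
    P(\text{level 1}) \cdot P(\text{no full recovery} \mid \text{level 1})
\]
\[
  \text{long-run risk from level 2 events} \approx P(\text{level 2})
\]
% For example, placeholder values $P(\text{level 1}) = 0.10$ and
% $P(\text{no full recovery} \mid \text{level 1}) = 0.10$ give a product of $0.01$,
% which would be in the same rough ballpark as a hypothetical $P(\text{level 2}) = 0.01$.
\end{document}
```

The point of the sketch is only that if level 1 events are much more likely than level 2 events, then even a modest probability of non-recovery can leave the two expected long-run risks in the same rough ballpark.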

Global catastrophes seem much more likely than extinction events

For almost every class of risk I am aware of, less extreme catastrophes seem far more likely than direct extinction events. This claim is consistent with the Google sheet that we published as part of a March update summarizing our conclusions from investigating multiple possible global catastrophic risks. To consider some examples:

General reasons to think a global disruption might affect the distant future

This section argues that civilization has had unusually rapid progress over the last few hundred years, that the mechanisms of this progress are poorly understood, that we have essentially no experience with level 1 events, and that there is a risk that civilization will not fully recover if a level 1 event occurs.

The world has had unusually positive civilizational progress over the last few hundred years

Humanity has seen unparalleled scientific, technological, and social progress over the last few hundred years. This scientific and technological progress resulted in the Industrial Revolution, whose consequences for per capita income can be observed in the following chart (from Maddison 2001, p. 42):

[Chart: per capita income over time, from Maddison 2001, p. 42]

The groundwork for this revolution was laid by advances in previous centuries, so a "scientific and technological progress" chart wouldn't look exactly like this "income per person" chart, but I'd guess it would look qualitatively similar (with the upward trajectory beginning somewhat sooner). In terms of social progress, the last few centuries have seen the following, all of which I see as positive progress in terms of utilitarian-type values:

(Note that many of these are discussed in The Better Angels of Our Nature, which Holden read, sought out criticism of, and wrote about here.) I am aware of no other period in human history with comparable progress along these dimensions. I believe there is still significant room for improvement along many of these and other dimensions, which is one reason I see it as very important that this kind of progress continues.

There is little consensus about the mechanisms underlying civilizational progress

There is a large literature studying the causes of the Industrial Revolution and the "Great Divergence"—i.e. the process by which some countries leaped ahead of others (some of which later caught up, and some of which may catch up in the future) in terms of wealth and technological capacity. Explanations of this phenomenon have appealed to a variety of factors, as can be seen from this overview of recent work in economic history on this topic. My understanding is that there is no consensus about which of these factors played an essential role, or even that all potentially essential factors have been identified.

Most of the relevant literature I've found focuses primarily on the question, "Why did the Industrial Revolution happen in Europe and not in Asia?" and has little discussion of the question, "Why was there an Industrial Revolution at all?" I spoke with some historians, economists, and economic historians to get a sense of what is known about the second question. David Christian, a historian in the field of "big history," argued that (my non-verbatim summary):

[M]any aspects of the Industrial Revolution were not inevitable, but some extremely general aspects of industrialization (such as humans eventually gaining the ability to use fossil fuels) were essentially inevitable once there was a species capable of transferring substantial amounts of knowledge between people and across generations.

He also pointed to the fact that:

There were four “world zones” in history that didn’t interact with each other very much: Afro-Eurasia, the Americas, Australasia, and the Pacific. In Professor Christian’s view, these world zones developed at different rates along similar trajectories (though he emphasized that other historians might dispute this claim).

However, Professor Christian also cautioned that his view about the inevitability of technological progress was not generally shared by historians, who are often suspicious of claims of historical inevitability. Other economic historians I spoke with argued against the inevitability of the degree of technological progress we've seen. For example, Joel Mokyr argued that (my non-verbatim summary):

A necessary ingredient to the Industrial Revolution in Europe was the development of a certain kind of scientific culture. This culture was unusual in terms of (i) its emphasis on experimentation and willingness to question conventional wisdom and authority, (ii) its ambition (illustrated in the thought of Francis Bacon) to find lawlike explanations for natural events, and (iii) its desire to use scientific discoveries to make useful technological advances.

If not for this unique cultural transformation in Europe, he argues, the world would never have developed advanced technologies like digital computers, antibiotics, and nuclear reactors.

Though I have not studied the issue deeply, I would guess that there is even less consensus on the inevitability of the social progress described above. It seems that it would be extremely challenging to come to a confident view about the conditions leading to this progress. In the absence of such confidence, it seems possible that a catastrophe of unprecedented severity would disrupt the mechanisms underlying the unique civilizational progress of the last few centuries.

There is essentially no precedent for level 1 catastrophes

Some of the most disruptive past events have carried very large death tolls as a fraction of the world's population, or as a fraction of the population of some region, including:

For the most part, these events don't seem to have placed civilizational progress in jeopardy. However, it could be argued that World War II could have had a less favorable ultimate outcome, with potentially significant consequences for long-term trends in social progress. In addition, there are very few past cases of events this extreme, and some potential level 1 events could be even more extreme. Moreover, the Black Death—by far the largest catastrophe on the list relative to population size—took place before the especially rapid progress began, and so had less potential to disrupt it. The remaining events on this list seem to have killed under 5% of the world's population. Thus, past experience provides little ground for confidence that the positive trends discussed above would continue in the face of a level 1 event, especially one of unprecedented severity.

In this way, our situation seems analogous to that of someone who is caring for a sapling, has very limited experience with saplings, has no mechanistic understanding of how saplings work, and wants to ensure that nothing stops the sapling from becoming a great redwood. It would be hard for this person to be confident that the sapling's eventual long-term growth would be unaffected by unprecedented shocks—such as cutting off 40% of its branches or letting it go without water for 20% longer than it ever had before—even granting that such shocks wouldn't directly or immediately kill it. For similar reasons, it seems hard to be confident that humanity's eventual long-term progress would be unaffected by a catastrophe that resulted in hundreds of millions of deaths.

If I believed that sustained scientific, technological, and social progress were inevitable features of the world, I would see a weaker connection between the occurrence of level 1 events and the long-term fate of humanity. If I were confident, for example, that a level 1 event would simply "set back the clock" and let civilization replay itself essentially unchanged, then I might believe it would take something like a level 2 event to change civilization's long-term trajectory. But our limited understanding of the causes of progress over the last few centuries and the world's essentially negligible experience with such extreme events do not offer grounds for confidence in that perspective.

Specific mechanisms by which a catastrophe could affect the distant future

In addition to the general reasons above to expect that a level 1 event could affect the distant future, I can also point to specific mechanisms by which civilizational progress could be disrupted, and by which such disruption could be bad for the long-term fate of humanity.

Disruption of sustained scientific and technological progress

Suppose that sustained scientific and technological progress requires all of:

Imagine a level 1 event that disproportionately affected people in areas that are strong in innovative science (of which we believe there are relatively few). Possible consequences of such an event might include a decades-long stall in scientific progress, or even an end to scientific culture and institutions and a return to rates of scientific progress comparable to those in areas with weaker scientific institutions today, or in pre-industrial civilization. Speaking very speculatively, this could lead to various failures to realize humanity's long-term potential, including:

Disruption of sustained social progress

As with scientific progress, social progress (in terms of utilitarian-type values, as discussed above) seems to have been disproportionately concentrated in recent centuries, and its mechanisms remain poorly understood. A global catastrophe, especially one that disproportionately struck areas particularly important for social progress, could stall or even reverse that progress from a utilitarian-type perspective, or greatly decrease the power of open societies relative to authoritarian regimes. This could result in a number of long-term failure scenarios for civilization. For example:

Potential offsetting factors

So far my discussion has focused on how a global catastrophe could negatively affect the fate of humanity. Are there any potential mechanisms by which a global catastrophe could make our long-term prospects brighter from the perspective assumed in this post (prioritizing maximal long-term potential, preferably including space colonization)? In conversations, I have heard the following arguments for why this is possible:

I want to acknowledge these possible objections to our line of reasoning, but do not plan to discuss them deeply. My considered judgment—which would be challenging to justify formally, and which I acknowledge I have not fully explained here—is that these factors do not outweigh the factors pushing in the opposite direction.

Some reasons for this:

Conclusion and strategic implications

I believe that seeking to prepare for global catastrophes that could result in deaths amounting to a substantial fraction of the world's population may be important for humanity's future because:

As such, I believe that a "dual focus" approach to global catastrophic risk mitigation is appropriate, even when making the operating assumptions of this post (a focus on the long-term potential of humanity from a utilitarian-type perspective). Advanced artificial intelligence seems like the most plausible area where an outright extinction event (e.g. due to an extremely powerful agent with misaligned values) may be comparably likely to a less extreme global catastrophe (e.g. due to the use of extremely powerful artificial intelligence to disrupt geopolitics). We plan to discuss this issue further in the future.

Acknowledgements

Thanks to the following people for reviewing a draft of this post and providing thoughtful feedback (this of course does not mean they agree with the post or are responsible for its content): Alexander Berger, Nick Bostrom, Paul Christiano, Owen Cotton-Barratt, Daniel Dewey, Eric Drexler, Holden Karnofsky, Howie Lempel, Luke Muehlhauser, Toby Ord, Anders Sandberg, Carl Shulman, and Helen Toner.