This piece is part 2 in a three-part series about the Great Filter concept, with a particular focus on the risks and challenges facing humanity in the 21st Century. Part 1 introduced the Great Filter idea, and part 3 argues for the necessity of Artificial Super-Intelligence (ASI) to meet those challenges.
Futurists should generally be aware of the phenomenon known as Technological Convergence, in which the functions of multiple devices and platforms are not only integrated into single solutions, but the different technologies also catalyse each other's development and innovation. The classic example is the smartphone, which has subsumed the functions of telephones, cameras, maps, compasses, text and email, translation, web browsing, and any number of other prior technologies.
The ultimate expression of the technological convergence idea is the Technological Singularity: a brief period in which accelerating technological development reaches such a pace (almost certainly due to recursive AI self-modification) that all the “rules” or age-old certainties of human existence are apparently thrown out overnight, and humans can no longer control or fully comprehend “their” civilization. As frightening as that may sound to some people, it is quite possibly the only thing that stands between humanity and complete annihilation. Part 3 in this series will explore that possibility, but first we must make a dispassionate, rational assessment of the threat humanity faces.
First, we must note that it is not only good developments that can accelerate and converge. We are well aware of a range of dangerous trends in the world, many of which appear to be accelerating, possibly exponentially. More importantly (and more dangerously), these trends are not independent. As some risks come closer to being realized, they bring other risks closer to realization as a consequence, and the various threats increasingly converge upon a single, potentially civilization-killing Threat Function.
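To see why that interdependence matters, consider a minimal numerical sketch. The probabilities below are purely illustrative assumptions, not real estimates; the point is only that when one risk makes another more likely, the chance of both striking together can be several times higher than a naive “independent risks” estimate would suggest.

```python
# Illustrative sketch only: made-up numbers, not a real risk model.
# Two hypothetical risks, A (e.g. resource shortage) and B (e.g. armed conflict).

p_a = 0.10           # assumed standalone probability of A occurring this decade
p_b = 0.10           # assumed standalone probability of B occurring this decade
p_b_given_a = 0.50   # assumed probability of B once A has already occurred

# If the risks were truly independent, both occurring would be rare:
p_both_independent = p_a * p_b            # 0.01  (1%)

# If A makes B far more likely, the joint probability jumps:
p_both_dependent = p_a * p_b_given_a      # 0.05  (5%)

print(f"P(A and B), assuming independence: {p_both_independent:.2%}")
print(f"P(A and B), with dependence:       {p_both_dependent:.2%}")
```

With these invented numbers the joint risk is five times larger once the dependence is taken into account, and that is with only two interacting threats rather than the dozen or more listed below.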
Obviously we cannot be exhaustive here, but let’s briefly consider the most significant problematic trends we see in the world today, grouped into five broad categories as follows:
Environmental crisis:
Ecological damage, climate change, food chain collapse
Resource shortages:
Fresh water shortage, food shortage, oil & mineral shortages
Politics & economics:
Dysfunctional political systems & economies, financial market collapse, full economic collapse
Conflict:
Civil disorder, local conventional wars, global CBRN (Chemical, Biological, Radiological, Nuclear) war
Technological threats:
Runaway AI, “Grey Goo” nanotechnology, power plant meltdown / toxic spill
Bayes’ Theorem & Conditional Probability Analysis
Convergent Risk is a slippery, amorphous creature which we humans find very hard to recognize, simply because we have not evolved to think in terms of abstract, aggregate risk, but rather to rapidly react to specific, immediate, discrete risks. In other words, we cannot trust our intuitions when it comes to global risk, because it is effectively invisible to the cognitive architecture that we humans have evolved. To explain the problem, we need to talk a little about statistics.
If you ask the vast majority of people – including politicians and the “leaders” of society – to estimate the probability associated with some risk, they will tend to focus narrowly on their conception of that risk while ignoring anything that does not fit that preconception, including potentially catalytic externalities. In other words, they will focus on what they think they know, and they will significantly underestimate the risk. How to deal with that problem is beyond the scope of this piece; for now, let’s re-align our own perception of global risk:
Statisticians have tools for intelligently analyzing this kind of problem, which involves what is known as “conditional probability” (meaning that the likelihood of one thing happening depends upon whether other things have happened). Arguably the most powerful paradigm for understanding conditional probability is Bayes’ Theorem (see an introduction to these concepts here). Bayesian calculators exist online (such as those here and here), and we can use them to test the relationship between our understanding of specific threats and the emergent, aggregate, global threat. Detailed global threat analysis is far beyond the scope of this piece – that will be made available in a future Transhumanity.net piece – so for now let’s content ourselves with the takeaway message:
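As a concrete illustration of the kind of calculation such a Bayesian calculator performs, here is a short, self-contained sketch. The prior and likelihood values are invented for illustration only; the question it answers is how much a single, seemingly unremarkable warning sign should shift our estimate of the underlying systemic risk.

```python
# Illustrative Bayes' Theorem sketch with made-up numbers (not real estimates).

prior = 0.05                # assumed prior: P(civilization is on a critical trajectory)
p_sign_if_critical = 0.80   # assumed: P(observe a given warning sign | critical)
p_sign_if_not = 0.20        # assumed: P(observe that warning sign | not critical)

# Bayes' Theorem: P(critical | sign) = P(sign | critical) * P(critical) / P(sign)
p_sign = p_sign_if_critical * prior + p_sign_if_not * (1 - prior)
posterior = p_sign_if_critical * prior / p_sign

print(f"Prior estimate of a critical trajectory:     {prior:.1%}")
print(f"Posterior after observing the warning sign:  {posterior:.1%}")
```

With these assumed numbers, a single warning sign raises the estimate from 5% to roughly 17% – more than a threefold jump – which is exactly the kind of shift that unaided intuition tends not to register.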
There exist a number of growing threats to human existence. Those threats are interdependent, and the emergent, aggregate, global Threat Function they feed points toward the total annihilation of the human race if we cannot find a solution before it comes to fruition. The traditional political approach to such problems is to work on reducing each contributing risk, thereby (in theory, at least) lessening the greater emergent risk. That approach should be pursued, using every advanced technological tool that we can muster.
What must also happen, however, is a concerted push to develop something greater than ourselves – an Artificial Super-Intelligence (ASI) – which can solve the entire problem with greater reliability, effectiveness, and ethical consistency than humans have shown themselves capable of achieving. Some may think this call for ASI unrealistic or dangerous, but the truly unrealistic and dangerous thing is to imagine that humans can just “muddle through” and everything will be OK. Things will not be OK unless, by our efforts, we ensure that they are. Part 3 in this series will explore our options in, and obligations to, the future.