Accelerating and converging trends of various types allow us to know one thing about the 21st Century with complete certainty: it will not be business as usual. One way or another, life as you know it now will not continue. For things to stay more or less as they are would, frankly, be nothing short of bizarre given the array of world-changing trends currently building up a head of steam.

The real question is whether the approaching torrent of change will be good or bad, in what ways, and whether there is anything we can do to improve the odds of a positive outcome. To even begin to address these questions, we must first acquaint ourselves with two concepts: Technological Singularity and Convergent Risk.

The Singularity

Readers of Transhumanity.net are almost certainly familiar with the concept of Technological Singularity: the idea that developments in Artificial Intelligence (AI) will lead to a runaway “intelligence explosion”, whereby intelligent machines design their own successors at an exponentially accelerating pace. The upshot, most elegantly articulated by Ray Kurzweil, is that almost everything humans understand about their own lives and societies will change within a remarkably short period of time, and soon.

Kurzweil expects the Singularity to occur around the mid-2040s, and for it to be a positive thing, culminating in the birth of a post-biological, hybrid AI-human civilization. Kurzweil’s critics often point to what they perceive as problems with this prediction, but they are often unaware of the existing counter-arguments. For example, critics point to the slowing of Moore’s Law in recent years, but Kurzweil and others predicted that, pointing out that paradigms of exponential change are themselves periodically replaced, as part of a larger “Law of Accelerating Returns”. In short, Moore’s Law was just one stepping stone on a longer path that we are still travelling.
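
By way of illustration, here is a minimal sketch in Python of that stacked-S-curve idea: each paradigm eventually saturates, but its successor picks up roughly where it left off, so the overall envelope keeps growing exponentially. The paradigm count, timings, and growth rates below are illustrative assumptions, not Kurzweil’s actual figures.

```python
# Sketch of the "Law of Accelerating Returns" idea described above:
# each paradigm (vacuum tubes, transistors, integrated circuits, ...)
# follows an S-curve that eventually saturates, but successive paradigms
# start near the level where the last one plateaued, so the overall
# envelope keeps growing roughly exponentially. All parameters here are
# illustrative assumptions, not measured data.

import math

def paradigm_scurve(t, start, ceiling, steepness=0.5):
    """Logistic S-curve: negligible before `start`, saturating at `ceiling`."""
    return ceiling / (1.0 + math.exp(-steepness * (t - start - 10)))

def overall_capability(t, n_paradigms=4, span=20):
    """Each paradigm's ceiling is 10x the previous one's; spans overlap."""
    return sum(
        paradigm_scurve(t, start=i * span, ceiling=10.0 ** (i + 1))
        for i in range(n_paradigms)
    )

# The printed envelope grows ~10x every `span` steps despite each
# individual paradigm flattening out.
for year in range(0, 80, 10):
    print(f"t={year:2d}  capability ~ {overall_capability(year):10.1f}")
```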

It has also been asked whether the Singularity would necessarily be a good thing; indeed, AI researchers themselves frequently describe the possibility as a potential Existential Risk to humanity. I believe that such a risk most definitely exists, but whether Technological Singularity would be a good or bad thing is still an open question. The answer depends upon what type of Singularity occurs, how well prepared we are for it, and so on. In short, it depends to some extent on our actions. The key point is that Technological Singularity also has incredible potential to be not merely a good thing, but a world-saving paradigm shift, allowing our civilization to solve previously intractable, deadly, global problems.

So now let’s put the Singularity and its extraordinary potential for good to one side, and look at the other side of this equation: Convergent Risk.

Convergent Risk

Whereas Technological Singularity has the potential to be a very good thing indeed, a number of other accelerating global trends point only toward increasingly bad outcomes. Putting aside rare and hard-to-quantify “Black Swan” events (e.g. Earth being hit by an asteroid), the obvious major categories of trend are as follows:

  • Economic crisis & collapse
  • Civil unrest & war
  • International conflict, including thermonuclear exchange
  • Resource depletion
  • Environmental damage & climate change

Even the most cursory glance at this list reveals a simple but disturbing reality: These threats are not independent. They are not even merely interrelated. They are convergent.

In other words, an event in any given category increases the chance of further events occurring in the same and other categories, and bigger ‘cascading’ crises become increasingly likely over time (unless something is done to fix the underlying problems).

Economic crisis contributes to civil unrest; civil unrest deepens social problems and creates the risk of war. Conventional wars increase the risk of thermonuclear exchange. Resource depletion and environmental problems cause social and economic damage while adding to global tensions. Wars and other crises both contribute directly to environmental damage and get in the way of solving existing environmental problems. And so on.

I believe that one could create a model of such Convergent Risk, perhaps as a network of conditional probabilities updated along the lines of Bayes’ Theorem. As we learned more about the details of any given risk, we could update and deepen the model. The key point to understand, however, is that humanity is headed fast toward some kind of deep paradigm shift, and if we cannot ensure it is a good one, then it will be very bad indeed. Taking our hands off the steering wheel and hoping for the best is not a solution.
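
To make the idea concrete, here is a toy sketch in Python of how such a model might begin. The multiplicative conditional updates stand in loosely for a full Bayesian treatment, and every category name, base probability, and coupling weight below is an illustrative assumption, not an empirical estimate.

```python
# Toy sketch of a Convergent Risk model. Each crisis that occurs raises
# the conditional probability of crises in coupled categories on the
# next step -- the "cascade" effect described above. All numbers are
# illustrative assumptions.

import random

CATEGORIES = ["economic", "unrest", "conflict", "resources", "environment"]

# P(event in category) per time step, before any coupling. (Assumed values.)
BASE_RISK = {"economic": 0.05, "unrest": 0.04, "conflict": 0.02,
             "resources": 0.03, "environment": 0.04}

# COUPLING[a][b]: how much an event in `a` multiplies next-step risk in `b`.
COUPLING = {
    "economic":    {"unrest": 2.0, "conflict": 1.3},
    "unrest":      {"economic": 1.5, "conflict": 1.8},
    "conflict":    {"economic": 1.8, "environment": 1.5},
    "resources":   {"economic": 1.5, "conflict": 1.4},
    "environment": {"resources": 1.6, "unrest": 1.3},
}

def simulate(steps=50, seed=1):
    rng = random.Random(seed)
    risk = dict(BASE_RISK)
    history = []
    for _ in range(steps):
        events = [c for c in CATEGORIES if rng.random() < risk[c]]
        history.append(events)
        # Condition next-step risks on this step's events, capping
        # probabilities at 0.99.
        risk = dict(BASE_RISK)
        for e in events:
            for target, factor in COUPLING[e].items():
                risk[target] = min(0.99, risk[target] * factor)
    return history

for t, events in enumerate(simulate()):
    if events:
        print(f"step {t:2d}: {', '.join(events)}")
```

Even a crude model like this exhibits the qualitative behaviour described above: once one category fires, clusters of related crises become markedly more likely, and refining the probabilities and couplings as real data arrives would be the natural next step.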

Singularity, Convergent Risk, or both?

If History teaches us anything, it is that things are never quite as simple as we humans like to imagine. Situations are never wholly good or bad, even from a single point of view. We should accordingly expect to see both disastrous events unfolding and miraculous new scientific and technological breakthroughs offering us new ways to negotiate the path forward. Our responsibility is to watch developments closely and do our best to leverage them toward positive outcomes. Human survival depends upon it.