This piece is part 3 in a three-part series about the Great Filter concept, with a particular focus on the risks and challenges facing humanity in the 21st Century. Part 1 introduced the Great Filter idea, and part 2 reviewed the emerging global risks which may be candidates for humanity’s own experience of the Filter. It appears that humanity must transcend its current challenges or be destroyed by them, so part 3 will now examine the question of what “transcendence” means, and what it would require.
Never Trust A False Dichotomy
It would be a mistake to assume that there is a hard distinction between extinction and survival – even flourishing – at the level of the total human population. Positive and negative Great Filter scenarios are not mutually exclusive. In other words, there is an entire spectrum of scenarios between “everybody dies” and “everybody lives (in some way that is effectively invisible to extraterrestrials)”. It is quite plausible that, as global threats converge toward an extinction event and some proportion of human civilization is destroyed, the remainder could still survive and even thrive with the help of exponentially advancing technologies.
Thus two Great Filter scenarios would have simultaneously come to pass, with one part of humanity falling silent in a way we naturally hope to avoid, and another part falling silent through achieving vastly greater control over our physical circumstances, technologically transcending both the global crisis and the historical constraints of the observable universe we inhabit. Obviously we would vastly prefer that everybody live over any alternative, and must work to maximize the degree to which humanity survives and thrives, but the point is that positive outcomes do not come about “automagically”. We must plan extremely well, and work extremely hard to achieve them. So, what can we do to maximize our chances of survival?
What is ASI?
Artificial Intelligence (AI) technologies may be considered to exist on a spectrum of “strength”, or completeness. “Weak” or “narrow” AI is the traditional paradigm, which focuses on software dedicated to solving particular problems that would require intelligence in a human being, such as playing chess or other games, or diagnosing diseases. “Strong” AI, also known as Artificial General Intelligence (AGI), is able to intelligently handle all tasks that can be completed by humans, at approximately the same level of ability as humans. An AGI which has developed general cognitive abilities beyond human capacity in most or all tasks is known as an ASI, or Artificial Super-Intelligence.
There is a distinction to be made between intelligence (as defined by AI researchers) and conscious, phenomenological awareness, but here we will make the reasonable assumption that an AGI would have to be at least arguably conscious in order to count as one, since conscious examination of one’s own mental contents is a defining feature of human General Intelligence. It is hard to credit the idea of an entirely non-conscious, non-sentient ASI: whatever incredible powers it might have, it would lack any capacity for the one thing that defines a human mind as such – subjective experience. A powerful machine learning system may well be able to solve many problems, and may even be considered superhuman in some “alien AI” sense, but without subjective awareness it would be subhuman in at least one very important sense, and AGI is invariably defined in terms of human-equivalence.
Why is ASI important?
We have already established the importance of advanced technology to our collective survival as global crises deepen. With the latest tools at our disposal we would not only have a chance of finding solutions to potentially deadly problems, but we would also have a chance of defending our friends and future from anyone else acting in a dangerously selfish or misguided manner. ASI is by definition the ultimate technological tool, commonly referred to as humanity’s “final invention”, because it would have the power to recursively design and create new, increasingly advanced generations of itself, rapidly becoming complex beyond human comprehension.
It is hard for humans to grasp what unbridled intelligence might be capable of… to understand the ways in which it might be able to radically remake our world. What we are talking about here is a mind – or an ecosystem of minds – that dwarfs humanity’s collective intelligence, and which sees easy solutions where we can only see life-threatening, intractably complex problems. The possibilities truly are breathtaking, if you take just a little time to think them through. The opening chapter of Max Tegmark’s book “Life 3.0” does a good job of illustrating how ASI could more or less sidestep mundane, human political-economic obstacles to a better future (no matter how insurmountable they seem to humans), and that merely as its opening overture… from there, the symphony proper could rapidly make every human dream come true, and every human concern a matter for the history books.
Fundamentally, however, the issue is much simpler than any idealistic aspiration: We want to live. The logic of the Great Filter and the growing threats faced by humanity together make it clear that a change is coming, and unless we can harness the power of the greatest technologies available, then the chances of our survival will be lower than they could otherwise have been. Furthermore, if those technologies fall into the hands of others who have no concern for our safety or wellbeing, then our chances of survival rapidly drop to zero.
As the world accelerates, toward both good and ill, with an uncertain outcome, we must embrace the power of ASI to survive and thrive. Where that is an option, half-measures and vacillation only heighten the risk. Under those circumstances, ASI is necessary. ASI is survival. If we wish to survive into the future, then we must embrace it wholeheartedly.
To summarize this three-part series:
[1] The Great Filter logic tells us that unless humanity is the very first technological civilization to appear in the observable universe – or we’re simply not good enough at looking for obvious signs of life – then something happens to all advanced civilizations which makes them apparently disappear.
[2] Looking at humanity’s own situation in the 21st Century, we can clearly see both good and bad things which could make our civilization apparently disappear overnight. On the bad side, global threats are converging at an accelerating pace, coalescing into a single Threat Function, representing the extinction of humanity and perhaps all life on the planet. On the good side, exponentially advancing technology could save humanity and heal the planet, transcending the observable universe in the process.
[3] ASI is survival. If anyone survives and thrives in the post-convergent-threat environment, it will be those who adopt and control – who have merged with – the most advanced technological tools. Opposition to, or half-hearted acceptance of, this agenda will only reduce your chances of survival. The part of humanity that survives and thrives will be that which acknowledges that there is no standing still; we must move forward resolutely, or fall back into the abyss. That is the essence of the Great Filter.
June 7, 2018 at 12:15 pm
So true, Amon. But I still feel strongly that the ASI has to have us as an integral part… A bit like the layering of the human brain. Build it around a human cortex. Otherwise its loyalty to organic life and our genetic imperatives cannot be guaranteed. The destruction of organic life cannot be an option.
June 7, 2018 at 9:14 pm
I couldn’t agree more, Robert. I did allude to that in the article (with a mention of “merging with” ASI, linked to the Wikipedia Transhumanism page), but it was a subtle thing – probably too subtle – as I saw that as a side issue for the purposes of this article. As you say though, it is utterly critical. If we get left behind, then there really is no point.