Is the survival and flourishing of humanity important to you?
If so, you should read on.
Humanity now faces a period of greater promise, and greater risk, than any it has ever known.
Over lunch one day in 1950, the physicist Enrico Fermi asked (as rocket pioneer Konstantin Tsiolkovsky had before him, in 1933) “Where are they?”, meaning: why don’t we have any reliable evidence for the existence of aliens? His logic, now commonly referred to as the Fermi Paradox, runs as follows: given the apparent statistical abundance of Earth-like planets in the universe, the time it took our species to evolve, and the age of the universe, we should see plentiful evidence of advanced alien life, and yet we do not. We will examine that logic in a moment, but first we must note its uncomfortable implication: that there may exist a “Great Filter” which stops most (if not all) intelligent life from developing far enough to make its presence known throughout the observable universe.
Let’s take a few steps back for a moment and think the argument through. Essentially, Fermi combined several known facts about our universe and asked where all the aliens are. Various (relatively obscure) objections to his logic exist, but the most powerful response comes from theorists such as Ray Kurzweil and Nick Bostrom, who have both drawn on the Drake Equation to suggest that perhaps it is not such a mystery after all: either we are the first species in the observable universe to develop signal-broadcast technologies, or humanity simply hasn’t yet searched enough of the observable universe for the right kind of signal.
Long story short, the Drake Equation is a formula which multiplies together several cosmological factors: the rate of star formation in our galaxy, the fraction of stars with planets, the number of planets per system that could support life, the fraction of those on which life actually appears, the fraction of life-bearing planets that go on to develop technologically advanced civilizations, and the length of time any given civilization produces signals we could detect. The point here is that every term in that equation is an estimate (some more reliable than others), and when you add the fact that there’s a lot of sky for SETI programs to scan, we are left with the inescapable sense that we could easily have missed something.
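To make the shape of the calculation concrete, here is a minimal sketch of the Drake Equation in Python. The parameter values below are purely illustrative placeholders, not settled estimates; the point is only that several uncertain factors multiply together, so small changes to any one of them swing the final answer enormously.

```python
# A minimal sketch of the Drake Equation: N = R* * fp * ne * fl * fi * fc * L.
# All input values here are illustrative assumptions, not measured data.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of detectable civilizations in our galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake_equation(
    r_star=1.0,     # average rate of star formation (stars per year)
    f_p=0.5,        # fraction of stars with planets
    n_e=2.0,        # planets per such system that could support life
    f_l=0.5,        # fraction of those on which life actually appears
    f_i=0.1,        # fraction of life-bearing planets evolving intelligence
    f_c=0.1,        # fraction of intelligent species that broadcast signals
    lifetime=1000,  # years a civilization remains detectable
)
print(f"Estimated detectable civilizations: {n:.1f}")  # -> 5.0
```

With mildly pessimistic values for the later factors, N collapses toward zero; with optimistic ones it climbs into the thousands. That sensitivity is exactly why the equation settles nothing on its own.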
All that said, the basic underlying intuition is expressed neatly in Fermi’s question: Where is everybody? There is no shortage of idealistic plans for humanity to grow beyond the planet of its birth, and we generally have a sense that any post-human, post-terrestrial civilization would be loud and proud… so why aren’t we seeing larger-than-life signs that others have done this before us? One possibility, of course, is that we are the first technological civilization in the observable universe. Ray Kurzweil’s analysis of the Drake Equation suggests this may indeed be the case, but given the size and age of the universe the likelihood seems remote, and as a matter of common sense it would behoove us to take the alternatives seriously.
We will now very briefly consider two further possibilities (beyond our being first, and beyond other civilizations simply existing, or having existed, outside our ability to detect them for mundane statistical reasons): [1] that the aliens aren’t here because something good happened to all of them, and [2] that the aliens aren’t here because something bad happened to all of them.
So what good things might have happened to prior galactic civilizations, such that all evidence of their existence is now invisible to us? There are multiple versions of this story, naturally, but the basic idea is Transcendence: that advanced alien civilizations have a (very strong) tendency to effectively disappear, perhaps into something like virtual environments, or by creating black holes (artificial gravitational singularities) which might offer computational efficiencies, and thus new vistas for exploration, beyond anything that ordinary spacetime can offer. Perhaps they find some exit route from the observable universe altogether. We can’t know, not now at least, but we can safely file this kind of conjecture under the heading of “hope for the best”.
And if we hope for the best, what follows? We must also prepare for the worst. The “worst”, in this context, is a vast category of scenarios, all of which involve advanced civilizations inevitably coming to some sticky end before they can make their presence known throughout the galaxy in any detectable way. Just name your preferred candidate for the thing most likely to kill our civilization before it reaches an advanced spacefaring phase, and it will fit the bill. Unfortunately, there is no shortage of potential threats to humanity, and those risks are expanding and converging in exactly the same way that our technological prowess is developing. As Ray Kurzweil has noted, such processes are exponential rather than linear, and so they will probably come to fruition (i.e. spectacularly good and/or bad things coming to pass) in a counter-intuitive flash, with little warning, after a long period of things apparently staying more or less as they’ve always been…
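As a back-of-the-envelope illustration of that last point (our toy example, not Kurzweil’s), compare thirty linear steps with thirty doublings:

```python
# A toy comparison of linear vs exponential growth over 30 steps,
# to show why exponential change feels like "nothing, then everything".

linear = [step for step in range(1, 31)]            # 1, 2, 3, ..., 30
exponential = [2 ** step for step in range(1, 31)]  # 2, 4, 8, ..., ~1.07 billion

print(f"After 30 linear steps: {linear[-1]}")          # 30
print(f"After 30 doublings:    {exponential[-1]:,}")   # 1,073,741,824
# Halfway through (step 15), the exponential curve sits at only 32,768;
# fully half of its final value arrives in the very last step.
```

This is why exponential processes look flat for most of their history and then appear to erupt out of nowhere: the curve’s bulk is packed into its final few steps.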
The next two articles in this series further explore the implications of the Great Filter idea. Part 2 looks at the particular risks and challenges (and their associated likelihoods) that we face over the coming decades, and part 3 makes the argument that human survival and the fulfillment of human potential rely upon the development of Artificial Super-Intelligence (ASI).