Posted: Tue, December 25, 2012 | By: Extropia DaSilva
How seriously should we take the Singularity hypothesis?
Going by some of the essays and comments out there on the Web, it would seem that some do not think it should be taken seriously at all. Anyone who has looked into this ‘Singularity’ stuff is unlikely to have missed derogatory comments such as ‘rapture of the nerds’ or ‘it is all just science fiction’.
But, is it really sensible to be dismissive of the Singularity hypothesis? Is it sheer nonsense, or is it actually possible that we could be heading for a technological Singularity of some kind or other?
What it Means
Well, what do we mean by the term? A technological singularity is defined as ‘the creation, by technology, of greater-than-human intelligence’. Technology works in close collaboration with science: the latter creates increasingly fine-tuned explanations of natural phenomena, which are then exploited, via appropriate combinations of matter and energy, to harness those phenomena and do useful work for individuals, groups, societies and civilization. Among other things, technologies include instruments for yet finer observations of natural phenomena, leading to yet more powerful technology.
A technological Singularity is based on the premise that general intelligence is an example of a natural phenomenon that can be studied and understood sufficiently well for technologies to be built that amplify it beyond the levels reached by natural selection. To say it is impossible can mean one of two things. One is that the human brain is optimal: no artificial brain can ever improve upon it, or if it can be improved, the advantage is not noticeable enough to qualify. The other is that, yes, forms of general intelligence above and beyond human levels do exist in principle, but we shall never achieve the level of science and technology required to harness this natural phenomenon and perform useful work with it.
It is worth remembering that the technological singularity need not be a near-term event. Although it is often talked about as something we should expect within decades, it could happen in a million years’ time, or a billion. In fact, it could happen at any time from now until the universe can no longer perform information processing (about 10^117 years from now). What matters is not that it happens within a certain timeframe, but rather that when it does happen there is a large gap between the mental capabilities of those who are able to integrate the technologies into their lives, and those who are ‘outside’ of such systems.
It might well be the case that we will not have created a singularity within a few decades, but is it really plausible that greater-than-human intelligence will remain forever a fantasy? It seems likely that computers will exceed the computational and memory capacity of the human brain, and projects like Blue Brain and Ted Berger’s hippocampus chip are providing proofs of concept that brainlike computers and software can be built (although when a fully brainlike computer will be completed is not something I would like to estimate). Taken together, these suggest that ‘the singularity is impossible’ is an extremely unlikely claim.
Disproving the Singularity
Karl Popper said that a theory is scientific if it can be proved wrong, and so we may ask: if the technological singularity is not happening, how would we know? I argue that online worlds like Second Life (SL) can serve as an indicator of whether we are on track or the underlying technologies are faltering.
The reason online worlds can serve this purpose is that many of the enabling technologies of the singularity also push the size and sophistication of online worlds. For example, Vernor Vinge identified improvements in communication as something that could lead to superhuman intelligence, saying “every time our ability to access information and communicate it to others is improved, we have in some sense achieved an increase over natural intelligence”. Arguably, online worlds are first and foremost platforms for communication. If, as we head toward the future, online worlds enable more people to be online simultaneously, and to exchange knowledge more efficiently (or, better yet, in ways that were not possible in the past), that could be taken as a sign that progress is heading in the right direction.
What Should We Be Watching?
If communication is fundamental to online worlds, what is fundamental to the Singularity? It is important to know, because we do not want to be distracted tracking trends of little or no relevance. Some people think the Singularity is all about mind uploading. In his book, ‘You Are Not A Gadget’, Jaron Lanier wrote, “the singularity… would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new superconsciousness takes over the Earth. The Rapture and the Singularity share one thing in common: they can never be verified by the living”.
In fact a technological Singularity does not necessitate being uploaded, nor does it require the annihilation of anybody. Admittedly, Singularity enthusiasts are often also uploading enthusiasts and certainly the technologies that would enable one’s mind to be copied into an artificial brain/body would be very useful in enabling a singularity. However, even if we never develop uploading technologies, that in itself would not rule out the possibility of the Singularity happening. Simply put, ‘Singularity equals mind uploading’ is an incorrect definition.
Nor should the Singularity be seen as synonymous with humanlike AI, or any kind of doomsday scenario in which machines take over and drive the human race toward extinction. Again, if we ever find online worlds are being populated with autonomous avatars that anthropologists, psychologists and other experts in human behaviour agree are indistinguishable from avatars controlled by humans, we would very likely have technologies and knowledge that would be useful in bringing a singularity about, but a complete lack of artificial intelligences that can ace the Turing test or any other test of humanlike capability would not rule out the Singularity.
This is not to say that artificial intelligence will play no role in the Singularity, only that it may well be unrecognisable as such, because it is not at all humanlike. And, as I argued in my article ‘the fourth transition’, some pathways to Singularity involve a deepening of the co-evolution of humanity and its technology. Not in terms of humans becoming machines (like cyborgs), but in the sense of a greater-than-human intelligent system: millions of humans working together via massive-scale online collaboration; ‘Data-Intensive Science’, in which non-humanlike artificial intelligences mine vast (petascale and beyond) datasets for patterns undetectable to human minds (while human pattern-recognition extracts knowledge machines struggle to detect); and ‘knowledge-management’ software that helps organise our rising tide of data, enabling it to be efficiently searched and shared across specialised fields of expertise. Considered apart, the networks, computers and databases may not be intelligent at all. But superhuman intelligence might exist as an epiphenomenon, a characteristic of the system as a whole.
The Software Complexity Problem
Ok, well, what is essential? What can we point to and say, ‘progress in this area is faltering, therefore we can say the Singularity will not happen’? One such thing would be software complexity. Progress in pushing the envelope of computing power can continue only so long as developers can design more sophisticated software tools. The last time a computer was designed entirely by hand was in the 1970s. Since then we have seen orders-of-magnitude increases in the complexity of our computers, and this could only have been achieved by automating more and more aspects of the design process. By the late 1990s, a few cellphone chips were designed almost entirely by machines. The role of humans consisted of setting up the design space; the system itself discovered the most elegant solution using techniques inspired by evolution.
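The evolution-inspired design process described above can be sketched in miniature. The code below is a toy illustration, not a real chip-design tool: the ‘designs’ are bit-strings, the fitness function is a stand-in for circuit quality, and every parameter is an assumption made for the sketch:

```python
# Toy evolutionary search: candidate "designs" are bit-strings, scored by a
# fitness function; the fittest are mutated to seed the next generation.
import random

def fitness(design):
    return sum(design)  # stand-in for "how good is this circuit?"

def mutate(design, rng, rate=0.02):
    # Flip each bit with a small probability.
    return [b ^ 1 if rng.random() < rate else b for b in design]

def evolve(bits=32, population=50, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: population // 5]  # keep the fittest fifth as parents
        pop = [mutate(rng.choice(parents), rng) for _ in range(population)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # converges toward the all-ones "design"
```

Real evolutionary chip-design systems optimise far richer representations (netlists, layouts) against simulated electrical behaviour, but the loop of score, select and vary is the same.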
While it is true that we rely on a lot of automation in order to design and manufacture modern integrated circuits, we cannot yet say that humans can be taken completely out of the loop. Computers do not quite design and build themselves. One day, perhaps, one generation of computing technology will design and construct the next generation without any humans involved in the process. But not yet. For now, human ingenuity remains an indispensable aspect.
In an interview with Natasha Vita-More, Vernor Vinge identified a failure to solve the software complexity problem as being the most likely non-catastrophic scenario preventing the Singularity. “That is, we never figure out how to automate the solution of large, distributed problems”. If this were to happen, eventually progress in improving hardware would level off, because software engineers would no longer be able to deliver the necessary tools needed to develop the next generation of computers. With increases in the power of computers leveling off, progress in fields that rely on ever-more powerful computing and IT would continue only for as long as it takes to ‘max out’ the capabilities of the current generation. No further progress would be possible, because we could not progress to new generations of even more powerful IT.
Obviously, online worlds and the closely-related field of videogames rely on ever-more sophisticated software tools and on more powerful computers in order to deliver better graphics, physics simulations, AI and so on. If we compare the capabilities of online worlds and videogames that exist ‘today’ and find it increasingly difficult to point out improvements over previous years’ offerings, that could well be a sign that the software complexity problem is proving insoluble.
We should, however, be aware that some improvements are impossible to see because we have already surpassed the limits of human perception. Graphics is an obvious example. Perhaps one day realtime graphics will reach a fidelity that makes them completely indistinguishable from real life. It might be possible to produce even more capable graphics cards, but the human eye would not be able to discern further improvements.
It is a fact that every individual technology can only be improved so far, and that we are closer to reaching the ultimate limits of some technologies than others. Isolated incidents of a technology leveling off might not be symptomatic of the software complexity problem, but if we notice a slowdown in progress across a broad range of technologies that rely on increasingly powerful computers, that would be compelling evidence.
The Default Position of Doubt
Look at Ray Kurzweil’s charts tracking progress in ‘calculations per second per $1,000’ or ‘average transistor price’. Look how smooth progress has been so far. One could be forgiven for thinking the computer industry has so far improved its products with little difficulty.
This is not true, of course. R+D has always faced barriers to further progress. For instance, in the 1970s we were rapidly approaching the limits of the wavelength of light used to sculpt the chips, it was becoming increasingly difficult to deal with the heat the chips generated, and a host of other problems were seen as cause for pessimism by many experienced engineers. Well, with the benefit of hindsight we know Moore’s Law did not come to an end. Rather, human ingenuity found solutions to all these problems.
According to Hans Moravec, doubt over further progress is the norm within R+D:
“The engineers directly involved in making integrated circuits tend to be pessimistic about further progress, because they can see all the problems with the current approaches, but are too busy to pay much attention to the far-out alternatives in research labs. As long as conventional approaches continue to be improved, the radical alternatives don’t stand a competitive chance. But, as soon as progress in conventional techniques falters, radical alternatives jump ahead and start a new cycle of refinement”.
Similarly, residents of SL tend to be pessimistic about their online world. It never seems to be good enough. Of course, in its current state SL does have many faults that prevent it from being fast, easy and fun, and if we do not have the skills to remedy these deficiencies, we might as well declare the Singularity impossible right now. But I would suggest that online worlds are doomed to remain ‘not quite good enough’, because what people can imagine doing will always be more ambitious than what the technologies of the day can deliver. That, after all, is why we continually strive to produce better technology. Yes, there may come a time when online worlds are advanced enough to allow anyone to easily do activities that are typical today, but all the knowledge and technology that gets us to this point will broaden our horizons, and people will be complaining about not being able to easily perform feats we could not even imagine doing today.
What the Singularity Can Never Accomplish
At any point in time, the path to further progress seems blocked by no end of problems. It is probably true, therefore, that at any given time there have been skeptical voices expressing doubt that current technologies could be substantially improved upon. For some, the Technological Singularity has come to take on an almost mythical status: some deus ex machina that will arrive and solve all our problems for us. If, by ‘problems’, we mean only material concerns, perhaps a combination of advanced AI and nanosystems could elevate all people to a high standard of living- so high, perhaps, that their society would seem like paradise on Earth compared to how many live today (just as many today live lives of utter luxury compared to people of days gone by). However, it is a fact that any technology creates problems as well as solves them. That is another reason why we continually strive to invent new things: to solve the problems caused by previous generations of inventions!
If you expect the Singularity to rid us of all problems, it will never manifest itself because such a utopian outcome is beyond the capability of any technologically-based system. But if we cannot use the eradication of all problems as the measure by which we judge the Singularity’s presence, what can we use?
And What It Can
One way to answer this is to consider again what an inability to solve the software complexity problem would mean. Vinge saw this developing into “a state in communication and cultural self-knowledge where artists can see almost anything any human has done before- and can’t do any better”. With invention itself unable to grow beyond the boundaries the software complexity problem imposes, we would (in Vinge’s words) “be left to refine and cross-pollinate what has already been refined and cross-pollinated…art will segue into a kind of noise [which is] a symptom of complete knowledge- where knowers are forever trapped within the human compass”. Novelty, true novelty, would be a thing of the past. Whatever came along in the future, we would have seen it all before and would be hard-pressed to discern any improvement.
But, if the Singularity does happen- if technological evolution can advance to a point where we create and/or become posthuman- there would be a veritable Cambrian explosion of creativity and novelty. We tend to be rather human-centric when thinking about intelligence, imagining the spectrum runs from ‘village idiot’ to ‘Leonardo da Vinci’. But Darwin’s theory tells us that other species are our evolutionary cousins, connected to us by smooth gradients of extinct common ancestors. These facts tell us that the spectrum of intelligence must stretch beyond the ‘village idiot’ point towards progressively less intelligent minds, all the way down to a point where the term ‘intelligence’ (or even ‘mind’ or ‘brain’) is not applicable at all. Think bacteria or viruses.
And what about the other direction? The Singularity is based on two assumptions: that intelligences fundamentally more capable than human intelligence (however you care to define it) are theoretically possible, and that with the appropriate application of science and technology we (and/or our technologies) will have minds as far above our own as ours are above the rest of the animal kingdom.
Not everyone accepts these assumptions. Some argue that our minds are too mysterious and complex for us to fully understand, and how can we fundamentally improve something we don’t fully understand? Others see ‘spiritual machines’ as requiring an impoverished view of ‘spirituality’ rather than a transcendent view of ‘machines’. But, let us assume that people like Hugo de Garis are correct and that miniaturization and self-assembly will progress to a point where we can store and process information on individual molecules (or even atoms) and build complex three-dimensional patterns out of molecules. When you consider how a single drop of water contains more molecules than all the transistors in all the computer chips ever manufactured, you get some idea of how staggeringly powerful even a sugar-cube-sized block of 3D molecular circuitry would be- even before that individual nanocomputer starts communicating with the gazillions of others throughout the environments of the world.
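The drop-of-water comparison is easy to sanity-check with back-of-envelope chemistry; the drop size below is an assumed typical value, not a measurement:

```python
# Back-of-envelope count of water molecules in a single drop.
AVOGADRO = 6.022e23          # molecules per mole
drop_grams = 0.05            # assumed drop: roughly 0.05 mL (0.05 g) of water
molar_mass_water_g = 18.0    # grams per mole of H2O
molecules = drop_grams / molar_mass_water_g * AVOGADRO
print(f"~{molecules:.1e} molecules")  # on the order of 10^21
```

Estimates of all transistors ever fabricated by the early 2010s are commonly put in the 10^20 to 10^21 range, so the comparison in the text is at least the right order of magnitude.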
Let’s also assume that our efforts to understand what the brain is and how it functions eventually result in a fully reverse-engineered blueprint of the salient details of its operation. We combine the two: gazillions of nanocomputers, each sugar-cube-sized object capable of processing the equivalent of (at least) one hundred million human brains, running whole brain emulations at electronic speeds (millions of times faster than the speed at which biological neurons communicate). Of course, each has the capacity to run millions of such emulations, perhaps networked together and able to take advantage of a computer’s superiority in data sharing, data mining and data retrieval. Millions of human-level agents networked together to make one posthuman mind. And there are gazillions of these minds making up whatever the world wide web has become.
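The ‘millions of times faster’ figure can itself be sanity-checked with a rough order-of-magnitude comparison. Both rates below are illustrative assumptions, not measurements:

```python
# Rough comparison of electronic vs. biological signalling rates.
neuron_spike_rate_hz = 200.0    # assumed sustained biological firing rate (hundreds of Hz)
transistor_switch_hz = 2e9      # assumed modest 2 GHz electronic clock
speedup = transistor_switch_hz / neuron_spike_rate_hz
print(f"electronics run ~{speedup:.0e}x faster")  # ten-million-fold under these assumptions
```

Even granting biology generous numbers, the gap comes out in the millions, which is the scale the paragraph relies on.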
Then what? Vinge said that, “before the invention of writing, almost every insight was happening for the first time (at least to the knowledge of the small groups of humans involved)”. Isolated tribes and nations may well have come up with inventions they considered to be completely new, unaware that other nations got there first. As information and communication technologies were invented and improved, so did our ability to recall the past, until (in Vinge’s words) “in our era, almost everything we do in the arts is done with an awareness of what has been done before”.
The Technological Singularity will usher in a posthuman era that (from our perspective) will have a profound sense of novelty about it. If we are on-track for a Singularity, it should become increasingly common to encounter things we never imagined would be possible. We will commonly encounter artworks that we struggle to fit into existing categories- truly extraordinary renderings of a fundamentally superior mind.
Ushering in the posthuman world has always been what Second Life was conceived for, at least in the mind of its founder, Philip Rosedale. It never was about recreating the real-world experience of shopping malls and fashionably-slim young ladies. It was a dream that a combination of computing, communication, software tools and human imagination could be coordinated to achieve a transcendent state that is greater than the sum of its parts. “There are those who will say…God has to breathe life into SL for it to be magical and real”, Rosedale is on record as saying in Wagner James Au’s book ‘The Making Of Second Life’. “My answer to that is…it’ll breathe by itself, if it’s big enough…simply the fact that if the system is big enough and has enough complexity, it will emerge with all these properties”.
Reading or listening to critics trying to debunk the Singularity hypothesis or dismiss it as absurd fantasy, I often find that such people are not discussing the technological singularity itself, but rather the hopes and dreams and fears that some people have built up around it. Some hope to become immortal, some anticipate the end of money, some fear the robot overlords will exterminate us all. But none of these possibilities should be mistaken for the Singularity itself; they are merely possible consequences. Think of what we can now achieve with the scitech we have, and consider that these are the building blocks for future generations of technologies and scientific knowledge that could vastly amplify our abilities and enable us to do things we can scarcely imagine today.
When you do that, this Technological Singularity stuff sounds a lot more plausible.