Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. – Laurence J. Peter
Numerous stories were in the news last week about the proposed Centre for the Study of Existential Risk (CSER), set to open at Cambridge University in 2013:
- Will artificial intelligence be the end of us?
- Risk of robot uprising wiping out human race to be studied
- Let’s make sure he WON’T be back! Cambridge to open ‘Terminator centre’ to study threat to humans from artificial intelligence
- Hasta la vista, humanity… Will robots wipe out mankind like Terminators?
After decades of movies about computers and robots going awry, who wouldn’t celebrate such a centre as a good thing? As a researcher in artificial general intelligence (AGI) and ethics who agrees that artificial intelligences (AIs) *are* an existential risk, let me raise my hand.
A fundamental problem is that there are two diametrically opposed camps regarding artificial intelligence (AI) and safety. The first camp firmly believes that there is a path by which a zero-risk AI can be created and that risk arises only when we fail to stick to this zero-risk path.
Further, this camp is vehemently against any development of AI until the zero-risk path can be researched, fully developed, and then rigidly followed. The inconvenient fact that failing to develop AI in a timely fashion to help mitigate other existential risks is itself likely to substantially increase existential risk is, conveniently, generally not discussed by this camp.
The second camp believes equally firmly that the so-called zero-risk path proposed by the first camp is actually catastrophically self-contradictory (not to mention socially unacceptable and likely to be sabotaged). They therefore believe that following it carries a much higher existential risk than virtually all of the alternatives. This camp also regularly points out that *not* creating artificial intelligence in a timely fashion likely poses a higher total existential risk than creating it via a planned path with some risk.
If there were a calm, rational, scientific dialogue between the two, having both camps would actually be a very beneficial state of affairs. Unfortunately, that is not the case. The Singularity Institute (formerly the Singularity Institute for Artificial Intelligence, or SIAI) ignores, belittles or summarily dismisses any efforts that don’t insist upon 0% risk. Worse, it deceitfully propagandizes the public by acting as if the subject is not currently being researched or by condemning the researchers for “excluding the public” since the public would *obviously* insist upon 0% risk. The Cambridge co-founders continue this dishonest trend.
* “It tends to be regarded as a flakey concern, but given that we don’t know how serious the risks are, that we don’t know the time scale, dismissing the concerns is dangerous. What we’re trying to do is to push it forward in the respectable scientific community.” – Huw Price
* “My core main message is actually that this thing is not science fiction, this thing is not apocalyptic religion – this thing is something that needs serious consideration.” – Jaan Tallinn
* “We need to engage with the wider public, since it is they who decide which applications should be pursued and which doors should be left closed. Science should be part of general intellectual life and political discourse and should not be ghettoized. Scientists must get out of the box and take part in more general discussions.” – Martin Rees
I argue that statements like these, particularly from purported experts, should immediately raise “reality check” questions like:
- After a lifetime of movies, who hasn’t figured out that AI *might* be a risk?
- Why do the speakers feel the need to convince people of a “no-brainer”?
- Who in their right mind wouldn’t want to prevent an avoidable existential risk?
- Do those scientists have some other agenda? Or do the speakers have one?
In my case, the SIAI’s concerns caught my attention in 2004 and prompted me to join the effort to develop a solution. Once I dove into the problem, it became quite clear *to me* that their most important, foundational assumptions are fatally contradictory. Obviously, I could be mistaken – but science has methods for sorting this type of thing out. Except that it is equally clear that the Singularity Institute, the Centre for the Study of Existential Risk and others are far more interested in appealing to the court of public opinion than in scientific debate, and are willing to play serious hardball when doing so.
Pretending that scientists haven’t studied the existential risk of AI is akin to claiming that scientists haven’t studied (and refuted) Intelligent Design. Reducing the existential risk of artificial intelligence is NOT “regarded as a flakey concern” that, inexplicably, no one is researching. Yet, existential risk extremists insist on claiming that current AI researchers are being dangerously irresponsible (with proclamations like SIAI founder Eliezer Yudkowsky’s statement “And if Novamente should ever cross the finish line, we all die. That is what I believe or I would be working for Ben this instant.”). Indeed, this has been done to such an extent that Ben Goertzel reports:
Actually, I’ve had two separate (explicitly) SIAI-inspired people tell me in the past that “If you seem to be getting too far with your AGI work, someone may have to kill you to avert existential risk.” Details were then explained, regarding how this could be arranged. … It does seem plausible to me that, if Scary Idea type rhetoric were amplified further and became more public, it could actually lead to violence against AGI researchers — similar to what we’ve seen in abortion clinics, or against researchers doing experimentation on animals, etc.
I’ve been careful in this article not to get into the details of the zero-risk path (although I promise to do so in subsequent articles — despite having been threatened as well). My point here is simply that the *story* that the Singularity Institute and the CSER co-founders are telling clearly does not make any sense. Scientists aren’t ignoring/avoiding an obvious solution. So why make the claim that they are?
Critically important as well is the unasked question of whether eliminating existential risk is an overriding goal or whether there are other goals that are equally important. If eliminating existential risk absolutely required the destruction of an alien civilization but allowing them to live introduced only a 0.1% existential risk, would you insist upon destroying them? What if the risk were increased or decreased by several orders of magnitude? Clearly, different individuals would have differing answers, and determining the level of acceptable risk should be a social policy issue – yet the extremists insist that the only acceptable level is zero.
Thus, there is not just the scientific problem of reducing existential risk but also the social policy question of what we are *willing* to do to reduce existential risks. The arguably “scientifically correct” path of destroying an alien civilization is also arguably morally reprehensible. Yet, one can also easily imagine fearful individuals demanding their destruction nonetheless. One of my favorite quotes is Phil Goetz eloquently expressing his view (which obviously mirrors my own):
The fact that you consider only *human* life to have value – that you would rather condemn the entire universe to being tiled with humans and then stagnating for all eternity, than take any risk of human extinction – that’s the Really Scary Idea.
This is where the notion of “wicked problems” comes in (“wicked” not in the sense of evil but rather in terms of resistance to resolution). Rittel and Webber coined the term in 1973 in the context of problems of social policy, saying:
The search for scientific bases for confronting problems of social policy is bound to fail because of the nature of these problems… Policy problems cannot be definitively described. Moreover, in a pluralistic society there is nothing like the indisputable public good; there is no objective definition of equity; policies that respond to social problems cannot be meaningfully correct or false; and it makes no sense to talk about ‘optimal solutions’ to these problems… Even worse, there are no solutions in the sense of definitive answers.
Clearly, existential risk is *NOT* just a difficult scientific problem which might have a solution (or set of solutions) that would allow us to reduce it to zero. It is *NOT* scientists with agendas that are getting in the way of the zero-risk path. There are *SERIOUS* questions as to what we should be willing to do solely to reduce existential risk. Whether or not we should “choose the zero-risk path” is not easily answered, both because it is clearly debatable whether such a path exists AND because we may not be willing to accept the consequences of that path. This is the epitome of the questions that Laurence J. Peter was talking about.
On the other hand, insisting that choosing the zero-risk path is the only rational choice does appear to be a solid strategy by which fearful reductionists can attack scientists in a dangerously foolish attempt either to stop all risk from accelerating technology or to bring it under their personal control. For example, the Lifeboat Foundation was founded by Eric Klien, who stated “I believe that world after world throughout the universe has been destroyed by science out of control and therefore there are no advanced aliens out there” and argued “What exactly will be the cause of our demise? It will be the Religion of Science.” He proclaimed “I have developed Lifeboat Foundation with a Trojan Horse meme that tries to wrap our goals in the Religion of Science memes” and explained that “By wrapping our meme with a Religion of Science coating, I hope to develop enough resources that we can make sure that unlike every civilization so far, we can have at least SOME people survive this dangerous religion.” It is worth noting that Cambridge co-founder Jaan Tallinn is on the advisory board (although he, like many others, could have been duped by Klien).
In contrast, Eliezer Yudkowsky, the founder of the Singularity Institute, is a fervent advocate of science and accelerating technological change – but only as long as it adheres to what he espouses as “rationality”. He is the architect of the initial version of the zero-risk plan, yet in seven years neither he nor anyone else has updated its under-specified and problematic parts; instead, he has focused on proselytizing his version of “rationality”. His views about science and scientists (as opposed to his beliefs) are best conveyed by his own words, including the inflammatory conclusion of his “Science Doesn’t Trust Your Rationality”:
Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science… right? So, are you going to believe in faster-than-light quantum “collapse” fairies after all? Or do you think you’re smarter than that?
It is also worth reading his fiction to discover what he himself predicts the results of his “rational” policies could be.
Constantly insisting that choosing the zero-risk path is a “no-brainer” is exactly akin to the tactics of dishonest politicians, intelligent design proponents and others who use inflammatory and misleading “sound bites” to deliberately misinform an unsuspecting public into supporting their own obfuscated agendas. Existential risk, and what we are willing to do to reduce it, clearly is a “wicked problem” that certainly shouldn’t be stealthily hijacked or resolved by threats. The fact that we have yet another organization forming whose founders are already obviously willing to play such games is distressing and depressing rather than a cause for celebration. This is worse than the intelligent design proponents’ “Teach the controversy” since it disingenuously insists that there are few dissenters and no rational dispute to its claims despite clear and easily obtained evidence to the contrary. Please join me in resisting this assault and bringing the true choices to light.
(originally published December 12, 2012)
* hero image used from http://friendbombforfreedom.com/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-ai-seriously-enough/