
The “Wicked Problem” of Existential Risk with AI (Artificial Intelligence)

Posted: Sat, December 08, 2012 | By: Mark Waser



Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. – Laurence J. Peter

Numerous stories were in the news last week about the proposed Centre for the Study of Existential Risk (CSER), set to open at Cambridge University in 2013. 

After decades of movies about computers and robots going awry, who wouldn’t celebrate this as a good thing?  As a researcher in artificial general intelligence (AGI) and ethics who agrees that artificial intelligences (AIs) *are* an existential risk, let me raise my hand.

A fundamental problem exists in that there are two diametrically opposed camps regarding artificial intelligence (AI) and safety.  The first camp firmly believes that there is a path by which a zero-risk AI can be created and that risk arises only when we do not stick to this zero-risk path.

Further, this camp is vehemently against any development of AI until the zero-risk path can be researched, fully developed, and then rigidly followed.  The inconvenient fact that not developing AI (in a timely fashion) to help mitigate other existential risks is itself likely to lead to a substantially increased existential risk is, conveniently, generally not discussed by this camp.

The second camp believes equally firmly that the so-called zero-risk path proposed by the first camp is actually catastrophically self-contradictory (not to mention socially unacceptable and likely to be sabotaged).  This leads them to believe, therefore, that following it actually has a much higher existential risk than virtually all of the other alternatives.  This camp also regularly points out that *not* creating artificial intelligence in a timely fashion likely poses a higher total existential risk than creating it via a planned path with some risk.

If there were a calm, rational, scientific dialogue between the two, having both camps would actually be a very beneficial state of affairs.  Unfortunately, that is not the case.  The Singularity Institute (formerly the Singularity Institute for Artificial Intelligence, or SIAI) ignores, belittles, or summarily dismisses any efforts that don’t insist upon 0% risk.  Worse, it deceitfully propagandizes the public, either by acting as if the subject is not currently being researched or by condemning the researchers for “excluding the public,” since the public would *obviously* insist upon 0% risk.  The Cambridge co-founders continue this dishonest trend.

*  “It tends to be regarded as a flakey concern, but given that we don’t know how serious the risks are, that we don’t know the time scale, dismissing the concerns is dangerous. What we’re trying to do is to push it forward in the respectable scientific community.”  – Huw Price

* “My core main message is actually that this thing is not science fiction, this thing is not apocalyptic religion - this thing is something that needs serious consideration.”  – Jaan Tallinn

* “We need to engage with the wider public, since it is they who decide which applications should be pursued and which doors should be left closed.  Science should be part of general intellectual life and political discourse and should not be ghettoized. Scientists must get out of the box and take part in more general discussions.” – Martin Rees

 

I argue that statements like these, particularly from purported experts, should immediately raise “reality check” questions like:

* After a lifetime of movies, who hasn’t figured out that AI *might* be a risk?

* Why do the speakers feel the need to convince people of a “no-brainer”?  

* Who in their right mind wouldn’t want to prevent an avoidable existential risk?  

* Do those scientists have some other agenda?  Or, do the speakers have one?

 

In my case, the SIAI’s concerns caught my attention in 2004 and prompted me to join the effort to develop a solution.  As I dove into the problem, it became quite clear *to me* that their most important, foundational assumptions are fatally contradictory.  Obviously, I could be mistaken – but science has methods for sorting this type of thing out.  Except that it is equally clear that the Singularity Institute, the Centre for the Study of Existential Risk and others are far more interested in appealing to the court of public opinion than in scientific debate – and are willing to play serious hardball when doing so.

Pretending that scientists haven’t studied the existential risk of AI is akin to claiming that scientists haven’t studied (and refuted) Intelligent Design.  Reducing the existential risk of artificial intelligence is NOT “regarded as a flakey concern” that, inexplicably, no one is researching.  Yet, existential risk extremists insist on claiming that current AI researchers are being dangerously irresponsible (with proclamations like SIAI founder Eliezer Yudkowsky’s statement “And if Novamente should ever cross the finish line, we all die. That is what I believe or I would be working for Ben this instant.”).  Indeed, this has been done to such an extent that Ben Goertzel reports:

Actually, I’ve had two separate (explicitly) SIAI-inspired people tell me in the past that “If you seem to be getting too far with your AGI work, someone may have to kill you to avert existential risk.” Details were then explained, regarding how this could be arranged. . . . It does seem plausible to me that, if Scary Idea type rhetoric were amplified further and became more public, it could actually lead to violence against AGI researchers — similar to what we’ve seen in abortion clinics, or against researchers doing experimentation on animals, etc.

I’ve been careful in this article not to get into the details of the zero-risk path (although I promise to do so in subsequent articles — despite having been threatened as well).  My point here is simply that the *story* that the Singularity Institute and the CSER co-founders are telling clearly does not make any sense.  Scientists aren’t ignoring/avoiding an obvious solution.  So why make the claim that they are?

Critically important as well is the unasked question of whether eliminating existential risk is an overriding goal or whether there are other goals which are equally important.  If eliminating existential risk absolutely required the destruction of an alien civilization but allowing them to live introduced only a 0.1% existential risk, would you insist upon destroying them?  What if the risk were increased or decreased by several orders of magnitude?  Clearly, different individuals would have differing answers, and determining the level of acceptable risk should be a social policy issue – yet the extremists insist that the only acceptable level is zero.
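To make the trade-off concrete, consider a deliberately toy expected-value sketch in Python (the numbers, and the very idea of pricing an alien civilization, are invented purely for illustration):

    # A minimal, purely illustrative sketch: all numbers are invented.
    # It compares two policies by expected loss to show that the "right"
    # answer depends on value judgements, not just on the risk estimate.

    def expected_loss(p_extinction, extinction_cost, policy_cost):
        # Expected loss = probability of extinction times its cost,
        # plus whatever cost the policy itself imposes (e.g. moral cost).
        return p_extinction * extinction_cost + policy_cost

    EXTINCTION_COST = 1.0e12   # arbitrary units

    # Policy A: destroy the alien civilization (risk driven to ~0,
    # at a moral cost priced here, arbitrarily, at 5e9 units).
    loss_destroy = expected_loss(0.0, EXTINCTION_COST, policy_cost=5.0e9)

    # Policy B: let them live and accept a 0.1% residual existential risk.
    loss_coexist = expected_loss(0.001, EXTINCTION_COST, policy_cost=0.0)

    print(f"destroy the aliens: expected loss {loss_destroy:.2e}")  # 5.00e+09
    print(f"let them live:      expected loss {loss_coexist:.2e}")  # 1.00e+09

    # With these numbers, coexistence has the lower expected loss; value the
    # alien civilization one order of magnitude less and the answer flips.

The point is not the arithmetic but that the conclusion flips with the values plugged in, values that science alone cannot supply; that is exactly why this is a social policy question rather than a purely technical one.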

Thus, there is not just the scientific problem of reducing existential risk but also the social policy question of what we are *willing* to do to reduce existential risks.  The arguably “scientifically correct” path of destroying an alien civilization is also arguably morally reprehensible.  Yet, one can also easily imagine fearful individuals demanding their destruction nonetheless.  One of my favorite quotes is Phil Goetz eloquently expressing his view (which obviously mirrors my own):

The fact that you consider only /human/ life to have value – that you would rather condemn the entire universe to being tiled with humans and then stagnating for all eternity, than take any risk of human extinction – that’s the Really Scary Idea.

This is where the notion of “wicked problems” comes in (“wicked” not in the sense of evil but rather in terms of resistance to resolution).  Rittel and Webber coined the term in 1973 in the context of problems of social policy, saying:

The search for scientific bases for confronting problems of social policy is bound to fail because of the nature of these problems…Policy problems cannot be definitively described. Moreover, in a pluralistic society there is nothing like the indisputable public good; there is no objective definition of equity; policies that respond to social problems cannot be meaningfully correct or false; and it makes no sense to talk about ‘optimal solutions’ to these problems…Even worse, there are no solutions in the sense of definitive answers.

Clearly, existential risk is *NOT* just a difficult scientific problem which might have a solution (or set of solutions) that will allow us to reduce it to zero.  It is *NOT* scientists with agendas that are getting in the way of the zero-risk path.  There are *SERIOUS* questions as to what we should be willing to do solely to reduce existential risk.  Whether or not we should “Choose the zero-risk path” is not easily answered, both because it is clearly debatable whether such a path exists AND because we may not be willing to accept the consequences of that path.  This is the epitome of the questions that Laurence J. Peter was talking about.

On the other hand, insisting that choosing the zero-risk path is the only rational choice does appear to be a solid strategy by which fearful reductionists can attack scientists in a dangerously foolish attempt either to stop all risk from accelerating technology or to bring it under their personal control.  For example, the Lifeboat Foundation was founded by Eric Klien, who stated “I believe that world after world throughout the universe has been destroyed by science out of control and therefore there are no advanced aliens out there” and argued “What exactly will be the cause of our demise? It will be the Religion of Science.”  He proclaimed “I have developed Lifeboat Foundation with a Trojan Horse meme that tries to wrap our goals in the Religion of Science memes” and explained that “By wrapping our meme with a Religion of Science coating, I hope to develop enough resources that we can make sure that unlike every civilization so far, we can have at least SOME people survive this dangerous religion.”  It is worth noting that Cambridge co-founder Jaan Tallinn is on the advisory board (although he could have been duped by Klien as many others have been).

In contrast, Eliezer Yudkowsky, the founder of the Singularity Institute, is a fervent advocate of science and accelerating technological change – but only as long as it adheres to what he espouses as “rationality”.  He is the architect of the initial version of the zero-risk plan, yet in seven years neither he nor anyone else has updated its under-specified and problematic parts; instead, he has devoted himself to proselytizing his version of “rationality”.  His views about science and scientists (vs. his beliefs) are best conveyed by his own words, including the inflammatory conclusion of his “Science Doesn’t Trust Your Rationality”:

Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction.  After all, if it was that simple, we wouldn’t need a social process of science… right?  So, are you going to believe in faster-than-light quantum “collapse” fairies after all?  Or do you think you’re smarter than that?

It is also worth reading his fiction to discover his predictions as to what the likely results of his “rational” policies conceivably could be.

Constantly insisting that choosing the zero-risk path is a “no-brainer” is exactly the tactic of dishonest politicians, intelligent design proponents and others who use inflammatory and misleading “sound bites” in order to deliberately mislead an unsuspecting public into supporting their own obfuscated agendas.  Existential risk, and what we are willing to do to reduce it, clearly is a “wicked problem” that certainly shouldn’t be stealthily hijacked or resolved by threats.  The fact that we have yet another organization forming whose founders are already obviously willing to play such games is distressing and depressing rather than a cause for celebration.  This is worse than the intelligent design proponents’ “Teach the controversy” since it disingenuously insists that there are few dissenters and no rational dispute to its claims despite clear and easily obtained evidence to the contrary.  Please join me in resisting this assault and bringing the true choices to light.



Comments:

You raise interesting questions with your critique, and I have 2 of my own for you. There must be a more complex story behind your own efforts to help with SIAI work which I’m curious about, and I wonder if the camp you identify with now is doing something concrete. Is there any organized group or body of work that represents the other camp you mention that is working on an “acceptable existential risk” approach to safe enough AI? I’m not familiar with any. Perhaps the only people willing to tackle a problem seem extreme because they’re the ones who take it seriously enough to work on it, and the “moderates” aren’t motivated enough to make sustained organized efforts?

By Alex Peake on Dec 04, 2012 at 2:32pm

Excellent points and questions, Alex.  The story behind my efforts to help with SIAI is not complex at all.  They disagree with my positions and want nothing to do with me.  If I were some random crank, that probably would be their best option. 

On the other hand, my arguments have been passing the necessary peer review to be presented at professional conferences like Artificial General Intelligence (2009, 2010, and 2011), Biologically Inspired Cognitive Architectures (2008, 2009, 2010, 2011, 2012), the Singularity Track at ECAP 2010, etc. and have been very well received.  The one example of our limited interaction that can easily be seen (from the one time they *did* feel the need to interact with me after my presentation at AGI2011) is available at http://vimeo.com/channels/agi10#15504215 (my original presentation) and http://vimeo.com/channels/agi10#20744085 (Roko Mijic’s presentation, my reply to a question, and then a debate between the two of us).

The “camp I now identify with” has just recently started to get organized as a single entity (http://digitalwisdominstitute.org). In the meantime, my personal body of work is accessible through http://becominggaia.wordpress.com/papers/ (except I need to get my November BICA 2012 presentation posted).  Hopefully that link will make me look “extreme” to you because I do take it seriously enough to spend fairly large amounts of time and money on it without being paid.

On the other hand, the SIAI has not produced anything to advance their zero-risk path research since Eliezer Yudkowsky’s 2004 Coherent Extrapolated Volition paper (http://singularity.org/files/CEV.pdf).  That is a HUGE problem since all AGI research is supposed to wait upon their research.  They *have* published a lot since then—just all of it hammering the existential risk point and none of it extending their proposed solution.

Finally, my personal history and details really shouldn’t be relevant unless they caused me to introduce inaccuracies or a horribly unfair slant into the article above.  As I freely acknowledge, I could be horribly wrong in all of my arguments.  But even if I am wrong, the facts and quotes that I point to (which can all be verified) are *extremely* problematical.

By Mark Waser on Dec 04, 2012 at 4:16pm

I think that so long as an AI were merged with a human brain you could mitigate a great deal of the risk by starting off from a “friendly” place with the AI, especially if it was an extension of you. However, spontaneously generated strong AIs seem to me to hold a high chance of going rogue. That’s assuming we would be of any interest to them at all. But nothing is going to slow down the race to a strong AI, and as computer power increases the chance of some random person making one in their garage grows in likelihood. I think we will end up with swarms of them, some “good”, some “bad” and some indifferent.

By Elliott on Dec 04, 2012 at 4:59pm

Why is there no mention of the obvious risks of using electricity, or modern vehicular transportation, etc.? If zero risk had been insisted upon before the adoption of these inventions, we would still be riding horses and buggies and using candlelight. The safety measures now in place were developed as the technology developed. I don’t think that progress will be stopped for a zero-risk solution to often unforeseeable risks… nor should it be.

By Mike Casdi on Dec 04, 2012 at 11:55pm

Ironically, Mark, the biggest risk, I’d say the only risk, comes from those who fear there is a risk. The risk is so insignificant it is not even worth considering. You don’t even need to program intelligent AGIs to avoid existential risk, you simply need to create intelligence. The risk is created by fearing the risk.

It is ludicrous that you have been threatened; it highlights the irrationality of the people who excessively fear risk. The type of fascists who make such threats are the real danger because they represent tyranny. Tyranny causes the biggest risks to occur.

Bravo, great article. There are some minor points I disagree with but on the whole it is excellent. It is so frustrating trying to counter the mainstream pessimists who have a fearful, paranoid view of the future. They tend to have greater funding and thus media access, which presents a distorted view of Transhuman opinion on these issues.

By Singularity Utopia on Dec 05, 2012 at 6:27am

There IS a risk of sapient rogue AI attacking and perhaps destroying Mankind.  There is ALSO a risk that—if we try to avoid the first risk by criminalizing attempts to create sapient AI—the result will be that the people who continue to try to create sapient AI will be criminals, with all that this implies in terms of their likely tendency toward irresponsible designs.  And I greatly fear that if we attempt to avoid all risk, we will find ourselves slipping into a path of even greater risk, like a man who tries to avoid a minor fire by leaping off the 40th floor of a skyscraper.

By Jordan S. Bassior on Dec 05, 2012 at 8:26am

I say very little

By Churelle on Dec 05, 2012 at 7:41pm

I am

By Churelle on Dec 05, 2012 at 7:49pm

Mark, thanks for the background and clarification. I’ll check out your work and I hope the Wisdom Institute contributes beneficially to the field. The risk of unfriendly AI is non-zero and the timeframe is unpredictable. Those who believe in friendly AGI should be able to be friendly to each other so we can keep the best possible conversation going on how to deal with the existential risks and existential breakthroughs that we stand to experience together.

By Alex Peake on Dec 06, 2012 at 6:19am

Yet questions still remain:

Will AI be a tool?
or will it be a separate entity under its own control?

Both paths have their own set of significant existential risks for humanity.  I prefer the latter.

Great stuff

By James Suter on Dec 06, 2012 at 3:46pm

“Will AI be a tool?
or will it be a separate entity under its own control?”

Gradually switching from the former to the latter.

I see two reasons not to be overly concerned about AI going rogue:
1. the often quoted merging of bio and tech intelligence.
2. the fact that all AI emerging here will be based on human memetics, which makes it probable that at worst it will be able to do the same unpleasant things to humans and others as have been done by humans for ages.  Which is bad enough, but we survived ourselves so far.

However, one should not forget that the main feature of the singularity is unpredictability.

By René Milan on Dec 07, 2012 at 2:44am

All have missed the boat.

While scientists politely debate the possibilities of theoretical killing machines, the Israelis have already set up robot machine-guns with kill-anything-that-moves modes around the Palestinians.  America is already using man-in-the-loop death-from-above machines, so far only acknowledged in Afghanistan / Pakistan, soon to graduate to man-ON-the-loop, mostly autonomous, then pretty much fully autonomous, in a matter of a year or three.  The cruise missile platforms follow almost immediately.

The American military has practically unlimited funding, a mandate to develop the ultimate killing machines at any cost, and dark projects that require no public oversight.  It also has blank-check backing from at least half of the American public.  The Pentagon is currently worried about “when” we get Terminators, how will soldiers feel about taking orders from them?  As Boston Dynamics is launching with a squadron of Big Dogs as soon as they can be built, this is not theoretical.  It’s very practical, and coming soon to a war theater near you. 

The horse is out of the barn already.  Debating about how wide to keep the barn door open misses the reality of politics.

Man’s inhumanity to man also knows no bounds.  Americans have recently wiped out millions of Iraqi soldiers, hundreds of thousands of Iraqi civilians, and the infrastructure of an entire country, on a quest for imaginary nukes, yet show no remorse.  Israel uses armored bulldozers to flatten entire towns, the very definition of forced relocation and ethnic cleansing, yet shows no remorse.  Both the Pentagon and Mossad know Iran has no nuclear weapons and has not decided to build any, yet Israel, with 300 warheads and a fleet of nuclear-armed submarines, insists on wiping Iran out, and America slowly but remorselessly obeys. 

It is absolutely necessary to research and develop artificial morals and ethics in the near term.  This will not stop the military from immoral and unethical practices; witness Israel’s recent usage of white phosphorus on the Gaza population in 2009 [google for images].  In fact, conscientious objector robots will immediately be rehacked and overridden.  But at least we’ll try.

We need artificial morals yesterday.

Only when war becomes rude and out of fashion the world over will the world be safe.  This requires omniscient macro-financials to let deciding people know how stupid it is.  Even then, 2 out of 3 times people will go with their gut.  Work towards wisdom and enlightenment.  Start with “Nuclear Energy for All; Nuclear Weapons for None”.

By John K. Myers on Dec 27, 2012 at 11:55am

John K. Myers said “Work towards wisdom and enlightenment.”

http://www.digitalWisdomInstitute.org

By Mark Waser on Dec 28, 2012 at 5:01am

