Comments:

You raise interesting questions with your critique, and I have two of my own for you. There must be a more complex story behind your own efforts to help with SIAI’s work, which I’m curious about, and I wonder whether the camp you now identify with is doing anything concrete. Is there any organized group or body of work representing the other camp you mention that is working on an “acceptable existential risk” approach to safe-enough AI? I’m not familiar with any. Perhaps the only people willing to tackle a problem seem extreme because they’re the ones who take it seriously enough to work on it, while the “moderates” aren’t motivated enough to make sustained, organized efforts?

By Alex Peake on Dec 04, 2012 at 2:32pm

Excellent points and questions, Alex.  The story behind my efforts to help with SIAI is not complex at all.  They disagree with my positions and want nothing to do with me.  If I were some random crank, that probably would be their best option. 

On the other hand, my arguments have passed the peer review necessary to be presented at professional conferences such as Artificial General Intelligence (2009, 2010, and 2011), Biologically Inspired Cognitive Architectures (2008, 2009, 2010, 2011, and 2012), and the Singularity Track at ECAP 2010, and they have been very well received.  The one example of our limited interaction that can easily be seen (from the one time they *did* feel the need to interact with me, after my presentation at AGI-10) is available at http://vimeo.com/channels/agi10#15504215 (my original presentation) and http://vimeo.com/channels/agi10#20744085 (Roko Mijic’s presentation, my reply to a question, and then a debate between the two of us).

The “camp I now identify with” has only recently started to get organized as a single entity (http://digitalwisdominstitute.org).  In the meantime, my personal body of work is accessible through http://becominggaia.wordpress.com/papers/ (though I still need to post my November BICA 2012 presentation).  Hopefully that link will make me look “extreme” to you, since I do take the problem seriously enough to spend fairly large amounts of time and money on it without being paid.

On the other hand, the SIAI has not produced anything to advance its zero-risk research path since Eliezer Yudkowsky’s 2004 Coherent Extrapolated Volition paper (http://singularity.org/files/CEV.pdf).  That is a HUGE problem, since all AGI research is supposed to wait upon their research.  They *have* published a lot since then, but all of it hammers the existential-risk point and none of it extends their proposed solution.

Finally, my personal history and details really shouldn’t be relevant unless they caused me to introduce inaccuracies or a horribly unfair slant into the article above.  As I freely acknowledge, I could be horribly wrong in all of my arguments.  But even if I am wrong, the facts and quotes that I point to (all of which can be verified) are *extremely* problematic.

By Mark Waser on Dec 04, 2012 at 4:16pm

I think that as long as an A.I. were merged with a human brain, you could mitigate a great deal of the risk by starting the A.I. off from a “friendly” place, especially if it were an extension of you. However, spontaneously generated strong A.I.s seem to me to carry a high chance of going rogue. That’s assuming we would be of any interest to them at all. But nothing is going to slow down the race to strong A.I., and as computing power increases, the likelihood of some random person making one in their garage grows. I think we will end up with swarms of them: some “good,” some “bad,” and some indifferent.

By Elliott on Dec 04, 2012 at 4:59pm

Why is there no mention of the obvious risks of using electricity, or modern vehicular transportation, etc.? If zero risk had been insisted upon before the adoption of these inventions, we would still be riding horses and buggies and using candlelight. The safety measures now in place were developed as the technology developed. I don’t think that progress will be stopped for a zero-risk solution to often unforeseeable risks… nor should it be.

By Mike Casdi on Dec 04, 2012 at 11:55pm

Ironically, Mark, the biggest risk, I’d say the only risk, comes from those who fear there is a risk. The risk is so insignificant it is not even worth considering. You don’t even need to program intelligent AGIs to avoid existential risk; you simply need to create intelligence. The risk is created by fearing the risk.

It is ludicrous that you have been threatened; it highlights the irrationality of the people who excessively fear risk. The type of fascists who make such threats are the real danger, because they represent tyranny. Tyranny is what causes the biggest risks to occur.

Bravo, great article. There are some minor points I disagree with, but on the whole it is excellent. It is so frustrating trying to counter the mainstream pessimists who hold a fearful, paranoid view of the future. They tend to have greater funding, and thus media access, which presents a distorted view of Transhumanist opinion on these issues.

By Singularity Utopia on Dec 05, 2012 at 6:27am

There IS a risk of sapient rogue AI attacking and perhaps destroying Mankind.  There is ALSO a risk that, if we try to avoid the first risk by criminalizing attempts to create sapient AI, the result will be that the people who continue to try to create sapient AI will be criminals, with all that this implies about their likely tendency toward irresponsible designs.  And I greatly fear that if we attempt to avoid all risk, we will find ourselves slipping onto a path of even greater risk, like a man who tries to avoid a minor fire by leaping off the 40th floor of a skyscraper.

By Jordan S. Bassior on Dec 05, 2012 at 8:26am

I say very little

By Churelle on Dec 05, 2012 at 7:41pm

I am

By Churelle on Dec 05, 2012 at 7:49pm

Mark, thanks for the background and clarification. I’ll check out your work, and I hope the Digital Wisdom Institute contributes beneficially to the field. The risk of unfriendly AI is non-zero and the timeframe is unpredictable. Those who believe in friendly AGI should be able to be friendly to each other, so we can keep the best possible conversation going about how to deal with the existential risks, and existential breakthroughs, that we stand to experience together.

By Alex Peake on Dec 06, 2012 at 6:19am

Yet questions still remain:

Will AI be a tool?
Or will it be a separate entity under its own control?

Both paths have their own set of significant existential risks for humanity.  I prefer the latter.

Great stuff

By James Suter on Dec 06, 2012 at 3:46pm

“Will AI be a tool?
Or will it be a separate entity under its own control?”

Gradually switching from the former to the latter.

I see two reasons not to be overly concerned about AI going rogue:
1. the often-quoted merging of biological and technological intelligence;
2. the fact that all AI emerging here will be based on human memetics, which makes it probable that, at worst, it will be able to do the same unpleasant things to humans and others as humans have done to each other for ages.  That is bad enough, but we have survived ourselves so far.

However one should not forget that the main feature of the singularity is unpredictability.

By René Milan on Dec 07, 2012 at 2:44am

All have missed the boat.

While scientists politely debate the possibilities of theoretical killing machines, the Israelis have already set up robotic machine guns with kill-anything-that-moves modes around the Palestinians.  America is already using man-in-the-loop death-from-above machines, so far acknowledged only in Afghanistan and Pakistan, soon to graduate to man-ON-the-loop (mostly autonomous), then pretty much fully autonomous, within a year or three.  The cruise-missile platforms follow almost immediately.

The American military has practically unlimited funding, a mandate to develop the ultimate killing machines at any cost, and dark projects that require no public oversight.  It also has blank-check backing from at least half of the American public.  The Pentagon is already worrying about “when” we get Terminators: how will soldiers feel about taking orders from them?  With Boston Dynamics ready to launch a squadron of Big Dogs as soon as they can be built, this is not theoretical.  It’s very practical, and coming soon to a war theater near you.

The horse is out of the barn already.  Debating about how wide to keep the barn door open misses the reality of politics.

Man’s inhumanity to man also knows no bounds.  Americans have recently wiped out millions of Iraqi soldiers, hundreds of thousands of Iraqi civilians, and the infrastructure of an entire country, on a quest for imaginary nukes, yet show no remorse.  Israel uses armored bulldozers to flatten entire towns, the very definition of forced relocation and ethnic cleansing, yet shows no remorse.  Both the Pentagon and Mossad know Iran has no nuclear weapons and has not decided to build any, yet Israel, with 300 warheads and a fleet of nuclear-armed submarines, insists on wiping Iran out, and America slowly but remorselessly obeys. 

It is absolutely necessary to research and develop artificial morals and ethics in the near term.  This will not stop the military from immoral and unethical practices; witness Israel’s use of white phosphorus on the Gaza population in 2009 [google for images].  In fact, conscientious-objector robots will immediately be re-hacked and overridden.  But at least we’ll try.

We need artificial morals yesterday.

Only when war becomes rude and out of fashion the world over will the world be safe.  That requires omniscient macro-financials to let decision-makers know how stupid war is.  Even then, two out of three times people will go with their gut.  Work towards wisdom and enlightenment.  Start with “Nuclear Energy for All; Nuclear Weapons for None”.

By John K. Myers on Dec 27, 2012 at 11:55am

John K. Myers said “Work towards wisdom and enlightenment.”

http://www.digitalWisdomInstitute.org

By Mark Waser on Dec 28, 2012 at 5:01am
