So . . . . the Elon Musk anti-AI hype cycle has started up again.
Elon Musk, Warren G, and Nate Dogg: It’s Time to ‘Regulate’ A.I. Like Drugs
Worse, we have Stuart Russell’s movie, Slaughterbots.
screen-grab from Slaughterbots
Actually, Slaughterbots is exceptionally well-done and really should be mandatory viewing for everyone. If you haven’t seen it, go watch it now: it’s less than eight minutes long, “entertaining”, and scary AF.
The first problem is that what the film depicts can be done today by most computer science graduates on a fairly low budget. The only real obstacle is getting the shaped explosive, and the internet provides all sorts of opportunities and work-arounds for that.
screen-grab from Slaughterbots
The second problem is that “slaughterbots” are too effective and too “clean” a solution. Land mines were outlawed for a variety of reasons beyond civilian casualties. Cruise missiles and fully autonomous systems like the Israeli Harpy (used by nine countries and operational since 2008) have generated a lot of protest but are too useful for countries to give up, and they are getting better and better.
But the biggest problem, and what I want to rant about, is the completely muddled framing of all of this as a single “AI problem”. In reality, there are four very different and non-overlapping AI problems:
1. Weaponry that uses software developed as part of AI research but which is not itself truly “autonomous”.
2. The fear of truly autonomous killer robots (aka Arnold Schwarzenegger’s Terminator).
3. The already existing problems of humans either intentionally using AI to harm others and sway elections, or unintentionally causing harm through bias and other “black box” shortcomings.
4. The rapidly increasing problem of AI replacing humans.
I have argued for years with Noel Sharkey and the International Committee for Robot Arms Control about their rhetorical tactics of conflating the current entirely pre-programmed (and still fairly stupid) weapons with future self-improving fully autonomous robots. Stuart Russell is only backing into the autonomous weaponry debate because he is deathly afraid of future super-intelligent AI. And indeed, most of what the average citizen gets through the news from Elon Musk, Nick Bostrom, the Machine Intelligence Research Institute (MIRI), the Future of Life Institute (FLI) and others is actually weaponized narrative to ensure that their fears are honored.
What we haven’t seen is any effective collaborative action. We’ve seen several “ethics” boards formed, but membership has been strictly limited and there have been almost no published results. MIRI and FLI have sunk a lot of money into one very specific line of research, one that precisely repeats the errors that prevented AI from making progress for decades. Instead of partisan fear-mongering and calls for regulation with absolutely no details, we need to divide the problem into rational pieces and start proposing rational solutions COLLABORATIVELY.
image provided courtesy of Metric Media
The first step is to separate the problems where humans are responsible (i.e. ALL the current problems) from the problems where machines are responsible (FUTURE). We need to stop nonsensical fear-mongering proclamations like Elon Musk’s claim that humanity has only a 5-10% chance of surviving artificial intelligence. And we need to start investigating ALL avenues together.
Machines with at-least-human intelligence are coming whether we like it or not. There are already many clear and present dangers from the limited AI that we already have. Let’s stop the screaming and get down to business. The future of humanity is on the line.
=================
November 30, 2017 at 12:01 pm
Neither I nor the International Committee for Robot Arms Control has ever had any argument or discussion with Mark Waser. In fact, we have never even heard of him before. If we had had contact, we could have put him straight on our approach.
While I agree with a lot that has been said in the article, I would like to make two points here to clarify what he has written and to separate the Campaign to Stop Killer Robots from themes about “the rise of the robots”:
1. We do not confound SARMO weapons – weapons that Sense and React to Military Objects – with Autonomous Weapons Systems and have written papers and reports to separate the two.
2. The Campaign to Stop Killer Robots does not have a muddled framing of the AI problem. In fact, it is very clear that we wish to get a new international legally binding treaty to ban weapons systems that have the critical functions of target selection and the application of violent force. Our concern is with the type of meaningful human control that is used in weapons systems.
Please do not confuse our humanitarian disarmament proposals with ideas about super intelligence or malevolent AI systems. Those are outside of our remit.
December 4, 2017 at 5:55 pm
So, Noel . . . . the Internet Archive’s Wayback Machine clearly disproves your claim about never having had any argument or discussion with me:
https://web.archive.org/web/20130815191732/http://transhumanity.net/articles/entry/dissecting-the-scientists-call-to-ban-autonomous-lethal-robots
unless, of course, the person in the comment section was someone else impersonating you . . . .
December 4, 2017 at 6:44 pm
I know that at our AGI Lab in Provo we actively disagree and are working on systems that have the key goal of removing the human element from weapon systems. This is vital to maintaining military power and projecting power without putting our own soldiers in harm’s way. AI should be what is driving weapon systems, in my opinion. We oppose any ban on such systems unless there is an exclusion for the US Military.