The past six months have seen a rising flood of publicity about “killer robots” and autonomy in weapons systems.  On November 19, 2012, Human Rights Watch (HRW) issued a 50-page report, “Losing Humanity: The Case against Killer Robots,” outlining concerns about “fully autonomous weapons that could select and engage targets without human intervention” and claiming that a “preemptive prohibition on their development and use is needed”.  Two days later, the United States Department of Defense released Directive 3000.09, which “assigns responsibility for the development and use of autonomous and semi-autonomous functions in weapon systems”.  Now, social media is abuzz because the International Committee for Robot Arms Control (ICRAC) has issued a Scientists’ Call to Ban Autonomous Lethal Robots — that is, robots “in which the decision to apply violent force is made autonomously”.

Arms control is an immediate, critical issue.  Weapons have already been fielded that are disasters just waiting to happen.  The most egregious example described in the HRW report is the Israeli Harpy, a fire-and-forget “loitering attack weapon” designed to fly autonomously to an assigned area, patrol it, and attack any hostile radar signature with a high-explosive warhead.  Indeed, HRW invokes our worst fears by quoting Noel Sharkey’s critique that “it cannot distinguish between an anti-aircraft defense system and a radar placed on a school by an enemy force”.  Yet the Harpy has been sold to Chile, India, South Korea, the People’s Republic of China, and Turkey.

The problem is that the HRW report and the ICRAC seem far more intent on that still-far-off day when we have self-willed machines than on the issues we face here and now.  Indeed, and oddly enough, the first paragraph of the report’s summary claims that “Some military and robotics experts have predicted that ‘killer robots’—fully autonomous weapons that could select and engage targets without human intervention—could be developed within 20 to 30 years” and that “At present, military officials generally say that humans will retain some level of supervision over decisions to use lethal force, but their statements often leave open the possibility that robots could one day have the ability to make such choices on their own power”.  Excuse me?  Didn’t the Harpy fulfill the first claim years ago?  Indeed, doesn’t its very design require that it “make such choices” under its own power?

It is difficult for me to determine where HRW and ICRAC are coming from, other than an unbridled fear that seems to have overwhelmed any thought of nuanced, constructive debate in favor of absolutist propaganda.  Their concerns range from the “decision-making” of the current South Korean SGR-1 sentry robots and Israeli Guardium systems (“decisions about the application of violent force must not be delegated to machines”) to arguments that robots could *never* comply with International Humanitarian Law.  Worse, they totally (and irresponsibly) conflate the transparent, deterministic, and reproducible execution of clearly and rigidly defined algorithms by current systems with the non-repeating decision-making of some future self-modifying, self-willed entity.  These are all issues that desperately need to be raised and discussed, but the HRW and ICRAC approach will only make public examination and deliberation more difficult.

While I am certainly sympathetic to the motives and concerns behind the “Scientists’ Call”, the problem is that it is, in reality, nothing more than argumentative propaganda, more likely to feed the denunciation of science and scientists than to foster the clear and nuanced reasoning necessary to have any sort of positive effect on limiting dangerous behavior.  Unfortunately, it is yet another case of “with friends like these, who needs enemies?”, with effects likely to be the opposite of those intended.  Ideally, the informed futurist will decline to sign or debate the ban, in favor of immediate, detailed (if less expansive) progress.

In the coming months, I’ll be speaking about both ends of the spectrum, though one at a time and at separate venues.  First, I’ll address the near-term issue of responsibility for automatic algorithm-followers at the First Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics in May.  Later, I’ll take up the eventual issues of self-willed machines as part of Ethics in the Age of Intelligent Machines at World Future 2013 in July.  I hope many of you will be interested in joining those discussions.

* hero image from http://www.joblo.com/horror-movies/news/new-tv-spot-for-terminator-genisys-shows-new-footage-195