The last six months have seen a rising flood of publicity about “killer robots” and autonomy in weapons systems. On November 19, 2012, Human Rights Watch (HRW) issued a 50-page report, “Losing Humanity: The Case against Killer Robots,” outlining concerns about “fully autonomous weapons that could select and engage targets without human intervention” and claiming that a “preemptive prohibition on their development and use is needed”. Two days later, the United States Department of Defense released Directive 3000.09, which “assigns responsibility for the development and use of autonomous and semi-autonomous functions in weapon systems”. Now, social media is all abuzz because the “International Committee for Robot Arms Control” (ICRAC) has issued a Scientists’ Call to Ban Autonomous Lethal Robots “in which the decision to apply violent force is made autonomously”.
Arms control is an immediate, critical issue. Weapons have already been fielded that are disasters just waiting to happen. The most egregious example described by the HRW report is the Israeli Harpy – a fire-and-forget “loitering attack weapon” designed to autonomously fly to and patrol an assigned area and attack any hostile radar signatures with a high-explosive warhead. Indeed, HRW invokes our worst fears by quoting Noel Sharkey’s critique that “it cannot distinguish between an anti-aircraft defense system and a radar placed on a school by an enemy force”. Yet, the Harpy has been sold to Chile, India, South Korea, the People’s Republic of China and Turkey.
The problem is that the HRW report and the ICRAC seem far more intent on that still-far-off day when we have self-willed machines than on the issues we have here and now. Indeed, and oddly enough, the first paragraph of the summary includes the claims that “Some military and robotics experts have predicted that “killer robots”—fully autonomous weapons that could select and engage targets without human intervention—could be developed within 20 to 30 years” and “At present, military officials generally say that humans will retain some level of supervision over decisions to use lethal force, but their statements often leave open the possibility that robots could one day have the ability to make such choices on their own power”. Excuse me? Did the Harpy not fulfill the first claim years ago? Indeed, does its design not require that it “make such choices” under its own power?
It is difficult for me to determine where HRW and ICRAC are coming from, other than some unbridled fear that seems to have overwhelmed any thought of nuanced, constructive debate in favor of absolutist propaganda. Their concerns range from the “decision-making” of current South Korean SGR-1 sentry robots and Israeli Guardium systems (“decisions about the application of violent force must not be delegated to machines”) to arguments that robots could *never* comply with International Humanitarian Law. Worse, they totally (and irresponsibly) conflate the entirely transparent, deterministic, and reproducible following of clearly and rigidly defined algorithms in current systems with the non-repeating decision-making of some future self-modifying, self-willed entity. These are all issues that desperately need to be raised and discussed, but the HRW and ICRAC approach will only make public examination and deliberation more difficult.
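To make that distinction concrete, here is a minimal sketch of a “decision” in the strict computing sense – the kind of deterministic if-then rule a Harpy-style system follows. Everything in it (the function names, the frequency bands, the print-out at the end) is invented purely for illustration and describes no actual weapon system:

```python
# Purely hypothetical sketch: all names, bands, and logic below are invented
# for illustration and describe no actual weapon system.

# Emitter frequency bands (MHz) that the rule treats as "hostile". The rule
# cannot tell a military radar from an identical one placed on a school.
HOSTILE_RADAR_BANDS = [(2700.0, 2900.0), (8500.0, 10500.0)]

def is_hostile_signature(emitter_freq_mhz: float) -> bool:
    """Return True if the detected emitter falls inside a listed band."""
    return any(low <= emitter_freq_mhz <= high
               for low, high in HOSTILE_RADAR_BANDS)

def engage_decision(emitter_freq_mhz: float, in_patrol_area: bool) -> str:
    """The entire 'decision': a fixed, transparent chain of if-then tests.

    Given the same inputs it always returns the same output; there is no
    volition, learning, or self-modification anywhere in the loop.
    """
    if not in_patrol_area:
        return "HOLD"
    if is_hostile_signature(emitter_freq_mhz):
        return "ENGAGE"  # triggered by the signature match alone
    return "HOLD"

if __name__ == "__main__":
    # Identical inputs produce identical outputs, every time:
    print(engage_decision(2800.0, in_patrol_area=True))  # ENGAGE
    print(engage_decision(1090.0, in_patrol_area=True))  # HOLD
```

The point of the sketch is that the “decision” lives entirely in the table the programmers supplied: identical inputs yield identical outputs every time, and nothing in the loop can distinguish an anti-aircraft radar from an identical set placed on a school.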
While I am certainly sympathetic to the motives and concerns behind the “Scientists’ Call”, the problem is that it is, in reality, nothing more than argumentative propaganda that is more likely to feed the denunciation of science and scientists than to foster the clear, nuanced reasoning necessary to have any positive effect on limiting dangerous behavior. Unfortunately, it’s yet another case of “with friends like these, who needs enemies?”, with its effects being the opposite of those intended. Ideally, the informed futurist will decline to sign or discuss the ban in favor of immediate, detailed (if less expansive) progress.
In the coming months, I’ll be speaking about both ends of the spectrum—but one at a time and at separate venues. First, I’ll be speaking about the near-term issue of responsibility for automatic algorithm followers at the First Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics in May. Later, I’ll be addressing the eventual issues of self-willed machines, as part of Ethics in the Age of Intelligent Machines at World Future 2013 in July. Hopefully, many of you will be interested in doing the same.
* hero image from http://www.joblo.com/horror-movies/news/new-tv-spot-for-terminator-genisys-shows-new-footage-195
archived comments:
What on earth is a self-willed machine? My interest in this, and HRW’s, has been motivated by reading the research plans and road maps of all of the US military forces since the early 2000s, which have made clear the desire and motivation to use autonomous robot weapons – they give reasonable military reasons.
The plans are well underway with the US X-47B and Phantom Ray and the UK Taranis – fully autonomous fast subsonic intercontinental combat aircraft – all in advanced stages of testing. Then there is the Chinese Anjian, or Dark Sword, a planned supersonic fully autonomous air-to-air jet fighter. If you go and read a bit more you can find out about current autonomous land, air, sea and submarine vehicles. I have no time to present the research results here, but you can search to find them in my peer-reviewed journal articles on the topic.
These are simple programmed devices that work on the basis of sensors – what roboticists (not philosophers or politicians) call autonomous vehicles. There is no notion of self-willed or any kind of will involved except for the militaries who will use them. These are advanced computerised military weapons – full stop. They are being developed today and we want to stop this automation of warfare.
It would be great if you did some research on what ‘autonomy’ means in the field of robotics and had a good look at the military plans – particularly from the US and China – and also read the US Department of Defence directive on autonomous weapons of November 21, 2012 – weapons that, once launched, can select and engage targets without further intervention – before you give your talks. In this way you will be better informed about the people who are trying to prevent these weapons from getting into the world’s military arsenals and starting a new arms race.
Machines should not be delegated the responsibility to take human lives without a human in the decision loop. There is no suggestion in any of the writings on this matter about ‘self-willed’, whatever you mean by that.
best wishes,
noel
By Noel Sharkey on Mar 28, 2013 at 2:00pm
Hi Noel,
Let’s consider the last *bolded* (summary) sentence of the Scientists’ Call: “Decisions about the application of violent force must not be delegated to machines”. If the statement were “We cannot be so irresponsible as to rely upon our predictions of how complex algorithms will interact – particularly under circumstances where hostile forces will attempt to provide misleading input to those algorithms”, I would sign the call in a heartbeat. Instead, the call uses misleading terms like “decisions” and “delegated” that imply some sort of volition or, worse, the possibility that responsibility *could* be delegated – presumably to take advantage of the fear of “terminator”-style “killer robots” (as in the subtitle of HRW’s report).
Your statement “It would be great if you did some research on what ‘autonomy’ means in the field of robotics and had a good look at the military plans” is an extremely disingenuous ad hominem. The second paragraph of my article specifically invoked the immediate clear and present danger of the Harpy, an already deployed example of what *needs* to be banned. The third paragraph then criticizes the first paragraph of the summary of the HRW report for misleadingly claiming “Some military and robotics experts have predicted that “killer robots”—fully autonomous weapons that could select and engage targets without human intervention—could be developed within 20 to 30 years”. What exactly is your criticism of my knowledge?
Further, even a superficial familiarity with the literature in the field of robotics makes it obvious that the term “autonomy” is an example of what Marvin Minsky calls a “suitcase” word. My deliberate and clear invocation of the term “self-willed” was precisely an objection to the Scientists’ Call’s use of terms that imply volition and invoke irrational fears, even as you claim that there is “no notion of self-willed or any kind of will involved except for the militaries who will use them.” If this is truly the case, will you not gratefully accept my rephrasing above and my whole-hearted backing?
Unless your true concern really is those machines that are still 20 to 30 years out, my second paragraph accurately made your case far more immediate and compelling than the HRW’s misleading statement did. Yet, for some reason, you chide *me* for needing a good look at the military’s plans…
My point is that you can’t have your cake and eat it too. You can argue to ban the extremely problematic and dangerous weapons that have already been deployed, and I will back you 100%. Or, you can address the concern of extremely advanced machines (where it might start to make sense to speak of them making decisions and delegating responsibility – but then you have to stop making other claims of incapability that would no longer hold true long before any rational person would consider such delegation). But you can’t effectively and honestly do both at once without clearly distinguishing between them.
My point is entirely that HRW and the ICRAC are improperly conflating two radically different arguments. If you do not mean to be doing so, then *please* change your deceptive wording. If you do mean to be arguing two separate points, then please clearly separate them. Both issues desperately need to be clarified and rationally planned for in order to prevent dangerous outcomes. Conflating the two is NOT something that scientists should be doing.
Note: If you are willing, please send any response to transhumanity.net.submissions@gmail.com (or post it as a comment with the statement that it may be reposted as an article). Transhumanity.net would be delighted to publish it as a top-level article rather than having it be lost in the weeds of comments.
By Mark Waser on Mar 29, 2013 at 8:17am
Thank you for this response. It states your objections much more clearly. I cannot speak for HRW – only ICRAC and the statement.
I see now where you are confused about the wording. Thank you for pointing this out. It was actually written, re-written and revised by a considerable number of scientists before posting. Although obviously still not perfect, agreement was reached.
We are actually talking about machines under development and those planned for development, as I mentioned in my comment (the US military plans take us to 2032). Of course, the Harpy is the cusp machine. We are NOT talking about some sort of super-intelligent terminators.
The word ‘decision’ seems to be causing you the difficulty, and it was discussed and well considered. It is meant strictly in the computing sense of decision (or in the sense of mathematical decision theory). It can be as simple as an If…Then statement or a complex of them.
Any wording attributing ‘will’ or high-level cognition to machines in the statement is unintentional. We are talking about military computational systems making decisions (in the computing sense of decision) to kill people.
If you support that, let us not let the wording stop us from agreeing. I hope that our intentions are clearer now. Others in the field of computing have not interpreted ‘decision’ in the volitional or human sense, but that does not mean that your interpretation is not valid.
I am glad that you support us in spirit, if not in wording.
best,
noel
By Noel Sharkey on Mar 29, 2013 at 9:00am