In the politically charged comic strip Prickly City a few days ago, a small drone is seen chasing a coyote across a vaguely desert-like terrain. The coyote complains, in effect, “I know I don’t have identification papers, but I’m a coyote. I COME from here.” The drone unthinkingly continues its pursuit.
The March 7 strip features a conversation about the propriety of such use of drones. The protagonist objects that there is no due process or rule of law in sending drones after people and demands “protections to make sure you don’t just drone people because you don’t like them.” The response is that such protections do indeed exist: “Drone Court.”
The morning papers of the same date carry news of Senator Rand Paul filibustering Obama’s nominee for CIA director, John Brennan, until the administration commits to never using drones to kill noncombatant Americans.
Press and television coverage has for many months been saturated with stories of the use of drones in the war on terror, although these drones appear to be of the non-autonomous variety: their deployment and functions are controlled by human beings, albeit at remote locations.
The current (March 2013) issue of National Geographic carries a surreal article, replete with creepy pictures of drones in the form of moths and hummingbirds, entitled “The Drones Come Home.” Noting that Obama signed a law last year requiring the FAA to open US airspace to drones by September 30, 2015, the article traces the discreet but growing use of seemingly unarmed but spying drones by certain state, county and federal (CIA) government agencies.
The Boston Globe of Sunday, March 3, Section K (“Ideas” is the name of that section), leads with the following headline: “ROBOTS ON TRIAL—As machines get smarter – and sometimes cause harm – we’re going to need a legal system that can handle them.” In one of the few articles I have seen that appropriately ignores the false distinction between robots and drones, we learn a lot about the ubiquity of the present dialog about machine liability: Harvard Law has a course on “robot rights” (leave it to Harvard to frame everything in terms of inherent rights), many universities host conferences on robotic issues, and numerous books are being written (look for Gabriel Hallevy’s upcoming “When Robots Kill,” and the more austerely titled book by philosophy professor Samir Chopra, “A Legal Theory for Autonomous Artificial Agents”).
My purpose here is to highlight what I consider an underappreciated dialog about THE central issue: what system of laws ought to apply to machines operating outside human control. Some of the popular dialog focuses on “robots” and some on “drones,” but such a distinction interferes with proper analysis: we have machines that can kill or cause harm, accidentally or on purpose. Do you take the machine to Drone Court, as Prickly City suggests, or do you take the manufacturer, or the last human to set the machine on its course, out to the tool shed and tan its corporate or personal hide?
The next post, to follow within a few days, will detour into what I maintain is the diversion caused by our cultural anthropomorphization of the machines we call “robots,” and its possible ramifications for the way we end up treating autonomous, non-human-controlled airplanes, cars, border guards, household servants and electronic girlfriends, all of which should be treated exactly the same because they are all just alloys, motors and computer chips.