A Legal System for Machines

In prior posts, we have established that there is substantial current dialogue about the role of certain intelligent machines; that these machines generally are viewed as falling into the categories of “drone” or “robot”; and that, as society anticipates the day when drones and robots will have true “autonomy,” pressure is mounting to establish a legal system that will address misfortunes occasioned by the actions of these machines.  It is important to stop thinking about robots in human terms and to recognize them on the same footing as drones; there is no difference between an airplane making its own decision to shoot you and a robot (which looks and sounds just like a human being) making its own decision to shoot you.

Human Rights

One significant dialogue is driven by a sensitivity to human rights.  Philosophers and the American Civil Liberties Union focus on these issues.  Machines are impinging upon our privacy, and perhaps our freedom, by tracking us down and spying on us, impairing our freedom both expressly and implicitly by making us aware that we are being watched at every moment.

These concerns primarily focus on the kinds of machines we call drones.  It is recognized that most drones currently are not autonomous.  Not only are they deployed into the world by human beings, but humans also retain some degree of control over them.

In “drone-speak,” we say that these machines have either “human-in-the-loop” or “human-on-the-loop” control.  In the first category, humans not only select the person or situation to which the machine pays attention, but also give the command to act (whether to intercept, injure, kill, or spy upon).  In the second category, the machine itself both selects the target and makes the decision to undertake the action, but a human operator can override the machine’s action.

The rubber hits the road when we are in a “human-out-of-the-loop” situation; in this state of ultimate machine autonomy, robots select targets and undertake actions without any human input.  All the human decisions lie in the history of the truly autonomous robotic device: how to build it, what it looks like, what its programming is, what its capacities are, and when and where it is put out into the world.  Once the door is opened and the “human-out-of-the-loop” robotic device starts moving among us, there is no further direct human control.
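The three “loop” categories above can be summarized as data.  Here is a minimal sketch in Python; the field names are my own shorthand for the distinctions drawn in the text, not terms of art from any statute or report:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlMode:
    """One of the three 'loop' categories; field names are illustrative only."""
    name: str
    human_selects_target: bool   # does a human pick the target?
    human_commands_action: bool  # does a human give the order to act?
    human_can_override: bool     # can a human veto the machine's action?

HUMAN_IN_THE_LOOP = ControlMode("human-in-the-loop", True, True, True)
HUMAN_ON_THE_LOOP = ControlMode("human-on-the-loop", False, False, True)
HUMAN_OUT_OF_THE_LOOP = ControlMode("human-out-of-the-loop", False, False, False)
```

On this framing, full autonomy is simply the last row: every human check is absent.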

The civil liberties critics note that the use of military drones outside the United States in fact hurts our national security and our moral standing, but they also observe a linkage between drones deployed abroad for military applications and their impact within the United States.

The Department of Defense “passes down its old stuff to its little siblings,” meaning that DOD gifts military equipment to domestic law enforcement agencies without charge.  A primary recipient is the much-criticized Department of Homeland Security.  Indeed, public records suggest that Department of Homeland Security drones are ready to be equipped with weapons, although the Department claims that currently all its drones are unarmed.  (Source: column by Kade Crockford in the online edition of The Guardian, as guest blogger for Glenn Greenwald, March 5, 2013.)

The surveillance accompanying the domestic use of drones presents problems under the Fourth Amendment to the Constitution, which secures people against unreasonable searches and seizures.  Obviously, one does not issue a drone a warrant to spy upon all of us.  Beyond the Fourth Amendment argument, to the extent there is a right of privacy under United States law, drones negatively impact that privacy.  The public has a right to know what rules our governments are bound by in utilizing machines to spy upon us and, ultimately (with autonomous machines), to police us.

We are tracked by license plates, by cell phones, by iris scans.  (We are told that these are no different from a fingerprint, although a fingerprint or DNA swab is obtained after there has been an alleged criminal act, while machine surveillance is by definition “pre-act”; see particularly the Spielberg movie “Minority Report,” in which the government arrests people in advance of the crimes they would ultimately commit.)

Twenty states, including Massachusetts, are considering legislation to limit the domestic use of drones.  Certain cities also are taking action.  The focus, on constitutional grounds, is privacy and freedom from unreasonable search.  As it is clear that some drones shortly will be (if they are not already) autonomous and will function as machines with “human-out-of-the-loop” capacity, the world must evolve toward imposing functional controls on the use of drones.

Killer Robots

The most comprehensive and cogent articulation of the legal issues presented by autonomous machines is contained in a report by the International Human Rights Clinic, part of the Human Rights Program at Harvard Law School.  This November 2012 report (the “Report”) is entitled “Losing Humanity: The Case Against Killer Robots.”

The central theme of the Report is that military and robotics experts expect that fully autonomous machines could be developed within the next twenty to thirty years.  As the level of human supervision over these machines decreases, what laws should be enacted to protect people from actions committed by these machines?  Although the focus of the Report is primarily military (not just drones; robotic border guards, for example, are given attention), it is important to remember that a drone is a robot is a machine; the law should develop the same way whether we choose to package the problem machine into something that looks like a small airplane or something that looks like you and me.

Where does the Report come out?  It concludes that no amount of programming, artificial intelligence, or any other possible control of a fully autonomous machine can mimic human thought sufficiently to give us the kinds of controls that the “dictates of public conscience” provide through human operators.  All governments should ban the development and production of fully autonomous weapons; technology moving toward machine autonomy should be reviewed at the earliest possible stage to make sure there is no slippage; and roboticists and robotics manufacturers should establish a professional code of conduct consistent with ensuring that legal and ethical concerns are met.

The focus of the Report is primarily military.  I suggest that similar kinds of thinking and constraints have to be applied to what we commonly call “robots,” and particularly to the human-like robots we will tend to surround ourselves with, because a truly autonomous machine is just that: a machine that can make mistakes unmediated by human controls.

International Law re Weapons

There is international law concerning the utilization of all weapons.  Article 36 of Additional Protocol I to the Geneva Conventions places upon a country developing new weaponry an obligation to determine whether its employment, in some or all circumstances, would violate international law.  Commentary notes that autonomous machines by definition take human beings out of the loop, and that we run the risk of being mastered by the technology we have deployed (remember in The Terminator when Skynet became “self-aware”).

Particularly addressing the proper line of thought, which is to view airplane-like drones and humanized robots the same, the Report states (at 23): “reviews [of nascent technology] should also be sensitive to the fact that some robotic technology, while not inherently harmful, has the potential one day to be weaponized.  As soon as such robots are weaponized, states should initiate their regular, rigorous review process.”

There is much discussion that autonomous robots will be unable to distinguish between civilian populations and combatants, and will risk harming civilians in violation of the Geneva Conventions.  Particular sensitivity to this risk is raised by the so-called Martens Clause, which is over a century old and derived from prior international conventions; it charges governments with complying not only with international law but also with “established custom, from the principles of humanity and from the dictates of public conscience.”

How would a machine comply with the Geneva Conventions?  It would have to be programmed to recognize international humanitarian law as subtly articulated in various sources, including the Geneva Conventions and the “principles of humanity and . . . dictates of public conscience.”  It would have to determine whether a particular action is prohibited.  It would then have to determine whether such action, if permissible by law, is also permissible under its operational orders (its mission).  It would have to determine whether, in a military setting, a given action met the standard of “proportionality” of response.  It would need to use an algorithm that combines statistical data with “incoming perceptual information” to evaluate a proposed strike on utilitarian grounds.  A machine could act only if it found that the action satisfied all ethical constraints, minimized collateral damage and was necessary from the mission standpoint.
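The sequence of determinations just described can be sketched as a simple gatekeeping function.  This is an illustration only, with every dictionary key a hypothetical placeholder; the Report’s argument is precisely that reducing these legal and ethical judgments to computable checks may be impossible:

```python
def machine_may_act(action, mission):
    """Return True only if every constraint described in the text is satisfied.
    All dictionary keys are hypothetical placeholders, not fields of any
    real targeting system."""
    # 1. Is the action prohibited by international humanitarian law
    #    (Geneva Conventions, "dictates of public conscience")?
    if action["prohibited_by_law"]:
        return False
    # 2. Is the action permissible under the operational orders (the mission)?
    if action["target"] not in mission["authorized_targets"]:
        return False
    # 3. Does the action meet the "proportionality" standard?
    if action["expected_harm"] > action["military_value"]:
        return False
    # 4. Utilitarian check combining statistical data with incoming
    #    perceptual information, collapsed here to a collateral-damage bound.
    if action["expected_collateral_damage"] > mission["collateral_tolerance"]:
        return False
    # 5. Act only if the action is also necessary from the mission standpoint.
    return action["necessary_for_mission"]
```

Each early `return False` is one of the text’s sequential determinations; the hard part, of course, is not the control flow but computing the inputs.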

It is argued that machines might be able to apply these standards better than human beings, because human beings can be carried away by emotion while machines cannot.  This is a balancing exercise.  There is discussion as to whether artificial intelligence can provide sufficient cognitive and judgmental powers to machines to approximate human capacities in this area.  The Report concludes that this is impossible.  The Report concludes that we are much safer having a human being either “in the loop” or “on the loop”; in the words of roboticist Noel Sharkey: “humans understand one another in a way that machines cannot.  Cues can be very subtle, and there are an infinite number of circumstances. . . .”  Remember the computer that ran out of control in the movie WarGames and almost set off global thermonuclear war?  That was a pretty smart and autonomous machine.  Remember the machines that set off thermonuclear war in Dr. Strangelove?  How smart can our machines be?  How much risk can we absorb?

The Report also notes that fully autonomous machines would “be perfect tools of repression for autocrats seeking to seize or regain power.”  Not a friendly thought.  The Report concludes that we should not develop or permit machines which are autonomous weapons.

Peaceful Robots

The same thinking carries over to those machines we call “robots,” which one day will be moving among us, bearing substantially human form and preprogrammed “personalities.”  Suppose a robot programmed to provide medical aid is accidentally impeded by a human being and judges that its mission to provide immediate medical assistance justifies eliminating the intervening human being.  Or suppose a robot (or a drone) sees two children carrying realistic toy guns and running toward a sensitive location, chased by a mother calling out, “Harry, Joe, please stop, you know I don’t like seeing you play with guns.”  What if the machine misreads that situation in a way that a human being would not?

I submit that a legal system must be imposed with respect to all autonomous machines.

Private Law

Our discussion until now has addressed what I will call public law: what international law and constitutional law ought to do with respect to the control or prohibition of dangerous autonomous machines.  What about private law, the financial liability that courts should ascribe when a machine runs amok?

We currently have private tort liability laws.  These laws generally provide that a manufacturer is held strictly liable for any damage caused by a machine that it produces and that is inherently dangerous.  An injured party need not prove negligence of any sort; one need only prove that the dangerous machine was manufactured by the company.  Such a legal rule creates a wide variety of problems when applied to autonomous machines.

First, strict liability doesn’t make much sense when we are talking about machines that are, by definition, wholly autonomous.  Furthermore, no manufacturer would produce any autonomous machine if this were the rule of law.

Next, who is the manufacturer?  Is it the person who does the nuts and bolts?  Is it the person who does the programming?  Is it the person, the ultimate user, who defines the parameters of functionality (mission) of this combination of nuts, bolts and programs?  Or, is it not logical to hold responsible the last human being, or the employer of the last human being, who turns on the switch that permits an autonomous machine to move out into the world?

Alternatively, if there is a problem with a machine, should we actually look to see whether there is a design flaw in the manufacturing or in the programming?  This is different from affixing absolute liability, without such inquiry, on the theory that it is an inherently dangerous device.  How many resources would it take to answer that question for a sophisticated autonomous device?

What do you do with the machine itself? Destroy it?  Destroy all similar machines?

What do you do about the potential monetary liability of governments?  For example, our federal government is immune from being sued on a tort theory for any accident that is occasioned during the exercise of governmental powers.  Would this rule not automatically take the federal government and all its agencies off the hook if it sends out into the world a machine that kills or creates damage as part of the discharge of its governmental functions?

Again, the Report concludes that you simply must not develop autonomous weapons, and that you must prohibit governments and manufacturers from doing so.  If that be the rule, I am suggesting that we understand that there is virtually no step between a drone/weapon and a human-appearing robot with capacity to do significant harm.

Finally, to the extent we do in fact end up with autonomous machines flying over our heads, or standing next to us at the bar and ordering an IPA (even though the beer will drain into a metallic waste-disposal stomach), what should we do about the private ordering of the law?  I believe that the United States government should establish an insurance program, funded by manufacturers, programmers, designers, and all users of autonomous and semi-autonomous devices, to provide “no-fault coverage” as an efficient method of dealing with the liabilities that all of us in society are willingly creating as our science moves forward, without regard to the antiquity of our thinking with respect to both public law and private law.
