A Machine by Any Other Name…

By and large, drones look like drones. They are small airplanes, helicopters, missiles. Where there is an exception (see the photographs of bird-like and insect-like drones in the March 2013 issue of National Geographic), they nonetheless do not look like human beings at all. And drones do not have human-like personalities.

Not so for those machines we commonly call “robots.” Some do look like (and in fact are) vacuum cleaners; some work on assembly lines and look just like the machines they are; some are wholly functional in appearance, for example those with military applications (e.g., the so-called “sentry robots” used to police the Korean Demilitarized Zone).

But by and large, human beings try to make robots look like human beings, or at least act like them.

Humanizing Machines

It is beyond my ability to know why this is so. I speculate that as robots come into greater contact with human beings and take on more human-like functions, we feel more comfortable if we remake them in our own image. This urge toward the anthropomorphic is deeply rooted in one of our greatest cultural educators: the movies.

I first noticed this irresistible urge to humanize robots while working on a case about fifteen years ago in Pittsburgh. A robotics team was developing a device that would provide automated directions in museums, airports, and other public spaces. The functionality of the robot had been easily established; its voice recognition was robust. Tremendous effort, however, was being made to imbue the machine with human-like attributes: first, it had to look like a human being; second, it needed a sense of humor and a touch of sarcasm in its pre-programmed patter in order to satisfy the designers (and presumably the customers).

The fourth post in this series will make the argument that all machines (drones, robots, or whatever we call them) should be subject to the same system of law. This becomes more important the more “autonomous” the function of that machine becomes. By autonomous, in this context, we mean that the machine, once deployed by human beings, makes its own decisions. A machine that cannot make its own decisions, or a machine for which the ultimate decision-making power is reserved to a human being who chooses to push or not to push the button, is not the kind of machine we are talking about.

The argument against giving machines total autonomy is that they lack the requisite controls to provide the “human element” in the decisional process. Many believe it is impossible to build the complexity of human judgment, entwined as it is with emotion, into a machine, and that this conclusion should be reflected in the laws that will control liability for errant machines.

I am fearful, however, that we will end up with two different systems of law relating to machines that are improperly categorized as different: drones vs. “robots.”

Are You Ready for Your Close-up, R2D2?

The reason is that we are acculturated to view robots differently, substantially by reason of the movies.  A brief anecdotal summary follows.

We start with the Terminator series. Though theoretically emotionless, the reformed principal terminator (the Schwarzenegger character) is ultimately taught emotion and compassion, primarily by a child. It should be noted that every terminator, good and evil, looks exactly like a human being. These machines don’t look like machines. They look like, and we are invited to relate to them as if they were, human beings.

In the classic movie Blade Runner, it is virtually impossible to distinguish between the robots (“skin jobs” in the movie’s nomenclature) and real human beings. The principal robotic character, who saves the life of the hero at the last moment even though they have been locked in mortal combat, is perceived as having learned to revere life itself and, as his dying act, chooses not to take the life of another. The female “lead” skin job, a rather beautiful young woman, ends up running away with the hero. The hero knows she is a skin job, and his prior job was to kill skin jobs, yet he becomes so emotionally involved that they end up as a couple, literally flying off into a sun-drenched Eden.

In the movie Artificial Intelligence, the young robot is embedded in a family and shows true emotion because he is discriminated against and distrusted for being a robot. The tears look real; the crier is nonetheless merely a machine.

Even when movie robots are not made to look like human beings, we feel compelled to instill in them human emotions or patterns. In the Star Wars movies, the non-human-looking robot R2D2 is given human personality and human reactions. Even the ultimate disembodied robot, HAL in 2001: A Space Odyssey, ends up humanized. The disembodied HAL (the computer built into the space vehicle itself, with no separate identifiable physical presence) has gone rogue and must be unplugged. As Dave decommissions HAL by pulling out his circuits, one by one, HAL’s unemotional voice takes on a human tone, and the lines given to HAL as he is slowly disconnected are pointedly emotional: “Dave, my mind is going. I can feel it”; near the end of his total disconnection, he sings “Daisy, Daisy.”

The closer we engineer robots to seem human, the more likely we are to view them as human. If this leakage of perception spills over into our legal system, creating dual views of what is “just” and drawing a distinction between a flying robot that looks like an airplane and carries a warhead, on the one hand, and a “skin job” who serves us food and babysits our children, on the other, we will be missing a key perceptual element that is a precursor of an appropriate legal system. We will be forgetting that they are all simply machines.

The rubber hits the road, in terms of legal systems, as we move to what are known as “autonomous” machines. A machine that operates without ongoing human direction, and that is permitted to make its own “decisions,” will put to the test our ability to remember that the robot and the drone are the same; we call the drone an “it,” yet we have a tendency to call the robot a “he” or a “she.” The hoped-for takeaway from this third post is the following: the robot is an “it,” just like the self-directed missile.
