A Machine by Any Other Name…

By and large, drones look like drones. They are small airplanes, helicopters, missiles. Where there is an exception (see the photographs of bird-like and insect-like drones in the March 2013 National Geographic), they nonetheless do not look like human beings at all. And drones do not have human-like personalities.

Not so for those machines we commonly call “robots.” Some do look like (and in fact are) vacuum cleaners; some work on assembly lines and look just like the machines they are; some are wholly functional in appearance, for example those with military applications (e.g., the so-called “sentry robots” used to police the Korean demilitarized zone).

But by and large, human beings try to make robots look like human beings, or at least act like them.

Humanizing Machines

It is beyond my ability to know why this is so. I speculate that as robots come into greater contact with human beings and fulfill more human-like functions, we feel more comfortable remaking them in our own image. This urge to the anthropomorphic is deeply rooted in one of our greatest cultural educators: the movies.

I first noticed this irresistible urge to humanize robots while working on a case about fifteen years ago in Pittsburgh. A robotics team was working on a device that would provide automated directions in museums, airports and other public spaces. The functionality of the robot had been easily established; its voice recognition was robust. However, tremendous effort was being devoted to imbuing this machine with human-like attributes: first, it had to look like a human being; second, it needed a sense of humor and a touch of sarcasm in its pre-programmed patter in order to satisfy the designers (and presumably the customers).

The fourth post in this series makes the argument that all machines (drones, robots or whatever we call them) should be subject to the same system of law. This becomes more important the more “autonomous” the machine becomes. By autonomous, in this context, we mean that the machine, once deployed by human beings, makes its own decisions. A machine that cannot make its own decisions, or one where the ultimate decision-making power is reserved to a human being who chooses to push or not push the button, is not the kind of machine we are talking about.

The argument against giving machines total autonomy is that they lack the requisite controls to provide the “human element” in the decisional process. Many believe it is impossible to build the complexity of human judgment, entwined with emotion, into a machine, and that this conclusion should be reflected in the laws that will control liability for errant machines.

I am fearful, however, that we will end up with two different systems of law relating to machines that are improperly categorized as different: drones vs. “robots.”

Are You Ready for Your Close-up, R2D2?

The reason is that we are acculturated to view robots differently, substantially by reason of the movies.  A brief anecdotal summary follows.

We start with the Terminator series. Theoretically emotionless, the reformed principal terminator (the Schwarzenegger character) is ultimately taught emotion and compassion, primarily by a child. It should be noted that every terminator, good and evil, looks exactly like a human being. These machines don’t look like machines. They look like, and we are invited to relate to them as if they were, human beings.

In the classic Blade Runner movie, it is virtually impossible to distinguish between the robots (“skin jobs” in the movie’s nomenclature) and real human beings. The principal robot character, who saves the life of the hero at the last moment even though they have been locked in mortal combat, is perceived as having learned to revere life itself and, as its dying act, chooses not to take the life of another. The female “lead” skin job, a rather beautiful young woman, ends up running away with the hero. The hero knows she is a skin job, and his prior job was to kill skin jobs, yet he becomes so emotionally involved that they end up as a couple, literally flying off into a sun-drenched Eden.

In the movie Artificial Intelligence, the young robot is embedded in a family and shows true emotion when he is discriminated against and distrusted for being a robot. The tears look real; the crier is nonetheless merely a machine.

Even when movie robots are not made to look like human beings, we feel compelled to instill in them human emotions or patterns. In the Star Wars movies, the non-human-looking robot R2D2 is given human personality and human reactions. Even the ultimate disembodied robot, HAL in 2001: A Space Odyssey, ends up humanized. The disembodied HAL (the computer built into the space vehicle itself, with no separate identifiable physical attributes) has gone rogue and must be unplugged. As Dave decommissions HAL by pulling out his circuits, one by one, HAL’s unemotional voice takes on a human tone, and the lines given to HAL as he is slowly disconnected are pointedly emotional: “Dave, I don’t feel very well;” then, near the end of his total disconnection, he sings “Daisy, Daisy.”

The closer we engineer robots to seem human, the more likely we are to view them as human. If this leakage of perception pours over into our legal system, creating dual views of what is “just” and drawing a distinction between a flying robot that looks like an airplane and carries a warhead, on the one hand, and a “skin job” who serves us food and babysits our children on the other, we will be missing a key perceptual element which is a precursor of an appropriate legal system. We will be forgetting that they are all simply machines.

The rubber hits the road, in terms of legal systems, as we move to what are known as “autonomous” machines. A machine which is without ongoing human direction, and which is permitted to make its own “decisions,” will put to the test our ability to remember that the robot and the drone are the same; we call the drone an “it,” yet we have a tendency to call the robot a “he” or a “she.” The hoped-for takeaway from this third post is the following: the robot is an “it,” just like the self-directed missile.

Drones, Robots, Laws and Analytical Confusions

This post is the first of four which, in the aggregate, address the “drone/robot” issue. What issue, you ask?

The popular press is saturated with discussion of drones. The discussion is ubiquitous, framed in terms of rights of privacy, constitutional rights, humanitarian considerations, risks of dictatorship and the definition of proper foreign policy. We will quickly, albeit anecdotally, survey this discussion in the next (second) post.

The third post traces the artificial dichotomy between drones and robots, a culturally driven distinction that causes us to fail to engage the legal and moral issues presented by each, which are the same. I propose an anecdotal approach to understanding the role of movies in creating this false cultural distinction.

The fourth and last post touches upon some of the legal issues presented by drones and robots. The discussions generally track the artificial division noted above, but at base present the same issues of law: how does domestic law address robots which injure and kill; how does international law address the robots we call drones which injure and kill; and what, if any, global prohibitions should we attempt to impose to prevent autonomy of action on the part of machines?

You should note that all domestic and international lawyers considering these matters (at least, all I have read) believe that current laws are wholly out of tune with 21st century issues such as these. I suggest that few observers are framing the matter in terms of a common definition: what is the “law” of machines that have functional autonomy?

These posts are not final or even detailed analyses. Hopefully they will serve to foster discussion, not attract mere critique. We have a common problem here of the most fascinating kind: science, human nature and the law stand yet again at radically different evolutionary places. As the science seems unstoppable, human beings and their legal systems had better start thinking about these matters right now.

Drone Court

In the politically sensitive comic strip Prickly City a few days ago, a small drone is seen chasing a coyote across a vaguely desert-like terrain.  The coyote complains in effect “I know I don’t have identification papers but I’m a coyote.  I COME from here.”  The drone unthinkingly continues its pursuit.

The March 7 strip finds a conversation about the propriety of such use of drones.  The protagonist objects that there is no due process or rule of law in sending drones after people and demands “protections to make sure you don’t just drone people because you don’t like them.”  The response is that indeed such protections exist: “Drone Court.”

The morning papers of the same date carry news of Senator Rand Paul filibustering Obama’s designee as CIA chief, John Brennan, until the administration commits to never using drones to kill noncombatant Americans.

Press and television coverage has for many months been saturated with stories of the use of drones in the war against terror, although these drones seem to be of the non-autonomous variety; their deployment and functions seem to be controlled by human beings, albeit at remote locations.

The current (March 2013) issue of National Geographic carries a surreal article, replete with creepy pictures of creepy drones in the form of moths and hummingbirds, entitled “The Drones Come Home.” Noting that Obama signed a law last year requiring the FAA to open US airspace to drones by September 30, 2015, the article traces the discreet but growing use of seemingly unarmed but spying drones by certain state, county and federal (CIA) governmental agencies.

The Boston Globe of Sunday, March 3, Section K (“Ideas” is the name of that section), leads with the following headline: “ROBOTS ON TRIAL—As machines get smarter – and sometimes cause harm – we’re going to need a legal system that can handle them.” In one of the few articles I have seen that appropriately ignores the false distinction between robots and drones, we learn a lot about the ubiquitous nature of the present dialog about machine liability: Harvard Law has a course on “robot rights” (leave it to Harvard to frame everything in terms of inherent rights), many universities host conferences on robotic issues, and numerous books are being written (look for Gabriel Hallevy’s upcoming “When Robots Kill,” and the more austerely titled book by philosophy professor Samir Chopra entitled “A Legal Theory for Autonomous Artificial Agents”).

My purpose here is to highlight what I consider to be the underappreciated dialog being conducted about THE central issue here: what system of laws ought to be applied to machines when they are outside human control. Some of the popular dialog focuses on “robots” and some on “drones,” but such a distinction interferes with a proper analysis: we have machines here that can kill or cause harm, accidentally or on purpose. Do you take the machine to Drone Court, as suggested by Prickly City, or do you take the manufacturer, or the last human to set the machine on its course, out to the tool shed and tan its corporate or personal hide?

The next post, to follow in the next few days, will detour into what I maintain is the diversion caused by our cultural anthropomorphization of the machines we call “robots,” and its possible ramifications for the way in which we end up treating autonomous, non-human-controlled airplanes, cars, border guards, household servants and electronic girlfriends—all of which should be treated exactly the same because they are all just alloys, motors and computer chips.

Let the Sun Shine In

Many of us are vaguely aware of the so-called Sunshine Act, part of the ObamaCare legislation.  Briefly put, this Federal statute requires manufacturers of medical devices and pharmaceuticals to make public disclosure of many payments to licensed physicians and to academic research hospitals (but not other hospitals).

The statute is enormously complex; it is one of those statutes that, regrettably, likely requires manufacturers, physicians and research hospitals to seek the advice of counsel.  Final rules were promulgated by the Centers for Medicare and Medicaid Services (“CMS”) earlier this month.  In very broad outline:

  • Covered Manufacturers must start collecting data as of August 1st of this year.
  • By March 31, 2014, and for every full year thereafter, a covered manufacturer must report payment information to CMS.
  • Annually, CMS will publish a report, for the whole world to see, naming those who paid monies to physicians and academic hospitals in connection with any “Covered Product.”

There are granular and specific rules concerning Group Purchasing Organizations (“GPOs”) and physician-owned distributors (“PODs”). And with respect to all else, the devil is in the details.

Covered Products include any drug, device, biologic or medical supply for which payment is “available” under Medicare, Medicaid or CHIP (the Children’s Health Insurance Program). Covered Manufacturers do not include distributors or wholesalers who do not hold title to Covered Products, or manufacturers producing for internal use only (including use by the manufacturer with its own patients).

There must be reporting of most payments or other transfers of value, and value is measured as “on the open market.”  Payments may be made directly by a manufacturer, or through a third party instructed or controlled by a manufacturer; but manufacturers need report only those things of which they have actual or constructive knowledge.

There are some rather bizarre exceptions for providing things of value worth less than $10; picture a medical conference where a manufacturer can provide a tuna fish sandwich but not a lobster roll. There are also exceptions for certain but not all educational materials, certain but not all product samples (or coupons for product samples) if utilized by patients, and for discounts or rebates on products purchased by doctors. There are also rules concerning how manufacturers can avoid reporting while sponsoring accredited or certified continuing medical education programs under the auspices of five specific professional organizations (such as the American Medical Association).
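To make that screening concrete, below is a minimal first-pass sketch of the “is this reportable?” logic, written in Python. Everything in it is hypothetical and invented for illustration (the Payment structure, the category names, the order of the tests); the actual CMS rules are far more granular, and nothing here is legal advice.

    from dataclasses import dataclass

    @dataclass
    class Payment:
        description: str
        value_usd: float   # value measured "on the open market"
        category: str      # e.g. "meal", "qualifying_educational_materials"

    # Exemptions mentioned above -- each is narrower in the real rules.
    EXEMPT_CATEGORIES = {
        "qualifying_educational_materials",
        "patient_product_sample",
        "purchase_discount_or_rebate",
    }

    SMALL_ITEM_THRESHOLD = 10.00   # the tuna-fish-sandwich exception

    def is_reportable(p: Payment) -> bool:
        """Rough first-pass screen: does this transfer of value get reported?"""
        if p.category in EXEMPT_CATEGORIES:
            return False
        if p.value_usd < SMALL_ITEM_THRESHOLD:
            return False
        return True

    print(is_reportable(Payment("tuna fish sandwich", 8.50, "meal")))  # False
    print(is_reportable(Payment("lobster roll", 24.00, "meal")))       # True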

The statutory purpose is to bring into daylight the benefits that manufacturers may provide to physicians in order to induce physicians to utilize those manufacturers’ products. This is a reporting statute; nothing is banned. It is designed to reach all sorts of benefits provided to doctors: payments, funding of projects, grants of stock, options or other evidence of ownership, lavish travel or meals, whatever.

As you might have imagined, both manufacturers and physicians are going to want to take a careful look at the information that will be published about them.  The statute provides a forty-five day period for review of proposed publications during which manufacturers, physicians and academic hospitals may review the text of proposed publication, followed by a fifteen day period to object.  If the objection cannot be resolved, the information will be published anyway (with a footnote saying that it is disputed).

Certain but not all provisions of state laws now on the books are pre-empted by this statute. Other provisions are not. Any given situation in any given state must be analyzed on its own merits. In addition to the survival of certain but not all state regulation of similar import, the anti-kickback laws continue to apply in parallel, as do other Federal laws such as the Stark Law.

Lots of information and forms can be obtained at www.cms.gov/regulations-and-guidance/legislation/national-physician-payment-transparency-program/index.html. 

Covered Manufacturers and potentially covered manufacturers need to start now to put into place data gathering procedures and technologies so as to be ready to comply starting this August.  Good luck to all.

The Craft Beer Business

CEO and Harpoon Brewery founder Rich Doyle shared his marketing plans at the February 14th breakfast at the Association for Corporate Growth/Boston.  Plans include:

  • Aggressively promoting his new Boston beer garden and restaurant, both for the general public and for private groups, with a focus on identifying Harpoon Brewery with the Boston experience. He expects 200,000 visitors at the Boston brewery in the next year; the brewery in Vermont, at the old Catamount facility, also draws about 10,000 visitors a month.

  • Use of social media; the company has hired a new director of digital marketing whose task is to replace an email approach with a Twitter/Facebook approach.

  • Beer Fests: started around 1990, at a low point in company history when Harpoon was down to five employees, the annual Beer Fest now attracts about 18,000 visitors.

  • Local involvement in charitable events and clean-up drives, with respect to which the company reaches out to its consumer base for volunteer participants.

  • Publicity within the consumer base, which is organized by, and communicated with by reference to, zip codes.

Harpoon is the only craft beer both based and brewed in Boston. Craft beers make up about 6% of the United States market, and Harpoon claims to be the eighth largest among these smaller brewers. With sales of about $50,000,000 last year, Harpoon distributes through 26 states, as far west as Texas; its growth plans do not include significant changes in either its product line or its geographic distribution, as it sees ample growth opportunities within its existing markets and product lines.

One interesting fact for beer drinkers: while bottles for craft beer presently are and will remain far more significant than aluminum cans, Doyle anticipates growth in canned distribution because cans are more mobile, and furthermore are more popular in the South than in the North.

Doyle is a reformed New York investment banker who got the idea to start a brewery by writing a brewery business plan as part of his MBA requirement.  Throughout his presentation, he took long drinks – from a water bottle.  But then again, it was a breakfast meeting.

Tall Pygmies and CEO Succession

“Never measure the tallest of the pygmies.”

This advice comes from George Davis, Boston Managing Partner of the national search firm Egon Zehnder, commenting on the appropriate way to search for a CEO following a corporate merger. Since CEOs should be selected based upon whether they fulfill future requirements for skill set and experience, it is possible that the existing CEOs and other C-level executives of both constituent companies in a merger are not “tall enough” to meet that standard; consideration should be given to looking to an outside candidate. One must avoid the “brokered deal” where, for example, one senior position is given to the CEO of one constituent and another C-level position to the CEO of the other.

Davis spoke at the February 12th meeting of the National Association of Corporate Directors/New England, which meeting focused on the role of directors in CEO succession.  Other major takeaways:

  • It is appropriate for the board to ask of a CEO: “what do you want from your board of directors that you are not getting?” (Pamela Godwin, President of Change Partners of Philadelphia and board member of Unum Group [NYSE]).

  • Boards should be proactive in driving CEO succession planning; one device to consider is an educational/training session for the board, during a retreat, where lawyers or HR professionals discuss the fiduciary role of directors in planning succession.

  • The job description for a CEO should be updated annually to reflect strategic changes in a company’s business which may alter the criteria for the best choice of CEO (William Messenger, Director at ArQule, Inc.).

  • Since changing times may require a changed strategy, to be implemented by a new CEO from outside the organization, it is essential, prior to the arrival of that new CEO, for the board to make sure that the executive ranks are educated as to the new challenges and the appropriateness of the changes that a new CEO may bring (Ellen Zane, CEO Emeritus of Tufts Medical Center).

Over the last five years, the average tenure of a CEO (based on a survey of many public and private companies) has shrunk from 7.3 years to 4.4 years.  Thus, focus on CEO succession is becoming more important.  Half the members of boards surveyed believe that their succession planning is inadequate; only 33% have a well-documented succession planning process.

The panel also agreed on the need to train internal people as possible CEO successors. Although internal executives may not make it to CEO, a plan to rotate them through different functions and (if they are board members themselves) different committees should be presented to them as building their own professional skills, and not as a step in a “horse race” to the top. The panel also agreed that CEO selection is the task of the board, but that filling slots below the CEO level is the role of the CEO, with the board asking appropriate questions to make sure that the task is being handled properly.

Corporate Minutes as Trojan Horse

If you are involved with corporate governance, if you sit on any board (including a non-profit), or if you are a fan of keeping complete minutes of meetings as an archival record of what was discussed and decided, I suggest you read my current article about how to properly record meeting minutes. It ain’t so easy as you might think…. The article appears on page 14 of the current issue of Mass Lawyers Weekly and shortly will be linked to my bio on the firm website, but if you are interested in an advance copy please just send me an email.

The key to good corporate minutes, by the way, is brevity: describe the process of discussion without recording the substance of that discussion.

Update on Foreign Corrupt Practices Act

You might want to click through to my recent article on the FCPA, which is rapidly becoming a huge problem for companies doing business overseas. It is possible for a US company to get into deep financial trouble by reason of the actions of its sales reps and overseas partners; suggestions as to risk containment are included in the text. Enjoy.

New Rules for US Patents

On March 16, the most important part of the 2011 America Invents Act takes effect, conforming the US patent system to the rest of the world. On that date, we switch to measuring patent priority based on who is first to file an application. Our old system is based on priority of invention, which meant that you got the patent if you could prove, by notebooks and the like, that you had the patentable idea first.
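A toy illustration, not legal analysis, may help fix the difference. In the Python sketch below, both applicants and all dates are invented; note how the same dispute comes out differently under the two regimes.

    from datetime import date

    # Two hypothetical applicants contesting the same invention.
    applicants = {
        "Applicant A": {"invented": date(2012, 1, 10), "filed": date(2012, 9, 1)},
        "Applicant B": {"invented": date(2012, 3, 15), "filed": date(2012, 6, 1)},
    }

    def winner(event: str) -> str:
        """Return the applicant with the earliest date for the given event."""
        return min(applicants, key=lambda name: applicants[name][event])

    print("First-to-invent (old system):", winner("invented"))  # Applicant A
    print("First-to-file (new system):  ", winner("filed"))     # Applicant B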

Other aspects of the Act have already taken effect. Third parties now can, in effect, intervene by submitting proof of prior art, thereby short-circuiting a given application as lacking novelty. We have also added a procedure by which, for nine months after a patent issues, anyone can challenge it.

What is the effect of the new system? We likely will see applications filed earlier, of course. We will see larger companies monitoring filings so as to promptly provide proof of prior art and thus try to head off patent issuances. We may see it become more difficult to obtain patents, which may further impact a difficult venture finance market. We likely will see companies avail themselves of accelerated processing, available upon payment of an additional fee. And hopefully we will see the PTO eat into its substantial backlog of applications as it opens more offices and hires more staff with its increased fee schedule.

Meanwhile the life science community has gotten in the habit of closely watching the US Supreme Court, which about a year ago (in the Prometheus case) put in doubt some patents covering methods and diagnostics.  A larger shoe may drop by this summer when the Court decides the Myriad Genetics case and addresses the patentability of genes and isolated DNA.

Financing Life Science Start-Ups

Those who despair of our ability to inspire or fund start-ups could take a lesson from Israel’s Chief Scientist Office.  Here is how the CSO does it:

* Pre-seed, the CSO may fund 100% of costs.

* Promising ideas are funded for two or three years at 85%; these companies are placed in incubators. The CSO takes no equity and is repaid, if at all, from a royalty on sales.

* The incubators are for-profit private companies that fund the other 15%, so they have skin in the game. The incubators get between 30% and 50% of the equity.

* The founders are not expected to invest their own monies. They get 50% to 70% of the initial equity. (A numeric sketch of this split follows below.)
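As promised, here is a minimal numeric sketch, in Python, of a single hypothetical incubator-stage deal under these rules. The budget figure and the equity midpoints are invented; only the percentages come from the description above.

    # Hypothetical incubator-stage budget; only the percentages below
    # come from the post, and real deals vary within the stated ranges.
    budget = 1_000_000

    cso_share = 0.85                 # CSO funds 85% of costs
    incubator_share = 1 - cso_share  # the incubator's 15% "skin in the game"

    cso_grant = budget * cso_share   # no equity; repaid, if at all, from a sales royalty
    incubator_cash = budget * incubator_share

    # Equity: founders get 50%-70%, the incubator 30%-50%; use midpoints.
    founder_equity, incubator_equity = 0.60, 0.40

    print(f"CSO grant:       ${cso_grant:,.0f} (no equity taken)")
    print(f"Incubator cash:  ${incubator_cash:,.0f} for {incubator_equity:.0%} of the equity")
    print(f"Founders invest: $0 for {founder_equity:.0%} of the equity")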

What kinds of life science companies are incubated? About 55% are med device companies, and 30% are biopharmas. About 200 companies now are resident in 25 or so government-supported incubator facilities. Recent experience is that about 90% of recently incubated companies have been successful in obtaining additional outside financing, reflecting the improving quality of the enterprises accepted into the program.

More mature companies may continue to get CSO funds, ranging from 30% to 75% of needs.  Additionally, the incubators themselves are free to participate in later equity rounds to increase or protect their stakes.

Israel invests about 4.5% of its GDP in life science, by far the highest percentage of any country and about double the US percentage. 48% of incubated life science companies (all incubated over the last six years or so) have reached the revenue stage; 60 life science companies are listed on the Tel Aviv stock exchange (where, I am told [having no direct experience], listing of very small companies is typical).

(Thanks to David Barone of Boston MedTech Advisors for some of the information and all of the statistics cited above. David was speaking at a presentation by emerging Israeli med device companies held January 29 at Newton-Wellesley Hospital.)