A Legal System for Machines

In prior posts, we have established that there is substantial current dialogue about the role of certain intelligent machines; that these machines are generally viewed as falling into the categories of “drone” or “robot”; and that, as society anticipates the day when drones and robots will have true “autonomy,” pressure is mounting to establish a legal system to address misfortunes occasioned by the actions of these machines.  It is important to stop thinking about robots in human terms and to recognize them on the same footing as drones; there is no difference between an airplane making its own decision to shoot you and a robot (which looks and sounds just like a human being) making its own decision to shoot you.

Human Rights

One significant dialogue is driven by a sensitivity to human rights.  Philosophers and the American Civil Liberties Union focus on these issues.  Machines impinge upon our privacy, and perhaps our freedom, by tracking us down, spying on us, and, both expressly and implicitly, reminding us that every moment we are being watched.

These concerns primarily focus on the kinds of machines we call drones.  It is recognized that most drones currently are not autonomous.  Not only are they put forth to function in the world by human beings, but some measure of human control also remains.

In “drone-speak,” we say that these machines have either “human-in-the-loop” or “human-on-the-loop” control.  In the first category, humans not only target the person or situation to which the robot is paying attention, but also give the command to act (whether it is to intercept, injure, kill or spy upon).  In the second category, the robotic machine itself both selects the target and makes the decision to undertake the action, but a human operator can override the robot’s action.

The rubber hits the road when we are in a “human-out-of-the-loop” situation; in this ultimate machine autonomy, robots select targets and undertake actions without any human input.  All the human decisions lie in the history of the truly autonomous robotic device: how to build it, what it looks like, what its programming is, what its capacities are, and when and where it is put out into the world.  Once the door is opened and the “human-out-of-the-loop” robotic device starts moving among us, there is no more direct human control.
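For readers who think in code, the three oversight models reduce to a single question: where does the human gate sit?  The following is a minimal, purely illustrative sketch (my own shorthand, not anything drawn from actual drone control systems); the class name, function name and boolean inputs are assumptions for illustration only.

```python
from enum import Enum


class OversightMode(Enum):
    """The three control models described above (illustrative labels only)."""
    HUMAN_IN_THE_LOOP = "in"       # humans pick the target and give the command to act
    HUMAN_ON_THE_LOOP = "on"       # the machine selects and decides; a human may override
    HUMAN_OUT_OF_THE_LOOP = "out"  # the machine selects and acts with no human input at all


def may_act(mode: OversightMode, human_command: bool = False, human_override: bool = False) -> bool:
    """Return True if the machine may carry out its chosen action under the given model."""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        return human_command        # nothing happens until a human says so
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        return not human_override   # the machine proceeds unless a human intervenes
    return True                     # fully autonomous: no human gate remains
```

The point of the sketch is its final line: once we are “out of the loop,” the function never consults a human at all.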

The civil liberties critics note that military drones deployed outside of the United States in fact hurt our national security and our moral standing, but they also observe a linkage between drones deployed abroad for military purposes and their impact within the United States.

The Department of Defense “passes down its old stuff to its little siblings,” which means that DOD gifts military equipment to domestic law enforcement agencies without charge.  A primary recipient is the much-criticized Department of Homeland Security.  Indeed, public records suggest that Department of Homeland Security drones are ready to be equipped with weapons, although the Department claims that currently all their drones are unarmed.  (Source: column by Kade Crockford in the online “The Guardian,” as guest blogger for Glenn Greenwald, March 5, 2013).

The surveillance accompanying the domestic use of drones presents problems under the Fourth Amendment to the Constitution, which secures people against unreasonable searches and seizures.  Obviously, no one issues a drone a warrant to spy upon all of us.  Beyond the Fourth Amendment argument, to the extent there is a right of privacy under United States law, drones negatively impact that privacy.  The public has a right to know what rules bind our governments in the utilization of machines to spy upon us and, ultimately (with autonomous machines), to police us.

We are tracked by license plates, by cell phones, by iris scans.  (We are told that these are no different from a fingerprint, although a fingerprint or DNA swab is obtained after there has been an alleged criminal act while machine surveillance is by definition “pre-act;” see particularly the Spielberg movie “Minority Report” in which the government arrests people in advance of the crimes they will ultimately be committing).

Twenty states, including Massachusetts, are considering legislation to limit the use of domestic drones.  Certain cities also are taking action.  The focus is privacy and freedom from unreasonable search, on constitutional grounds.  As it is clear that some drones shortly will be (if they are not already) autonomous and will function as “human-out-of-the-loop” machines, the world must evolve toward imposing functional controls on the use of drones.

Killer Robots

The most comprehensive and cogent articulation of the legal issues presented by autonomous machines is contained in a report by the International Human Rights Clinic, part of the Human Rights Program at Harvard Law School.  This November 2012 report (the “Report”) is entitled “Losing Humanity: The Case Against Killer Robots.”

The central theme of the Report is that military and robotics experts expect that fully autonomous machines could be developed within the next twenty to thirty years.  As the level of human supervision over these machines decreases, what laws should be enacted to protect people from actions committed by these machines?  Although the focus of the Report is primarily military (not just drones; robotic border guards, for example, are given attention), it is important to remember that a drone is a robot is a machine; the law should develop in the same way whether we choose to package the problem machine as something that looks like a small airplane or as something that looks like you and me.

Where does the Report come out?  It concludes that there is no amount of programming, artificial intelligence or any other possible control of a fully autonomous machine that can mimic human thought sufficiently to give us the kinds of controls that the “dictates of public conscience” provide to human operators.  All governments should ban the development and production of fully autonomous “weapons;” technology moving toward autonomy of machines should be reviewed at the earliest possible stage to make sure there is no slippage; and roboticists and robotics manufacturers should establish a professional code of conduct consistent with ensuring that legal and ethical concerns are met.

The focus of the Report is primarily military.  I suggest that similar kinds of thinking and constraints have to be applied toward what we commonly call “robots” and particularly toward the human-like robots we will tend to surround ourselves with, because a truly autonomous machine is just that: a machine that can make mistakes un-mediated by human controls.

International Law re Weapons

There is international law concerning the utilization of all weapons.  Article 36 of Additional Protocol I to the Geneva Conventions places upon a country developing new weaponry an obligation to determine whether its employment, in some or all circumstances, would violate international law.  In commentary, it is noted that autonomous machines by definition take human beings out of the loop, and we run the risk of being mastered by the technology we have deployed (remember in The Terminator when Skynet became “self-aware”).

Particularly addressing the proper line of thought, which is to view airplane-like drones and humanized robots the same, the Report states (at 23): “reviews [of nascent technology] should also be sensitive to the fact that some robotic technology, while not inherently harmful, has the potential one day to be weaponized.  As soon as such robots are weaponized, states should initiate their regular, rigorous review process.”

There is much discussion that autonomous robots will be unable to distinguish between civilians and combatants, and will risk harming civilians in violation of the Geneva Conventions.  Particular sensitivity to this risk is raised by the so-called Martens Clause, which is over a century old and derived from prior international conventions; it charges governments with complying not only with international law but also with “established custom, from the principles of humanity and from the dictates of public conscience.”

How would a machine comply with the Geneva Conventions?  It would have to be programmed to recognize international humanitarian law as subtly articulated in various sources, including the Geneva Conventions and the “principles of humanity and . . . dictates of public conscience.”  It would have to determine whether a particular action is prohibited.  It would then have to determine whether such action, if permissible by law, is also permissible under its operational orders (its mission).  It would have to determine whether, in a military setting, a given action met the standard of “proportionality” of response.  It would need to use an algorithm that combines statistical data with “incoming perceptual information” to evaluate a proposed strike on utilitarian grounds.  A machine could act only if it found that the action satisfied all ethical constraints, minimized collateral damage and was necessary from the mission standpoint.
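Stated as pseudocode, that layered test is simply a conjunction of gates, each of which must pass before the machine may act.  The sketch below is my own illustration; every predicate is a hypothetical placeholder standing in for a judgment that, as discussed next, the Report doubts a machine can reliably make.

```python
# Illustrative placeholders only: each predicate stands in for a judgment that the
# Report argues cannot reliably be reduced to programming.

def lawful_under_ihl(action: dict) -> bool:
    # Would have to encode international humanitarian law, including the Geneva
    # Conventions and the "principles of humanity and dictates of public conscience."
    return action.get("lawful", False)

def within_mission_orders(action: dict) -> bool:
    # Is the action, if permissible by law, also permissible under operational orders?
    return action.get("within_mission", False)

def proportionate_response(action: dict) -> bool:
    # Does the action meet the military standard of "proportionality" of response?
    return action.get("proportionate", False)

def minimizes_collateral_damage(action: dict) -> bool:
    # The utilitarian evaluation combining statistical data with
    # "incoming perceptual information."
    return action.get("collateral_minimized", False)

def necessary_for_mission(action: dict) -> bool:
    # Is the action necessary from the mission standpoint?
    return action.get("necessary", False)

def machine_may_act(action: dict) -> bool:
    """The machine may act only if every legal, operational and ethical gate is satisfied."""
    return all((
        lawful_under_ihl(action),
        within_mission_orders(action),
        proportionate_response(action),
        minimizes_collateral_damage(action),
        necessary_for_mission(action),
    ))
```

The hard part, of course, is not the conjunction; it is whether any of the individual predicates can ever be filled in.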

It is argued that machines might apply these standards better than human beings, because human beings can be carried away by emotion while machines cannot.  This is a balancing exercise.  There is discussion as to whether artificial intelligence can give machines cognitive and judgmental powers sufficient to approximate human capacities in this area.  The Report concludes that it cannot.  The Report concludes that we are much safer having a human being either “in the loop” or “on the loop;” in the words of roboticist Noel Sharkey: “humans understand one another in a way that machines cannot.  Cues can be very subtle, and there are an infinite number of circumstances. . . .”  Remember the computer that ran out of control in the movie WarGames and almost set off global thermonuclear war?  That was a pretty smart and autonomous machine.  Remember the machines that set off thermonuclear war in Dr. Strangelove?  How smart can our machines be?  How much risk can we absorb?

The Report also notes that fully autonomous machines would “be perfect tools of repression for autocrats seeking to seize or regain power.”  Not a friendly thought.  The Report concludes that we should not develop or permit machines which are autonomous weapons.

Peaceful Robots

The same thinking carries over into those machines we call “robots,” which one day will be moving among us, bearing substantially human form and preprogrammed “personalities.”  Say a robot programmed to provide medical aid is accidentally impeded by a human being and judges that its mission to provide immediate medical assistance justifies eliminating the intervening human being.  Or assume that a robot (or a drone) sees two children carrying realistic toy guns and running toward a sensitive location, chased by a mother calling out “Harry, Joe, please stop, you know I don’t like seeing you play with guns.”  What if the machine misreads that situation in a way that a human being would not?

I submit that a legal system must be imposed with respect to all autonomous machines.

Private Law

Our discussion until now has addressed what I will call public law: what international law and constitutional law ought to do with respect to the control or prohibition of dangerous autonomous machines.  What about private law, the financial liability that courts should ascribe when a machine runs amok?

We currently have private tort liability laws.  These laws generally provide that a manufacturer is held strictly liable for any damage caused by an inherently dangerous machine that it produces.  An injured party need not prove negligence of any sort; one need only prove that the dangerous machine was manufactured by the company.  Such a legal rule creates a wide variety of problems when applied to autonomous machines.

First, strict liability doesn’t make much sense when we are talking about machines that are, by definition, wholly autonomous.  Furthermore, no manufacturer would produce any autonomous machine if this were the rule of law.

Next, who is the manufacturer?  Is it the person who does the nuts and bolts?  Is it the person who does the programming?  Is it the person, the ultimate user, who defines the parameters of functionality (mission) of this combination of nuts, bolts and programs?  Or, is it not logical to hold responsible the last human being, or the employer of the last human being, who turns on the switch that permits an autonomous machine to move out into the world?

Alternatively, if there is a problem with a machine, should we actually look to see whether there is a design flaw in the manufacturing or in the programming?  This is different from affixing absolute liability, without such inquiry, on the theory that the machine is an inherently dangerous device.  How many resources would it take to answer that question for a sophisticated autonomous device?

What do you do with the machine itself? Destroy it?  Destroy all similar machines?

What do you do about the potential monetary liability of governments?  For example, our federal government is immune from being sued on a tort theory for any accident that is occasioned during the exercise of governmental powers.  Would this rule not automatically take the federal government and all its agencies off the hook if it sends out into the world a machine that kills or creates damage as part of the discharge of its governmental functions?

Again, the Report concludes that you simply must not develop autonomous weapons, and that you must prohibit governments and manufacturers from doing so.  If that be the rule, I am suggesting that we understand that there is virtually no step between a drone/weapon and a human-appearing robot with capacity to do significant harm.

Finally, to the extent we do in fact end up with autonomous machines flying over our heads, or standing next to us at the bar and ordering an IPA (even though the beer will drain into a metallic waste-disposal stomach), what should we do about the private ordering of the law?  I believe that the United States government should establish an insurance program, funded by manufacturers, programmers, designers, and all users of autonomous and semi-autonomous devices, to provide “no fault” coverage as an efficient method of dealing with the liabilities that all of us in society are willingly creating as our science moves forward, heedless of the antiquity of our thinking in both public law and private law.

What is a Board Crisis Plan?

If you are sitting on a board of directors, does your board know the difference between an “emergency plan” and a “crisis plan?”

An emergency plan contains the information needed for immediate response: the phone numbers to call if there is an accident; the phone numbers of the lawyers, the police and fire departments, your applicable regulatory agencies, your PR people and insurance people, your key employees and your directors.  The emergency plan tells you who does, and who does not, speak to the press.  The emergency event is of short duration, and management needs to coordinate the moving parts in order to get through it.

A crisis is more enduring.  A crisis is the fall-out, the follow-on of an emergency.  Has the crisis physically destroyed your company or an important part of it?  Has it cut off a vital source of supply?  Has it materially undermined the reputation of your company?  Has it knocked the dickens out of your stock price?  Has it attracted litigation from shareholders, claiming that directors and management should have been attentive enough to avoid the emergency, and the ensuing crisis, in the first place?

The proper role of the board of directors, if faced with a corporate crisis, was the subject of discussion at the March 12th breakfast meeting of the National Association of Corporate Directors/New England.  The panel, consisting of three directors and a crisis management consultant, generated several useful takeaways for the board:

  • A crisis needs to be worked through step by step and may last a long time; this is the job of management;
  • Although boards of directors sometimes include a lot of “type A personalities” who want to “do something,” the rule for good directorship remains: in most instances, it is “eyes in/hands out;”
  • The role of the board of directors is to make sure that a company’s management has an emergency plan and has a crisis plan;
  • It may be appropriate to test the plans by role play, or to have them evaluated by a third party;
  • The crisis plan should have instructions for the board, including identification of roles, handling of press and other inquiries, centralized control of the flow of information, and understanding of what may happen in case of shareholder litigation or whistle-blowers;
  • The plan should provide for robust protections for personal electronic devices of directors and key people, as an ongoing crisis invites hackers to attempt to break into the information flow;
  • The plan should identify pre-established relationships with law firms having both civil defense and criminal law capacity (“in the middle of a crisis is no time to go looking for your lawyer”);
  • The plan should address the situation wherein the CEO or other key members of management are themselves the cause of the crisis; who steps up and takes what role, and does the board at that point become more proactive?
  • Speaking of lawyers, there is danger in having a lawyer be your corporate spokesperson in a crisis; it may undermine the legal communication privilege between management and counsel, and additionally the various stake-holders (employees, shareholders, investors, the community) want to hear directly from the CEO and not from an intermediary;
  • Although this may be difficult to achieve, another role of the board is to make sure that the executive staff is competent to act in crisis mode (not just in the role of building a business in normal times);
  • However, one director observed that crisis may bring out the best and the worst in people, and it is very hard to know who will rally, and who will not rally, in the face of corporate adversity.

It was also noted that emergency planning and crisis planning may not be favored by boards: the risks appear remote; the planning is a discussion of negative contingencies that occupies time and burns money without building the business; and there is pressure these days for robust enterprise risk management (designed to eliminate, in the first instance, the emergency that gives rise to a crisis).

Discussion touched upon a wide variety of emergencies that can create a crisis for a company: explosion of a manufacturing facility, materially adverse press coverage, violation of laws including the Foreign Corrupt Practices Act, and personal impropriety by the CEO, to name a few.

A Machine by any other Name…

By and large, drones look like drones.  They are small airplanes, helicopters, missiles.  Where there is an exception (see the photographs of bird-like and insect-like drones in the March, 2013 National Geographic), they nonetheless do not look like human beings at all.  And, drones do not have human-like personalities.

Not so for those machines we commonly call “robots.”  Some do look like (and in fact are) vacuum cleaners; some do work on assembly lines and look just like the machines they are; some are wholly functional in appearance, for example those with military applications (e.g., the so-called “sentry robots” utilized to police the Korean demilitarized zone).

But by and large, human beings try to make robots look like human beings, or at least act like them.

Humanizing Machines

It is beyond my ability to know why this is so.  I speculate that, as robots come into greater contact with human beings and fulfill more human-like functions, we feel more comfortable if we remake them in our own image.  This urge to the anthropomorphic is deeply rooted in one of our greatest cultural educators: the movies.

I first noticed this irresistible urge to humanize robots while working on a case about fifteen years ago in Pittsburgh.  A robotics team was working on a device that would provide automated directions in museums, airports and other public spaces.  The functionality of the robot had been easily established.  Its voice recognition functions were robust.  However, tremendous effort was being made to imbue this machine with human-like attributes: first, it had to look like a human being; second, it needed a sense of humor and a touch of sarcasm in its pre-programmed patter in order to satisfy the designers (and presumably the customers).

The fourth post in this series makes the argument that all machines (drones, robots or whatever we call them) should be subject to the same system of law.  This becomes more important the more “autonomous” the function of that machine becomes.  By autonomous, in this context, we mean that the machine, once deployed by human beings, makes its own decisions.  A machine that cannot make its own decisions, or a machine whose ultimate decision-making power is reserved to a human being who chooses to push or not push the button, is not the kind of machine we are talking about.

The argument against giving machines total autonomy is that they lack the requisite controls to provide the “human element” in the decisional process.  Many believe it is impossible to install the complexity of human judgment, entwined with emotion, into a machine, and that this conclusion should be reflected in the laws that will govern liability for errant machines.

I am fearful, however, that we will end up with two different systems of law relating to machines that are improperly categorized as different: drones vs. “robots.”

Are You Ready for Your Close-up, R2D2?

The reason is that we are acculturated to view robots differently, substantially by reason of the movies.  A brief anecdotal summary follows.

We start with the Terminator series.  Theoretically emotionless, the reformed principal terminator (the Schwarzenegger character) is ultimately taught emotion and compassion, primarily by a child.  It should be noted that every terminator, good and evil, looks exactly like a human being.  These machines don’t look like machines.  They look like, and we are invited to relate to them as if they were, human beings.

In the classic Blade Runner movie, it is virtually impossible to distinguish between the robots (“skin jobs” in the movie nomenclature) and real human beings.  The principal robotic protagonist, who saves the life of the hero at the last moment even though they have been locked in mortal combat, is perceived as having learned to revere life itself and, as its dying act, chooses not to take the life of another.  The female “lead” skin job, a rather beautiful young woman, ends up running away with the hero.  The hero knows she is a skin job, and his prior job was to kill skin jobs, yet he becomes so emotionally involved that they end up as a couple, literally flying off into a sun drenched Eden.

In the movie Artificial Intelligence, the young robot is embedded in a family and shows true emotion because he is discriminated against and distrusted for being a robot.  The tears look real; the crier is nonetheless merely a machine.

Even when movie robots are not forced to look like human beings, we feel compelled to instill in them human emotions or patterns.  In the Star Wars movies, the non-human-looking robot R2D2 is given human personality and human reactions.  Even the ultimate disembodied robot, HAL in 2001: A Space Odyssey, ends up humanized.  The disembodied HAL (the computer built into the space vehicle itself, with no separate identifiable physical attributes) has gone rogue and must be unplugged.  As Dave decommissions HAL by pulling out his circuits, one by one, HAL’s unemotional voice takes on a human tone, and the lines given to HAL as he is slowly disconnected are pointedly emotional: “Dave, I don’t feel very well;” “Daisy, Daisy . . .” [singing] near the end of his total disconnection.

The closer we engineer robots to seem human, the more likely we are to view them as human.  If this leakage of perception spills over into our legal system, creating dual views of what is “just,” making a distinction between a flying robot that looks like an airplane and carries a warhead, on the one hand, and a “skin job” who serves us food and babysits our children on the other, we will be missing a key perceptual element which is a precursor of an appropriate legal system.  We will be forgetting that they are all simply machines.

The rubber hits the road, in terms of legal systems, as we move to what are known as “autonomous” machines.  A machine which is without ongoing human direction, and which is permitted to make its own “decisions,” will put to a test our ability to remember that the robot and the drone are the same; we call the drone an “it” and we have a tendency to call the robot a “he” or a “she.”  The hoped-for takeaway from this third post is the following: the robot is an “it,” just like the self-directed missile.

Drones, Robots, Laws and Analytical Confusions

This post is the first of four which in the aggregate address the “drone/robot” issue.  What issue, you ask?

The popular press is saturated with discussion of drones.  The discussion is ubiquitous.  It is framed in terms of rights of privacy, constitutional rights, humanitarian considerations, risks of dictatorship and the definition of proper foreign policy.  We will quickly albeit anecdotally survey this discussion in the next (second) post.

The third post traces the artificial dichotomy between drones and robots, a culturally driven distinction that causes us to fail to engage  the legal and moral issues presented by each, which are the same.  I propose an anecdotal approach to understanding the role of movies in creating this false cultural distinction.

The fourth and last post touches upon some of the legal issues presented by drones and robots.  The discussions generally track the artificial division noted above, but at base they present the same issues of law: how does domestic law address robots which injure and kill; how does international law address the robots we call drones, which injure and kill; and what, if any, global prohibitions should we attempt to impose to prevent autonomy of action on the part of machines?

You should note that all domestic and international lawyers considering these matters (at least, all I have read) believe that current laws are wholly out of tune with 21st century issues such as these.  I suggest that few observers are framing the matter in terms of a common definition: what is the “law” of machines that have functional autonomy?

These posts are not final or even detailed analyses.  Hopefully they will serve to foster discussion, not attract mere critique.  We have a common problem here of the most fascinating kind: science, human nature and the law stand yet again at radically different evolutionary places.  As the science seems unstoppable, human beings and their legal systems had better start thinking about these matters right now.

Drone Court

In the politically sensitive comic strip Prickly City a few days ago, a small drone is seen chasing a coyote across a vaguely desert-like terrain.  The coyote complains in effect “I know I don’t have identification papers but I’m a coyote.  I COME from here.”  The drone unthinkingly continues its pursuit.

The March 7 strip finds a conversation about the propriety of such use of drones.  The protagonist objects that there is no due process or rule of law in sending drones after people and demands “protections to make sure you don’t just drone people because you don’t like them.”  The response is that indeed such protections exist: “Drone Court.”

The morning papers of the same date carry news of Senator Rand Paul filibustering Obama’s designee as CIA chief, John Brennan, until the administration commits to never using drones to kill noncombatant Americans.

Press and television coverage has for many months been saturated with stories of the use of drones in the war against terror, although these drones seem to be of the non-autonomous variety; their deployment and functions seem to be controlled by human beings although at remote locations.

The current (March, 2013) issue of National Geographic carries a surreal article, replete with creepy pictures of creepy drones in the form of moths and hummingbirds, entitled “The Drones Come Home.”  Noting that Obama signed a law last year that requires the FAA to open US airspace to drones by September 30, 2015, the article traces the discreet but growing use of seemingly unarmed but spying drones by certain state, county and federal (CIA) governmental agencies.

The Boston Globe of Sunday, March 3, Section K (“Ideas” is the name of that section), leads with the following headline: “ROBOTS ON TRIAL—As machines get smarter – and sometimes cause harm – we’re going to need a legal system that can handle them.”  In one of the few articles I have seen that appropriately ignores the false distinction between robots and drones, we learn a lot about the ubiquitous nature of the present dialog about machine liability:  Harvard Law has a course on “robot rights” (leave it to Harvard to frame everything in terms of inherent rights), many universities host conferences on robotic issues, and numerous books are being written (look for Gabriel Hallevy’s upcoming “When Robots Kill,” and the more austerely titled book by philosophy professor Samir Chopra entitled “A Legal Theory for Autonomous Artificial Agents”).

My purpose here is to highlight what I consider to be the underappreciated dialog being conducted about THE central issue here: what system of laws ought to be applied to machines when outside human control.  Some of the popular dialog focuses on “robots” and some on “drones” but such a distinction interferes with a proper analysis: we have machines here that can kill or cause harm accidentally or on purpose.  Do you take the machine to Drone Court, as suggested by Prickly City today, or do you take the manufacturer, or the last human to set the machine on its course, out to the tool shed and tan its corporate or personal hide?

The next post, to follow in the next few days, will detour into what I maintain is the diversion caused by our cultural anthropomorphization of the machines we call “robots” and its possible ramification in the way in which we end up treating autonomous  non-human-controlled airplanes, cars, border guards, household servants and electronic girlfriends—all of which should be treated exactly the same because they are all just alloys, motors and computer chips.

Let the Sun Shine In

Many of us are vaguely aware of the so-called Sunshine Act, part of the ObamaCare legislation.  Briefly put, this Federal statute requires manufacturers of medical devices and pharmaceuticals to make public disclosure of many payments to licensed physicians and to academic research hospitals (but not other hospitals).

The statute is enormously complex; it is one of those statutes that, regrettably, likely requires manufacturers, physicians and research hospitals to seek the advice of counsel.  Final rules were promulgated by the Centers for Medicare and Medicaid Services (“CMS”) earlier this month.  In very broad outline:

  • Covered Manufacturers must start collecting data as of August 1st of this year.
  • By March 31, 2014, and for every full year thereafter, a covered manufacturer must provide information about payments to CMS.
  • Annually CMS will publish a report naming those people who paid monies to physicians and academic hospitals in connection with any “Covered Product,” for the whole world to see.

There are granular and specific rules concerning Group Purchasing Organizations (“GPOs”) and physician owned distributors (“PODs”).  And with respect to all else, the devil is in the details.

Covered Products include any drug, device, biologic or medical supply for which payment is “available” under Medicare, Medicaid or CHIP (the Children’s Health Insurance Program).  Covered Manufacturers do not include distributors or wholesalers who don’t hold title to Covered Products, or manufacturers whose products are for internal use only (including use by the manufacturer with its own patients).

There must be reporting of most payments or other transfers of value, and value is measured as “on the open market.”  Payments may be made directly by a manufacturer, or through a third party instructed or controlled by a manufacturer; but manufacturers need report only those things of which they have actual or constructive knowledge.

There are some rather bizarre exceptions for providing things of value worth less than $10; picture a medical conference where a manufacturer can provide a tuna fish sandwich but not a lobster roll.  There are also exceptions for certain but not all educational materials, certain but not all product samples (or coupons for product samples) if utilized by patients, and for discounts or rebates on products purchased by doctors.  There are also rules concerning how manufacturers can avoid reporting while sponsoring  accredited or certified continuing medical education programs under the auspices of five specific professional organizations (such as the American Medical Association).
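To make the de minimis idea concrete, here is a deliberately oversimplified illustration; the $10 threshold and the sample exception categories come from the summary above, but the function, its name and its structure are my own assumptions and emphatically not the regulation’s actual test (nor legal advice).

```python
# Deliberately oversimplified sketch of the de minimis exception described above.
# The $10 figure comes from the summary; everything else (the function, the category
# names, the structure) is an assumption for illustration only.

DE_MINIMIS_THRESHOLD = 10.00  # transfers of value under $10 are generally excepted

EXCEPTED_CATEGORIES = {
    "qualifying educational materials",
    "qualifying product samples",
    "qualifying discounts and rebates",
}

def presumptively_reportable(open_market_value: float, category: str) -> bool:
    """Rough first-pass screen: does this transfer of value look reportable?"""
    if category in EXCEPTED_CATEGORIES:
        return False
    return open_market_value >= DE_MINIMIS_THRESHOLD

# The tuna fish sandwich versus the lobster roll:
print(presumptively_reportable(8.00, "conference meal"))   # False -- under $10
print(presumptively_reportable(25.00, "conference meal"))  # True  -- reportable
```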

The statutory purpose is to bring into daylight the benefits that manufacturers may provide to physicians in order to induce physicians to utilize those manufacturers’ products.  This is a reporting statute; nothing is banned.  It is designed to reach all sorts of benefits provided to doctors, whether they are payments, funding of projects, grants of stock or options or other evidence of ownership, lavish travel or meals, whatever.

As you might have imagined, both manufacturers and physicians are going to want to take a careful look at the information that will be published about them.  The statute provides a forty-five-day period during which manufacturers, physicians and academic hospitals may review the text of a proposed publication, followed by a fifteen-day period to object.  If an objection cannot be resolved, the information will be published anyway (with a footnote saying that it is disputed).

Certain, but not all, provisions of state laws now on the books are pre-empted by this statute; other provisions are not.  Any given situation in any given state must be analyzed on its own merits.  In addition to the survival of certain, but not all, state regulation of similar import, the anti-kickback laws continue to apply in parallel, as do other Federal laws such as the Stark Act.

Lots of information and forms can be obtained at www.cms.gov/regulations-and-guidance/legislation/national-physician-payment-transparency-program/index.html. 

Covered Manufacturers and potentially covered manufacturers need to start now to put into place data gathering procedures and technologies so as to be ready to comply starting this August.  Good luck to all.

The Craft Beer Business

CEO and Harpoon Brewery founder Rich Doyle shared his marketing plans at the February 14th breakfast at the Association for Corporate Growth/Boston.  Plans include:

  • Aggressively promoting his new Boston beer garden and restaurant both for the general public and for private groups, with a focus on trying to identify Harpoon Brewery with the Boston experience.  He expects 200,000 visitors at the Boston brewery in the next year; the brewery in Vermont, at the old Catamount facility, also draws about 10,000 visitors a month.
  • Use of social media; the company has hired a new director of digital marketing and his task is to replace an email approach with a Twitter/Facebook approach.
  • Beer Fests: started around 1990, at a low point in company history when Harpoon was down to five employees, the annual Beer Fest now attracts about 18,000 visitors.
  • Local involvement in charitable events and clean-up drives, with respect to which the company reaches out to its consumer base for volunteer participants.
  • Publicity within the consumer base, which is organized by, and communicated with by reference to, zip codes.

Harpoon is the only Boston-based and Boston-manufactured craft beer.  Craft beers make up about 6% of the United States market, and Harpoon claims to be the eighth largest among these smaller brewers.  With sales of about $50,000,000 last year, Harpoon distributes through 26 states, as far west as Texas; its growth plans do not include significant changes in either its product line or its geographic distribution, as it sees ample growth opportunities within its existing markets and product lines.

One interesting fact for beer drinkers: while bottles for craft beer presently are and will remain far more significant than aluminum cans, Doyle anticipates growth in canned distribution because cans are more mobile, and furthermore are more popular in the South than in the North.

Doyle is a reformed New York investment banker who got the idea to start a brewery by writing a brewery business plan as part of his MBA requirement.  Throughout his presentation, he took long drinks – from a water bottle.  But then again, it was a breakfast meeting.

Tall Pygmies and CEO Succession

“Never measure the tallest of the pygmies.”

This advice comes from George Davis, Boston Managing Partner for the national search firm Egon Zehnder, commenting on the appropriate way to search for a CEO following a corporate merger.  Since CEOs should be selected based upon whether they fulfill future requirements for skill set and experience, it is possible that the existing CEOs and other C-level executives of both constituent companies in a merger are not “tall enough” to meet that standard; consideration should be given to looking to an outside player.  One must avoid the “brokered deal” where, for example, one senior position is given to the CEO of one constituent and another C-level position to the CEO of the other.

Davis spoke at the February 12th meeting of the National Association of Corporate Directors/New England, which meeting focused on the role of directors in CEO succession.  Other major takeaways:

  • It is appropriate for the board to ask of a CEO: “what do you want from your board of directors that you are not getting?”  (Pamela Godwin, President of Change Partners of Philadelphia and board member of Unum Group [NYSE]).
  • Boards should be proactive in driving CEO succession planning, and one device to consider is an educational/training session for the board, during a retreat, where lawyers or HR professionals discuss the fiduciary role of directors in planning succession.
  • The job description for a CEO should be updated annually to reflect strategic changes in a company’s business which may alter the criteria for the best choice of CEO (William Messenger, Director at ArQule, Inc.).
  • Since changing times may require changing strategy, to be implemented by a new CEO from outside the organization, it is essential prior to the arrival of that new CEO for the board to make sure that the executive ranks are educated as to these new challenges and the appropriateness of the changes that a new CEO may bring (Ellen Zane, CEO Emeritus of Tufts Medical Center).

Over the last five years, the average tenure of a CEO (based on a survey of many public and private companies) has shrunk from 7.3 years to 4.4 years.  Thus, focus on CEO succession is becoming more important.  Half the members of boards surveyed believe that their succession planning is inadequate; only 33% have a well-documented succession planning process.

The panel also agreed on the need to train internal people as possible CEO successors.  Although internal executives may not make it to CEO, a plan to rotate them through different functions and (if they are board members themselves) different committees should be presented to them  as building their own professional skills, and not as a step in a “horse race” to the top.  The panel also agreed that CEO selection is the task of the board, but that filling slots below the CEO level is the role of the CEO, with the board asking appropriate questions to make sure that the task is being handled properly.

Corporate Minutes as Trojan Horse

If you are involved with corporate governance, if you sit on any board including a non-profit, if you are a fan of keeping complete minutes of meetings as an archival record of what was discussed and decided– I suggest you read my current article about how to properly record meeting minutes.  It ain’t so easy as you might think….  The article appears on page 14 of the current issue of Mass Lawyers Weekly and shortly will be linked to my bio on the firm website but if you are interested in an advance copy please just send me an email.

The key to good corporate minutes, by the way, is brevity: describe the process of the discussion without recording its substance.

Update on Foreign Corrupt Practices Act

You might want to click through to my recent article on the FCPA, which is rapidly becoming a huge problem for companies doing business overseas.  It is possible for a US company to get into deep financial trouble by reason of the actions of its sales reps and overseas partners; suggestions as to risk containment are included in the text.  Enjoy.