Racial Justice in the Supreme Court this Term

Two pending cases in the United States Supreme Court, to be decided by the end of this June, involve issues of racial justice.

One case challenges the affirmative action plan for admissions at the University of Texas.  Many lawyers thought that a five-to-four Supreme Court decision several years ago, upholding the University of Michigan admissions plan, had resolved all these kinds of cases.  Apparently not.

The Michigan plan, approved by the Court, held that universities could use race as one factor, although not the determinative one, in establishing standards for admission.  The justification was that the university had a legitimate interest in creating a diversified student body, believing that such diversity created a better educational experience.

The University of Texas had in effect a “top 10% rule,” which automatically admitted to its Austin campus the top 10% of graduates of all Texas public high schools.  Since many Texas high schools were de facto racially segregated, this resulted in substantial student body diversity.  Texas now intends to graft onto its 10% rule a provision, governing the recruitment of the balance of its entering class, that uses the Michigan system (race being one but not the only factor).

The case may even be thrown out; the plaintiff, a white student claiming she was discriminated against, has already graduated from another college, and the only remedy she is seeking is the return of her application fee (a fee which the University of Texas never refunds in any case, whether an applicant is admitted or denied, for any reason whatsoever).

The second case, Shelby County v. (United States Attorney General) Holder, involves Section 5 of the 1965 Voting Rights Act, which identifies certain states as having a bad racial history.  Any change in the voting laws of those listed states cannot be given effect unless the United States Department of Justice approves.  Congress has extended the Act five times but has never once amended the list of states subject to this mandatory review.  The plaintiffs argue with some cogency: is it really possible that in the forty-plus years since the enactment of the original Voting Rights law, there has been no change whatsoever in any of the states subject to this pre-clearance procedure?

Does not a failure to make specific findings on a state-by-state basis, identifying anew those states which still require such scrutiny, prove that Congress is simply renewing the statute without a rational federal interest which would justify intrusion into state election laws?

Gay Marriage and the Supreme Court

Between now and the end of its Term in late June, the Supreme Court will decide two cases involving gay marriage.

One case is a challenge to California’s Proposition 8 and raises the question whether the United States Constitution prohibits a state from limiting marriage to couples of the opposite sex.  The decision might strike down prohibitions on gay marriage in all fifty states; or only in the eight states (including California) that have granted gay couples all the legal rights of marriage except the “title”; or only in California (which had at one time permitted gay marriage and now proposes rescinding that right).  There is also a possibility that the case will not be decided at all, on procedural grounds.

The second case involves the much-publicized claim by a surviving gay spouse that the Defense of Marriage Act (DOMA) is unconstitutional.  This federal statute, signed by President Clinton, recognizes as a matter of federal law only marriages between couples of the opposite sex, even for couples who live in states that permit same-sex marriage.  This affects the entitlement of gay couples under approximately eleven hundred federal programs.

The Obama administration is not defending DOMA.  Five members of the House of Representatives have hired a law firm to mount that defense.  This presents a substantial question of standing: could all of Congress, or one House of Congress, or a mere five members of one House, undertake the defense of a federal statute where the government itself, through the attorney general, has declined to defend it?

Both cases raise the following issue: what is the legal standard to be applied to a law that is discriminatory on its face?  How closely do you have to look at the governmental justification for that law?  Cases involving discrimination based upon sexuality generally draw the lowest standard; to support such a law, government must only prove that there is some rational basis for the discrimination.  Tougher standards are imposed on governments where, for example, laws make distinctions based upon race.

Transforming Large Corporations (about Hewlett-Packard)

How do you turn around a large company?  According to Mohamad Ali, Chief Strategy Officer at Hewlett-Packard, by first attacking costs and then strategically reinvesting the enhanced cash flow in profitable product initiatives.  You also divest your extraneous parts, but you do that over a long period of time.

Ali, who has spent many years in strategic planning at IBM, told a breakfast meeting of the Boston Chapter of the Association for Corporate Growth some war stories about the crises faced by IBM in the ‘90s and into the 2000s.

Ali has worked at several companies doing the same thing: guiding them through corporate transformations.  It is not surprising that all these companies are in some phase of computer technology; aside from being Ali’s substantive sweet spot, computer-related companies have faced a constant need for transformation, as technological and consumer changes in this space are frequent and abrupt.

Ali has a “transformation playbook” with some simple sequential steps.  The first always is: “take cost out of the system.”  That gives you cash, and if you don’t have cash, you have no flexibility for products, sales or M&A.  You also have to take a look at the governance structure to make sure that it matches the new configuration of your businesses.

This transformation playbook apparently is in use today at HP.  Having gone through four CEOs in four years, and having purchased Palm and other companies without measurable affirmative effect, HP did attack cost and has been working on products in markets that it historically missed, including tablets.

We are early in the HP “transformation” and Ali is of course optimistic as to the ultimate results, speaking glowingly of product innovations.  It remains to be seen whether HP’s “transformation playbook” will produce any silver lining.

The Begelman Case

Begelman is not an iconic name.  Indeed, he is a Florida retiree.  But he is illustrative of the fine-tooth comb that now runs through the hair of our trading experience in public shares.

The story is short and easy to understand: at a group gathering in 2011 he learned confidential information about a merger, traded on it and made money.  He had no relationship with the company involved or the proposed transaction.  Then he got caught.

What is interesting are the numbers; his ill-gotten gain was less than $15,000 on a 25,000-share trade, well under a dollar per share.  What that tells us is this: FINRA and the SEC will focus on small, very small infractions of insider trading.  Nothing is too small a nit to avoid being caught in the regulatory comb.  In an age of huge numbers, huge deals and huge fines, no one should assume that the little violations will go undiscovered.

You might want to look at the SEC release from April 22, and see if you think Begelman deserved the haircut he got.  See http://www.sec.gov/news/press/2013/2013-66.htm.

Stones in the (TD) Garden

It seems that the Rolling Stones will roll into Boston Garden this June.  Balcony seats are far north of $400 each; floor seats are officially priced at $600, but if you go online you will see single seats (not even in a suite) priced up to $5,900.  But don’t blame TD Garden; they are the venue, but the tour promoters fix the price.

At the April 11th breakfast meeting of the Association for Corporate Growth/Boston, TD Garden President Amy Latimer shared some of the background of TD Garden as a business.

Ticket prices for all events, whether the Bruins or the Celtics or the circus or the twenty-five or so concerts that play the Garden each year, are fixed by, and paid over to, the teams or promoters.  TD Garden makes its money out of renting the Garden and out of its ownership of the food services.  Aside from employees having to work very long and odd hours (most events run into the night and are typically staffed by much of the organization), TD Garden sounds much like any other large business: its departments include HR, legal, sales and marketing.  Indeed, Latimer advises that TD Garden is very much like any business, but with a couple of exceptions:

  • “We do not control our product,” which is to say they receive an unpredictable mix of concerts and cannot guarantee the quality of the athletic contests; and
  • Much of their schedule is filled at the last minute, which requires flexibility in every part of the organization, but particularly marketing and ticket sales.

Changes for the future at TD Garden?

  • They are moving toward “variable pricing,” which is to say that tickets will be priced not only based on location but also on the quality of the team being played, the day of the week, etc. (a toy sketch of such a pricing rule follows this list).
  • Printed tickets will be a thing of the past; admission will be by identity card with a bar code, and if you provide a code to a client who is your guest, you will also be able to “load” value onto the card so that you can treat your guest to food and merchandise without being present.
  • After a delay of almost twenty years, TD Garden claims to be moving forward on the development of the open parking lot in front of the Garden, planning office space, a hotel, perhaps a grocery, and a new front door to enter the Garden (the currently used side doors, shared with commuters, were originally conceptualized as serving only the train station).
  • TD Garden is working like crazy to install not only cell phone service but also WiFi capability.  Among other things, that capability will drive a variety of apps including replays, food ordering for delivery to your seats, and merchandise ordering.
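To make the “variable pricing” idea concrete, here is a minimal sketch of how such a rule might work; the seat categories, multipliers and numbers are entirely invented for illustration and are not TD Garden’s actual model:

```python
# Toy illustration of variable ticket pricing -- invented numbers,
# not TD Garden's actual pricing model.

BASE_PRICE = {"balcony": 40.0, "loge": 90.0, "floor": 150.0}
OPPONENT_MULTIPLIER = {"marquee": 1.5, "average": 1.0, "weak": 0.8}
DAY_MULTIPLIER = {"weekend": 1.25, "weekday": 1.0}

def ticket_price(section: str, opponent: str, day: str) -> float:
    """Price a seat from location, opponent quality and day of week."""
    return (BASE_PRICE[section]
            * OPPONENT_MULTIPLIER[opponent]
            * DAY_MULTIPLIER[day])

# A floor seat against a marquee opponent on a Saturday:
print(ticket_price("floor", "marquee", "weekend"))  # 281.25
```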

Camels: Three Vignettes

ONE: It is 1959 and I am newly settled into the dormitory at Columbia College in New York City.  I am sixteen and very excited.  I go downstairs, cross Broadway to the smoke shop and ask for a pack of cigarettes.  I am almost stymied when asked which brand; who thought of that?  I blurt out “Camels,” no doubt a triumph of cumulative advertising.  I go up to my desk and carefully light one.  The bits of tobacco stick on my tongue.  The paper wrapper gets wet with my saliva as I cannot keep my mouth dry.  The ash ascends to my eyes and I cannot read my assignment.  I throw out the pack of Camels and next day buy a pipe.

TWO: It is about 2000.  I am fifty-eight and very excited.  I am standing on the main street (only street) of Timbuktu watching a nomad in flowing blue gown sitting tall atop a huge yellow camel.  The combined shadow must be twenty yards long; it crosses the road and makes a right-angle turn up the side of a mud building.  The wind at his back blows puffs of sand between the camel’s legs and adds to the growing mini-piles of desert clustering in the corners of the buildings, attempting to erase the street itself.  The rider looks down with what I interpret as scorn, his dark eyes glowing out between the bright blue neck scarf and the bright blue turban.  The rider kicks the camel’s flanks and the animal slowly moves past me.  There is a strong animal odor.  I do not know if it is the camel or the rider.

THREE: It is today.  I am seventy and very excited.  I see a blurb on the front page of the Wall Street Journal and turn to page A9 for an article that I hope will interest me.  The government of Mali had given the French President a camel in February as thanks for assistance in fighting the Islamist rebels.  It seems that Hollande left his camel in the care of a Timbuktu family, which promptly killed and ate it.  The replacement camel will be shipped directly to France, where I suspect traditional French cuisine will eschew its meat and allow the animal to live peacefully somewhere in Paris.  I want to go to Paris and see this camel.  I’d walk a mile for a camel….

Board Role in Strategy

According to a recent McKinsey survey of 1,600 corporate directors, only 20% said they fully understood the strategy of the company on whose board they sat.  Perhaps that is not surprising, as only 10% said they had a full understanding of the industry in which that company competed.

What techniques will permit boards to engage with strategy, rather than spending their time on compliance?  This question was explored by a panel at the April 9th breakfast meeting of the New England Chapter of the National Association of Corporate Directors.

The panel (moderated by Willow Shire, independent director at TJX) included Jeffrey Naylor (Senior Executive VP at TJX), serial entrepreneur and angel investor Jean Hammond, and McKinsey director Jack Welch; they suggested:

  • Educate yourself about your industry and company; this requires a CEO who will “let you in” sufficiently to get what you need.
  • Regularly schedule board time to discuss broader trends; many boards feel restricted to passing upon strategic implementation rather than being able to discuss overall strategy.
  • Push compliance issues down to committees to free up time for strategic discussion.
  • Discuss long term strategies at retreats where there is less pressure.
  • Reduce slides in management presentations; avoid “death by PowerPoint” and replace it with discussion.
  • Define “strategic” as matters likely to have at least a 10% positive or negative impact on “corporate value.”
  • Identify strategic subsets so as to create focus; examples include technology, competition, social trends, demographics.
  • At executive session after a meeting wherein strategy was discussed, ask: how did that meeting go?

There was general consensus that strategy is first proposed by management.  Boards heavy on CEO membership must be careful not to forget that.  But asking thoughtful questions can drive the strategic discussion.

And finally, remember that the literature suggests that diverse boards make better decisions.

Flash–SEC discovers 21st Century

Yesterday the SEC issued guidance to public companies on the use of social media (Facebook, Twitter, etc.) to announce corporate news.  The same ground rules that apply to announcing news on a company website will be applied.

Regulation FD requires that material corporate news be announced in such a way as to reasonably inform EVERYONE at the same time; traditionally, companies used press releases and filings on SEC Form 8-K.  When companies started announcing news on websites, the SEC advised that such prompt dissemination would be acceptable provided the public had notice that such an avenue of information would be utilized.

In a nutshell, the same rule now controls the use of social media: if you tell the public you will be announcing via social media, then the burden falls on the public to monitor those outlets, and the company will not violate Regulation FD.  This is one more step in the SEC’s march towards making investors responsible for being alert; when the use of websites as a news source became prevalent, there were complaints that this left non-computer-users in the dark, complaints that were ignored.  While it is likely that older investors may not spend much time with social media, it looks like the SEC similarly will not pay attention to their complaints.

Companies will need to impose discipline on the informal modalities of social media to make sure that the company (and its executives) apply SEC-type analysis to their tweets and other social media posts; this may prove a cultural challenge for some.

One caveat: the social media outlets should be those of the company and not of the executives.

The State of Venture Capital

Three leading Boston-based venture capitalists told an audience at the monthly Association for Corporate Growth breakfast yesterday that even though the number of venture capital funds is shrinking, there is still too much money out in the marketplace.  Additionally, significant alternative sources of financing, notably angel groups, have become increasingly available to the entrepreneur.

Representatives from Boston Millennia Partners, Atlas Ventures and Polaris Venture Partners seemed to agree that making money in venture capital these days is a lot harder than it used to be.

First, average deal holding periods appear to be about six years these days, up from four years in the not-too-distant past; longer holding periods reduce the internal rate of return, and the illiquidity implicit in a longer holding period obviously impacts the limited partners negatively.
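The arithmetic behind that point is straightforward; as a back-of-the-envelope sketch (the 3x multiple below is my own illustration, not a figure from the panel):

```python
# Back-of-the-envelope: the same cash-on-cash multiple annualizes to a
# much lower rate of return when the holding period stretches out.
# The 3x multiple is illustrative, not a figure from the panel.

def annualized_return(multiple: float, years: float) -> float:
    """Convert a cash-on-cash multiple into an annualized rate."""
    return multiple ** (1.0 / years) - 1.0

for years in (4, 6):
    rate = annualized_return(3.0, years)
    print(f"3x over {years} years is about {rate:.1%} per year")
# 3x over 4 years is about 31.6% per year
# 3x over 6 years is about 20.1% per year
```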

2012 saw the birth of approximately 180 new funds, a third of which were first-time fund offerings.  While institutions might be gun-shy of newer funds and might seek out “the brand name,” statistics indicate that smaller funds had an easier time producing superior (3x) returns.  Further, in balancing portfolios, investors need a certain percentage of their money in “alternative” investments, which include venture capital; as the overall market rises, the absolute number of dollars that must be placed into these alternative investments rises also.

Perhaps the most interesting comment involved “strategy creep.”  Funds that are successful at the smaller deal size tend to capitalize on that success by raising larger funds.  As a practical matter, with a larger fund there is a greater tendency to make larger investments.  But making larger investments in different kinds of companies requires a different expertise.  Thus, there may be a tendency for successful smaller funds to end up, in their subsequent larger funds, conducting a business in which they have no experience.

Where does VC deal flow come from?  Professional referrals, but also networking, particularly with investors who made profits with a fund manager in the past.  This is particularly true for early stage investments.  For later stage investments in more mature companies, VCs are approached by investment banks but also undertake cold-call outreach.

The panel also criticized the entrepreneur’s emphasis on valuation in a particular round; “the current round is never the round that matters.”  And the real question is whether a VC is truly a value-added partner.

How is value added?  Through team building and through introductions for strategic relationships.  Whether a VC is a true value-added participant perhaps can be determined by an entrepreneur speaking with the CEOs of other companies in the portfolio.

What do exits look like?  Certainly the IPO is not a reliable exit; there was some suggestion that it never really was, if you consider not just whether the IPO took place but also whether the offering price was in fact sustained.  For a sale to a private equity firm, a company typically needs substantial cash flow.  The least risky exit is a strategic sale.  Entrepreneurs are advised to nurture relationships with larger companies within their space so that they are known to potential acquirers when the time comes.

Finally, the “perennial” question was asked:  How important is it to you, in running the fund, to enjoy capital gains treatment on your carried interest?  As with every other panel I have heard on this subject, the fund managing partners on the panel shrugged, looked at each other and said, in effect, that it did not matter to them at all.

A Legal System for Machines

In prior posts, we have established that there is substantial current dialogue about the role of certain intelligent machines; that these machines generally are viewed as falling into the categories of “drone” or “robot”; and that, as society perceives the day when drones and robots will have true “autonomy,” pressure is mounting to establish a legal system which will address misfortunes occasioned by the actions of these machines.  It is important to stop thinking about robots in human terms and to recognize them on the same footing as we recognize drones; there is no difference between an airplane making its own decision to shoot you and a robot (which looks and sounds just like a human being) making its own decision to shoot you.

Human Rights

One significant dialogue is driven by a sensitivity to human rights.  Philosophers and the American Civil Liberties Union focus on these issues.  Machines are impinging upon our privacy and perhaps our freedom by tracking us down, spying on us, impairing our freedom both expressly and implicitly by making us know that every moment we are being watched.

These concerns primarily focus on those kinds of machines we call drones.  It is recognized that most drones currently are not autonomous.  Not only are they put forth to function in the world by human beings, but some measure of human control remains.

In “drone-speak,” we say that these machines are under either “human-in-the-loop” or “human-on-the-loop” control.  In the first category, humans not only select the person or situation to which the machine is paying attention, but also give the command to act (whether to intercept, injure, kill or spy).  In the second category, the machine itself both selects the target and makes the decision to undertake the action, but a human operator can override it.

The rubber hits the road when we are in a “human-out-of-the-loop” situation; in this ultimate machine autonomy, robots select targets and undertake actions without any human input.  All the human decisions lie in the history of the truly autonomous robotic device: how to build it, what it looks like, what its programming is, what its capacities are, and when and where it is put out into the world.  Once the door is opened and the “human-out-of-the-loop” robotic device starts moving among us, there is no more direct human control.

The civil liberties critics note that military drones deployed outside of the United States in fact hurt our national security and our moral standing, but they also observe a linkage between those military drones abroad and the impact of drones within the United States.

The Department of Defense “passes down its old stuff to its little siblings,” which means that DOD gifts military equipment to domestic law enforcement agencies without charge.  A primary recipient is the much-criticized Department of Homeland Security.  Indeed, public records suggest that Department of Homeland Security drones are ready to be equipped with weapons, although the Department claims that currently all their drones are unarmed.  (Source: column by Kade Crockford in the online “The Guardian,” as guest blogger for Glenn Greenwald, March 5, 2013).

The surveillance accompanying the domestic use of drones presents problems under the Fourth Amendment to the Constitution, which holds people secure from unreasonable search or seizure.  Obviously, you do not issue a drone a warrant to spy upon all of us.  Beyond the Fourth Amendment argument, to the extent there is a right of privacy under United States law, drones negatively impact that privacy.  The public has a right to know what rules our governments are bound by in the utilization of machines to spy upon us and, ultimately (with autonomous machines), to police us.

We are tracked by license plates, by cell phones, by iris scans.  (We are told that these are no different from a fingerprint, although a fingerprint or DNA swab is obtained after there has been an alleged criminal act, while machine surveillance is by definition “pre-act”; see particularly the Spielberg movie “Minority Report,” in which the government arrests people in advance of the crimes they will ultimately commit.)

Twenty states, including Massachusetts, are considering legislation to limit the use of domestic drones.  Certain cities also are taking action.  The focus is privacy and freedom from unreasonable search, on constitutional grounds.  As it is clear that some drones shortly will be (if they are not already) autonomous and will function as machines with “human-out-of-the-loop” capacity, the world must evolve towards imposing functional controls on the use of drones.

Killer Robots

The most comprehensive and cogent articulation of the legal issues presented by autonomous machines is contained in a report by the International Human Rights Clinic, part of the Human Rights Program at Harvard Law School.  This November 2012 “Report” is entitled “Losing Humanity: The Case Against Killer Robots.”

The central theme of the Report is that military and robotics experts expect that fully autonomous machines could be developed within the next twenty to thirty years.  As the level of human supervision over these machines decreases, what laws should be enacted to protect people from actions committed by these machines?  Although the focus of the Report is primarily military (not just drones; robotic border guards, for example, are given attention), it is important to remember that a drone is a robot is a machine; the law should develop the same way whether we choose to package the problem machine as something that looks like a small airplane or something that looks like you and me.

Where does the Report come out?  It concludes that no amount of programming, artificial intelligence or any other possible control of a fully autonomous machine can mimic human thought sufficiently to give us the kinds of controls that the “dictates of public conscience” provide to human operators.  All governments should ban the development and production of fully autonomous “weapons”; technology moving toward autonomy of machines should be reviewed at the earliest possible stage to make sure there is no slippage; and roboticists and robotics manufacturers should establish a professional code of conduct consistent with ensuring that legal and ethical concerns are met.

The focus of the Report is primarily military.  I suggest that similar kinds of thinking and constraints have to be applied to what we commonly call “robots,” and particularly to the human-like robots we will tend to surround ourselves with, because a truly autonomous machine is just that: a machine that can make mistakes unmediated by human controls.

International Law re Weapons

There is international law concerning the utilization of all weapons.  Article 36 of Additional Protocol I to the Geneva Conventions places upon a country developing new weaponry an obligation to determine whether its employment, in some or all circumstances, would violate international law.  In commentary, it is noted that autonomous machines by definition take human beings out of the loop, and we run the risk of being mastered by the technology we have deployed (remember in The Terminator when Skynet became “self-aware”).

Particularly addressing the proper line of thought, which is to view airplane-like drones and humanized robots in the same way, the Report states (at 23): “reviews [of nascent technology] should also be sensitive to the fact that some robotic technology, while not inherently harmful, has the potential one day to be weaponized.  As soon as such robots are weaponized, states should initiate their regular, rigorous review process.”

There is much discussion that autonomous robots will be unable to distinguish between the civilian population and combatants, and will risk harming civilians in violation of the Geneva Conventions.  Particular sensitivity to this risk is raised by the so-called Martens Clause, which is actually over a century old and derived from prior international conventions; it charges governments with complying not only with international law but also with “established custom, from the principles of humanity and from the dictates of public conscience.”

How would a machine comply with the Geneva Conventions?  It would have to be programmed to recognize international humanitarian law as subtly articulated in various sources, including the Geneva Conventions and the “principles of humanity and . . . dictates of public conscience.”  It would have to determine whether a particular action is prohibited.  It would then have to determine whether such action, if permissible by law, is also permissible under its operational orders (its mission).  It would have to determine whether, in a military setting, a given action met the standard of “proportionality” of response.  It would need to use an algorithm that combines statistical data with “incoming perceptual information” to evaluate a proposed strike on utilitarian grounds.  A machine could act only if it found that the action satisfied all ethical constraints, minimized collateral damage and was necessary from the mission standpoint.
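As a thought experiment only (the Report contains no code, and every name and threshold below is invented for illustration), the decision gate just described might be sketched like this:

```python
# Hypothetical sketch of the decision gate described above -- not an
# implementation from the Report. Each field stands in for a judgment
# the Report argues machines cannot reliably make; the names and the
# risk ceiling are invented.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    lawful: bool              # permitted under humanitarian law?
    within_orders: bool       # inside the machine's operational orders?
    proportional: bool        # a proportionate response?
    collateral_risk: float    # estimated harm to civilians, 0.0-1.0
    mission_necessity: float  # necessity to the mission, 0.0-1.0

def may_act(a: ProposedAction, risk_ceiling: float = 0.1) -> bool:
    """Permit the action only if every legal and ethical gate passes;
    any single failed check vetoes it."""
    return (a.lawful
            and a.within_orders
            and a.proportional
            and a.collateral_risk <= risk_ceiling
            and a.mission_necessity > a.collateral_risk)

# A strike that is lawful, ordered and proportional, but too risky
# for civilians, is refused:
print(may_act(ProposedAction(True, True, True, 0.3, 0.9)))  # False
```

The point of the sketch is precisely the Report’s objection: each boolean hides a judgment (“lawful,” “proportional”) that humans make with context and conscience, and that no enumeration of rules fully captures.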

It is argued that machines might be able to apply these standards better than human beings, because human beings can be carried away by emotion while machines cannot.  This is a balancing exercise.  There is discussion as to whether artificial intelligence can provide machines with sufficient cognitive and judgmental powers to approximate human capacities in this area.  The Report concludes that this is impossible.  The Report concludes that we are much safer having a human being either “in the loop” or “on the loop”; in the words of roboticist Noel Sharkey: “humans understand one another in a way that machines cannot.  Cues can be very subtle, and there are an infinite number of circumstances. . . .”  Remember the computer that ran out of control in the movie WarGames and almost set off global thermonuclear war?  That was a pretty smart and autonomous machine.  Remember the doomsday machine that set off thermonuclear war in Dr. Strangelove?  How smart can our machines be?  How much risk can we absorb?

The Report also notes that fully autonomous machines would “be perfect tools of repression for autocrats seeking to seize or regain power.”  Not a friendly thought.  The Report concludes that we should not develop or permit machines which are autonomous weapons.

Peaceful Robots

The same thinking carries over to those machines we call “robots,” which one day will be moving among us, bearing substantially human form and preprogrammed “personalities.”  Let us say a robot programmed to provide medical aid is accidentally impeded by a human being, and judges that its mission to provide immediate medical assistance justifies eliminating the intervening human being.  Let us assume that a robot (or a drone) sees two children carrying realistic toy guns and running toward a sensitive location, chased by a mother calling out “Harry, Joe, please stop, you know I don’t like seeing you play with guns.”  What if the machine misreads that situation in a way that a human being would not?

I submit that a legal system must be imposed with respect to all autonomous machines.

Private Law

Our discussion until now has addressed what I will call public law: what ought international law and constitutional law to do with respect to the control or prohibition of dangerous autonomous machines?  What about private law, or the financial liability that should be ascribed in the courts when a machine runs amok?

We currently have private tort liability laws.  These laws generally provide that a manufacturer is held strictly liable for any damage by a machine that it produces and that is inherently dangerous.  An injured party need not prove negligence of any sort; one just proves the dangerous machine was manufactured by the company.  Such a legal rule creates a wide variety of problems with autonomous machines.

First, strict liability doesn’t make much sense when we are talking about machines that are, by definition, wholly autonomous.  Furthermore, no manufacturer would produce any autonomous machine if this were the rule of law.

Next, who is the manufacturer?  Is it the person who does the nuts and bolts?  Is it the person who does the programming?  Is it the person, the ultimate user, who defines the parameters of functionality (mission) of this combination of nuts, bolts and programs?  Or, is it not logical to hold responsible the last human being, or the employer of the last human being, who turns on the switch that permits an autonomous machine to move out into the world?

Alternately, if there is a problem with a machine, should we actually look to see whether there is a design flaw in the manufacturing or in the programming?  This is different from affixing absolute liability, without such inquiry, on the theory that it is an inherently dangerous device.  How many resources would it take to answer that question for a sophisticated autonomous device?

What do you do with the machine itself? Destroy it?  Destroy all similar machines?

What do you do about the potential monetary liability of governments?  For example, our federal government is immune from being sued on a tort theory for any accident that is occasioned during the exercise of governmental powers.  Would this rule not automatically take the federal government and all its agencies off the hook if it sends out into the world a machine that kills or creates damage as part of the discharge of its governmental functions?

Again, the Report concludes that you simply must not develop autonomous weapons, and that you must prohibit governments and manufacturers from doing so.  If that be the rule, I am suggesting that we understand that there is virtually no step between a drone/weapon and a human-appearing robot with capacity to do significant harm.

Finally, to the extent we do in fact end up with autonomous machines flying over our heads, or standing next to us at the bar and ordering an IPA (even though the beer will drain into a metallic waste-disposal stomach), what should we do about the private ordering of the law?  I believe that the United States government should establish an insurance program, funded by manufacturers, programmers, designers, and all users of autonomous and semi-autonomous devices, to provide “no fault” coverage as an efficient method of dealing with the liabilities that all of us in society are willingly creating, as our science moves forward without regard to the antiquity of our thinking with respect to both public law and private law.