New Copyright Case: AI-Generated Art Cannot Be Protected

I have previously posted about the issue of whether US copyright protection is available for art or text created by use of artificial intelligence.  On August 18, the United States District Court for the District of Columbia denied copyright protection for a piece of visual art where the owner of a computer system claimed copyright and listed the computer system as the author.  The applicant argued that the computer’s copyright should transfer to him as owner of the machine.

Copyright was denied as the Court held that human authorship is essential for a valid copyright claim.

In this case the claimant invented his own programs designed to create art “akin to the output of a human artist.”  The piece in question was modestly entitled “A Recent Entrance to Paradise.”

It should be noted that as soon as a work of art or writing is created it has an automatic copyright, and the author can prevent others from using it; registration confirms that the work has had the benefit of copyright protection all along and allows damages to be obtained for unauthorized use.  If registration is formally denied, that means the work had no copyright protection from the moment it was created.

There is a bit of procedural law at work here also.  When you go to court to question an administrative decision of a governmental agency, the court does not start from ground zero to consider the merits.  Rather, the court must recognize the expertise and mission of the governmental agency, and thus the “only question properly presented, then, is whether the Register [of Copyrights] acted arbitrarily or capriciously” or violated proper procedure in the process of reaching its decision.

The Court here reviewed the history of copyright and concluded that the denial was consistent with past practice, rejecting an analogy to protectable photographs created by machine because in photographs there is substantial human input (the photographer’s “mental conception” of the result, the posing, the setting of the subject within the frame, the use of light and shadow).

The opinion is well written and has some rich historical interest, discussing the treatment of copyright in the Constitution and quoting James Madison.  The curious can access this case, Stephen Thaler v. Shira Perlmutter, Register of Copyrights, in the US District Court for the District of Columbia, Civil Action No. 22-1564 (BAH).

AI Regulation by the States?

In the face of the US Federal government’s failure to actually regulate AI, the states are beginning to act on their own.

We have previously posted about the cautious pace of US Federal regulation of AI to limit risk and fraud, and speculated that such a hands-off approach to new technology is an American characteristic: let new tech flourish and do not interfere, as in the long run it will drive US primacy in the business and scientific worlds.  We have also noted that the EU has been far more aggressive in putting current regulation in place.

Noting the failure of the Feds to take action, the National Conference of State Legislatures recently published a report discussing what AI regulation should look like in financial services, healthcare and education.  In fact, fourteen states have already enacted some legislation relating to control of AI, and another fifteen have introduced legislation, much of it focused on setting up research committees to work on legislative initiatives.

Among the areas of focus in the report:

Preventing use of AI in hiring, due to fear of bias in the data relied upon;

Monitoring algorithms utilized in making decisions in regulated areas including education, housing and banking;

Protecting intellectual property rights, which can be trampled when AI sucks up information that is proprietary and incorporates it into machine analyses;

Establishing a common vocabulary for states to use in enacting legislation, so there can be understanding and coordination.

This last suggestion of a common vocabulary, which is facially logical, discloses the fundamental weakness of having separate laws in a multiplicity of jurisdictions: AI crosses borders continually; companies developing and using AI do not operate in, or impact, just one state; and for lawyers and businesspersons, trying to comply with multiple laws is expensive and unwise.  What is needed is national legislation that preempts the field, replaces state laws and creates a uniform national standard.

Now the Federal Congress can let a hundred flowers bloom at the state level, evaluate the impact and wisdom of these various schemes, and then adopt uniform laws that supersede inconsistent state regulation, having used state law temporarily as a laboratory for evaluating what is wise and what is not.  That leaves chaos at the beginning, and resistance from states reluctant to abandon what their own legislatures have enacted and thus feel is simply too good to lose.

But between a propensity to move slowly in Federal tech regulation and the complex problems presented by regulating AI, we may have no choice but to end up with a Balkanized AI regulatory landscape.

Posted in AI

A Personal Note: A New Publication

Excuse this brief personal announcement: I have just published my seventh book, and my first novel.  It is an adventure novel based on the release of a weaponized bio agent and it can be ordered through Amazon (best found by searching my full name, Stephen M Honig).

If you order it and read it, do let me know what you think.  Thanks.

Corporate Directors Need to be GAI-Aware

This post is a recommendation to corporate directors and C-level executives to take a look at the magazine Directorship, which is available to members of the National Association of Corporate Directors.  [Alert: I am a member of the New England Chapter of NACD and work on our educational programming.]  For those without access, see below for an outline of issues that should be on your agenda.

The recently released issue designated “Quarter Three 2023” has a cover story, “Artificial Intelligence–What Boards Need to Know,” that ought to be required reading for corporate directors.  Setting aside the thought that anyone who sits on a board has a fiduciary duty to be knowledgeable about major technological changes (and that NACD is a great way of fulfilling that duty), and setting aside the fact that public companies are responsible to public investors (as enforced by the SEC) for fully disclosing the uses, risks and impact of AI within a reporting company, simple logic tells all directors that they need to watch several areas carefully, at a minimum:

The use of AI by your company and how it is controlled, to avoid errors in management, planning, marketing, procurement and labor matters;

The impact on the labor force, so that your workers are attuned to the use and misuse of AI, not to mention the impact on headcount and HR planning and the implicit hiring bias that can arise through use of certain AI-supported tools;

The legal liability and attendant cost of errors occurring by reason of misuse of AI, or of reliance without verification on actions based on AI research (the most powerful AI tools, GAI, are prone to invent answers on their own, called “hallucinations”);

Evaluation of AI through a cybersecurity lens.

Boards need either a member attuned to AI use and risks or an extrinsic source to guide them.  Underneath the hype and the scare scenarios, this is an informational and operational tool that will in fact drive progress and profit in successful companies, and the proper use of this tool within a company is going to be part of basic blocking and tackling for boards of the future.

I recommend either a board committee to deal with GAI issues or a clear statement to an extant board committee charging it with responsibility and asking for periodic reports to the board on issues perceived; this could go to a Risk Committee or a Technology Committee, for example.  I would not recommend sending this to Audit, as that committee is typically over-burdened already and the qualifications for serving on F&A are not necessarily congruent with being GAI-qualified.

GAI Round-Up

There has been a great increase in regular GAI coverage from the usual press sources, as the press has identified the popular appetite for some of the foibles of GAI and also for some of its likely short-term and long-term social and educational impacts.

I was interested to see that universities and secondary schools are relenting on their bans of the use of GAI in academic endeavors.  Such was inevitable, as the power of AI to suggest ideas and to help organize is irresistible.  The use of this tool, as well as the use of other electronic information sources, can be educational and mind-widening, and the focus has to move towards how best to use, and not use, the tool: something like trust but verify.

Less sanguine has been the US government’s general apathy about regulating the industry itself, and particularly about regulating the data sets included in the vast data pools which feed GAI output.  Europe remains far ahead of us.  The US tendency to be slow in regulating new science, lest regulation quash the development of that science for the greater good, is being repeated here; and the Executive branch and Congress are so deeply immersed in the problems and politics of the day that the topic of AI does not often rise to the level of public consideration.  (Not a lot of political discussion in the upcoming election addresses GAI from either the artistic or economic standpoint.  How many candidates have you heard address this?  The warning siren of societal crises in employment and world peace having been sounded, it seems that those aspects of the dialog have been replaced by newer crises du jour.)

As a member of the board of the New England Poetry Club and the author of five volumes of poetry, I was also struck by a couple of articles assuring us that GAI will never create great poetry because GAI output is always derivative and lacks the perception and depth of the human soul.  While I would love to believe that conclusion, there are some problems: readers sometimes fall in love with the flow of words, and it is hard to feel the soul of the poet in many instances; “copying/adapting” publicly available poetry in AI-generated works is in a way copying the “soul” of the poets who originally created the poetic database, and some of that emotional intelligence gets blended into the AI recap of prior works; and, alas, many current poets (self included) may write passable poetry but not great poetry, and some of us may well lack sufficient soul to infuse our sometimes mundane poetic lines.

I will henceforth try to concentrate on legal issues relating to GAI, as those developments tend not to be properly covered in the popular press.  However and as you know, I am personally fascinated by the wider issues of societal impact and thus you will occasionally find me straying from my legal focus; when that happens, you have my leave to stop reading if you do not share my verve for the “big issues.”

Posted in AI

SEC Heightened Regulation of Advisers to PE, Hedge Funds

Back from a European holiday and need to catch you up on a few things.

Let me start with new regulations for SEC-registered advisers to private equity and hedge funds.  In a now-familiar pattern for increased regulation (approval by a 3-2 vote, with all Democratic Commissioners in favor and both minority Republican Commissioners opposed), the Commission has materially tightened the practices of such advisers:

Advisers must now render quarterly reports to investors disclosing performance and costs, undergo an annual audit, and obtain a fairness or valuation opinion in connection with any opportunities to sell positions through adviser-arranged transactions.

The SEC will now permit advisers to charge investors for costs incurred in most government investigations or other compliance matters only if disclosure is made to investors.

The usual practice of affording large or early investors better investment terms must now also be fully disclosed, and if preferential redemption rights are to be granted, the SEC will have a chance to prohibit such rights based on its judgment that the preference creates a material adverse effect on other investors.

I confess not yet to have read the new regulations, relying for the above upon review of several lawyerly sources.  If any reader wants to do a deep dive, I can link you to the 660-page SEC pronouncement.  Chairman Gensler has ruffled many feathers with his activist agenda, and not just concerning crypto and NFTs; this regulatory foray into well-functioning, sophisticated markets with sophisticated parties is another example of his regulatory inclinations.

Logic suggests the new reporting regimes will impose adviser costs that will increase fees passed on to the investing public.  The cost-benefit analysis for the investors is never easy to gauge….

GAI Goes to the Movies

The new movie Oppenheimer spends three hours tracking the rise and fall of the father of the A-bomb; no doubt you have heard of this opus, which is so good that it even comes close to the ratings of Barbie.

What is most interesting, however, is that I am seeing articles and columns comparing Dr. Oppenheimer’s reaction to his invention with the reactions of GAI’s inventors to theirs.  GAI is feared by some as the instrument of death to humankind under several theories, one of which is the “Skynet” scenario (from the Terminator movies; if you have been living under a rock for decades, let me inform you that the premise is that computers become self-aware and try to kill all humans as enemies).

Are the inventors of GAI the unwitting heirs to Oppenheimer’s observation, “Now I am become Death, the destroyer of worlds”?  My conversations with a few scientists assure me that this is simply not possible (the other night I was told “AI is only math,” and in any event the bomb was intended by humans to kill humans, which is different).

There is no way to assign a probability to the theoretical construct of computers killing us all, whether because they become like humans or, more likely, because extermination could be a logical deduction for a system programmed to prevent, let us say, global warming or warfare among humans.  Or because your AI program just hallucinates.  It seems to me that stating that an error can never happen, as opposed to its being unlikely, is unprovable.

So you folks can chew on that a bit while I take a few weeks’ vacation, during which I may well not post anything, let alone something about AI.  Unless while in France I get a different set of insights or reactions from my French acquaintances.

“Je suis devenu la mort, le destructeur des mondes”?

Old-Time Antitrust Enforcement

This week the Department of Justice and the Federal Trade Commission released draft guidelines relating to permissible mergers, that is, mergers that they will not challenge and attempt to prevent.  The guidelines are substantially different from, and materially less favorable than, the guidelines they propose to replace, and they hark back to policies from the ’60s and ’70s.  Indeed, the guidelines make specific citation to older court cases, likely because more recent cases are more permissive in not challenging mergers.

What is going on?  One could conclude that the guidelines are an attack on size per se, but that is not sufficiently nuanced an observation.  Specific policy considerations are apparent in some of the extremely complex details:

First, you can no longer sell a merger on the basis that it will reduce costs to consumers, previously a presumed touchstone.  Now the impact on workers, and their ability to obtain employment, must be considered.

Second, given the trend in acquisitions to try to tie up the supply of critical components, there will be an examination to make sure that a vertical acquisition will not so tie up critical supplies as to harm businesses with which the acquirer competes.

Third, the general trigger for focus on an acquisition, previously articulated as its probably creating an adverse effect, is now keyed to the risk of a negative effect (a lesser standard).

Fourth, if a company is doing a series of acquisitions which in the aggregate have anti-competitive effect, the ability to go back and examine the whole chain of acquisitions is expressly reserved.

Fifth, the safe harbors of the old guidelines are eliminated; there is no statement of what will NOT be deemed an undesirable acquisition.

The draft also includes a long list of core concepts, an analysis of which is beyond the scope of this post.  If you are thinking about acquiring a supplier or a competitor, in many cases the first thing you will need to acquire is a lawyer.

Note that this is a draft, open for public comment into September.  If adopted, these are guidelines for actions to be taken by the government, but they do not constitute law; courts over the last decades have been more permissive about acquisitions, and while they may be influenced by guidelines they are free not to follow them.  But whatever is adopted foretells a longer period navigating government approval in many circumstances, itself likely a dynamic that will somewhat limit the urge to merge.

Stephen King, Tom Stoppard and GAI

All the above are famous writers.  The first two have copyright protection.

Writers can copyright their works.  What protection does a human writer have who claims protection against copying when a text is generated by GAI on instruction from that human?

The US Copyright Office grants protection only to work with significant human involvement; we call that human the “author.”  But each work by GAI is created upon command or request by a human.  Is that work protectable by copyright?  After all, GAI does not awaken late at night, turn itself on, and write a ghost story or create an image without a human telling it to and giving it some sort of guidance; the human is involved.  How much guidance need the human give to GAI before the human can be said to be an “author” or an artist?

There is a pending legal case on appeal from the refusal by the Copyright Office to grant a copyright to a person who created a digital piece using AI.  The artist, who won an art award with the piece, claims his work used AI as a tool of his artistic expression.  Where is the line?  How much must the person “do” to be the author or artist, and thus to be protected from “infringement”?

The dividing line is a matter of judgment, and experts are drafting proposed guidance to be applied by the Copyright Office; the latest guidance, issued in March, suggested that AI-generated work can be protected if a human selects or arranges AI materials with sufficient creativity that “the resulting work as a whole constitutes an original work of authorship.”  Any reader want to try to apply that standard in a consistent, legally useful manner to a diverse series of art works, poems or stories?

A photographer takes a picture with a Leica film camera, a machine, and prints it through a commercial printer without alteration.  It can be subject to copyright.  A photographer takes a picture with a new Apple iPhone 14 (or the rumored and inevitable 15, expected in a couple of months) and prints it right off the phone’s electronics with no alteration.  It can be subject to copyright.  An artist commands AI to make an image of a horse walking on water, says no more, and prints the AI result with no alteration.  Can the artist get a copyright on that horse walking on water?  Tomorrow, a person in Paris, not having seen the prior work, asks AI to make an image of a horse walking on water and says no more, and AI again produces the same, or virtually the same, image.  Is the Paris picture an illegal infringement?

Disclaimer: Note that an experienced intellectual property attorney to whom I showed a draft of this post suggested that existing law has dealt with these kinds of issues many times and that there are a lot of pre-AI legal precedents to apply.  He also did not think much of a copyright claim for my horse walking on water; perhaps my exact version would have narrow coverage, but likely not even that, let alone protection against a subsequent AI version.  I do not question him, although I suspect that with the growth of GAI there is a growing push to expand copyright protection and alter what in simpler times might have been somewhat settled law.  (This post speculates on issues that will be raised in the future; as with all posts, and per my blog site ground-rules, this post is not legal advice and should not be relied upon as such.)

[Disclosure: the contents of this post are based upon reportage contained in, and concepts suggested by, the  article entitled “How to Think About AI,” Harvard Law Bulletin, Summer 2023. In case you are wondering, that article is protected by copyright.]

AI Issues in Laws Affecting Health Care

AI has a significant, and growing, role in healthcare.  AI can apply massive data to improve diagnosis, treatment program design, drug development and robotic surgery.  Medical literature doubles every seven years, promising masses of updated data to be processed for analysis by GAI.

Patient concerns: Studies reveal that patients presently are distrustful of advice not delivered by a physician.  Pathologists and radiologists see AI as the death-knell of their professions.  Surgical robotics is viewed as lacking empathy (Health IT Analytics, 3-2-2022).  Handling individual medical data within an expanded AI system runs risks of loss of data privacy.

Who is liable for bad results? If there is an AI diagnosis that is wrong, who is liable?  If a robot removes the wrong organ, who is liable?  How do you “see” the key medical decision points, and who/what makes them?  When will the health care system become sufficiently adept at handling AI so that the benefits outweigh the risks?  Since GAI hallucinates (reaches wrong answers within itself), does that mean the programmer is liable?  The supplier of the AI?  The doctor who failed to pick up an allegedly obvious anomaly? If there is an error, how do you know if it is the AI’s inherent lack of data, its misreading of the data, or a malfunction of the device that reads and delivers AI’s answer?

Patient waiver issues: Will medical ethics require a physician, clinic or hospital to give affirmative disclosure that AI was utilized in health care delivery?  In what detail?  Will courts enforce a patient release signed after such a disclosure, where the release is to some degree dependent on the fullness of the data concerning risk that has been given to the releasing patient?  Since AI may deliver a statistically correct result, and since it is not likely to be 100% effective in every case even if the AI and the computer work perfectly, must there be a disclosure chart showing percentages of negative results?  Can a physician or hospital obtain a waiver of warranty, selling medical advice based on AI “as is”?

The curious reader might look at the article entitled “Artificial Intelligence is Altering the Face of Medicine,” Practical Lawyer, August 2023.