GAI Round-Up

Regular GAI coverage from the usual press sources has increased markedly, as the press has identified the popular appetite for the foibles of GAI and for its likely short-term and long-term social and educational impacts.

I was interested to see that universities and secondary schools are relenting on their bans of GAI in academic endeavors.  Such was inevitable, as the power of AI to suggest ideas and to help organize is irresistible.  The use of this tool, like the use of other electronic information sources, can be educational and mind-widening, and the focus has to move toward how best to use, and not use, the tool: something like "trust but verify."

Less encouraging has been the US government's general apathy about regulating the industry itself, and particularly about regulating the data sets included in the vast data pools which feed GAI output.  Europe remains far ahead of us.  The US tendency to be slow in regulating new science, lest regulation quash the development of that science for the greater good, is being repeated here; and the Executive branch and Congress are so deeply immersed in the problems and politics of the day that the topic of AI does not often rise to the level of public consideration.  (Not a lot of political discussion in the upcoming election addresses GAI from either the artistic or the economic standpoint.  How many candidates have you heard address this?  The warning siren of societal crises in employment and world peace having been sounded, it seems that those aspects of the dialog have been replaced by newer crises du jour.)

As a member of the board of the New England Poetry Club and the author of five volumes of poetry, I was also struck by a couple of articles assuring us that GAI will never create great poetry because GAI output is always derivative and lacks the perception and depth of the human soul.  While I would love to believe that conclusion, there are some problems: readers sometimes fall in love with the flow of words, and it is hard to feel the soul of the poet in many instances; "copying/adapting" publicly available poetry in AI-generated works is in a way copying the "soul" of the poets who originally created the poetic database, and some of that emotional intelligence gets blended into the AI recap of prior works; and, alas, many current poets (self included) may write passable poetry but not great poetry, and some of us may well lack sufficient soul to infuse our sometimes mundane poetic lines.

I will henceforth try to concentrate on legal issues relating to GAI, as those developments tend not to be properly covered in the popular press.  However, as you know, I am personally fascinated by the wider issues of societal impact, and thus you will occasionally find me straying from my legal focus; when that happens, you have my leave to stop reading if you do not share my verve for the "big issues."


SEC Heightened Regulation of Advisers to PE, Hedge Funds

Back from a European holiday and need to catch you up on a few things.

Let me start with new regulations of SEC-registered advisers to private equity and hedge funds.  In a now-familiar pattern for increased regulation (approval by a 3-2 vote, with all Democratic Commissioners in favor and both minority Republican Commissioners opposed), the Commission has materially tightened the rules governing the practices of such advisers:

Advisers must now render quarterly reports to investors disclosing performance and costs, obtain an annual audit, and obtain a fairness or valuation opinion in connection with any opportunities to sell positions through adviser-arranged transactions.

The SEC will now permit advisers to charge investors for costs incurred in most government investigations or other compliance matters only if disclosure is made to investors.

The usual practice of affording large or early investors better investment terms must now also be fully disclosed, and if preferential redemption rights are to be granted, the SEC will have a chance to prohibit such rights based on its judgment that the preference creates a material adverse effect on other investors.

I confess not yet to have read the new regulations, relying for the above upon review of several lawyerly sources.  If any reader wants to do a deep dive, I can link you to the 660-page SEC pronouncement.  Chairman Gensler has ruffled many feathers with his activist agenda, and not just concerning crypto and NFTs; this regulatory foray into well-functioning, sophisticated markets with sophisticated parties is another example of his regulatory inclinations.

Logic suggests the new reporting regimes will impose adviser costs which will be passed on to the investing public as increased fees.  The cost-benefit analysis for the investors is never easy to gauge…

GAI Goes to the Movies

The new movie Oppenheimer spends three hours tracking the rise and fall of the father of the A-bomb; no doubt you have heard of this opus, which is so good that it even comes close to the ratings of Barbie.

What is most interesting, however, is that I am seeing articles and columns comparing Dr. Oppenheimer's reaction to his invention with the reactions of GAI's inventors to theirs.  GAI is feared by some as the instrument of death to humankind under several theories, one of which is the "Skynet" scenario (from the Terminator movies; if you have been living under a rock for two decades, let me inform you that the premise is that computers become self-aware and try to kill all humans as enemies).

Are the inventors of GAI the unwitting heirs to Oppenheimer's observation, "Now I am become Death, the destroyer of worlds"?  My conversations with a few scientists assure me that this is simply not possible (the other night I was told that "AI is only math," and in any event the bomb was intended by humans to kill humans, which is different).

There is no way to gauge the probability of the theoretical construct of computers killing us all, either because they become like humans or, more likely, because extermination could be a logical deduction for a system programmed to prevent, let us say, global warming or warfare among humans.  Or because your AI program just hallucinates.  It seems to me that stating that an error can never happen, as opposed to being unlikely, is unprovable.

So you folks can chew on that a bit while I take a few weeks' vacation, during which I may well not post anything, let alone something about AI.  Unless while in France I get a different set of insights or reactions from my French acquaintances.

"Je suis devenu la mort, le destructeur des mondes"?

 

Old-Time Anti-Trust Enforcement

This week the Department of Justice and the Federal Trade Commission released draft guidelines relating to permissible mergers, that is, mergers that they will not challenge and attempt to prevent.  The draft guidelines are substantially different from, and materially less favorable than, the guidelines they propose to replace, and they harken back to policies from the '60s and '70s.  Indeed, the guidelines make specific citation to older court cases, likely because more recent cases are more permissive in not challenging mergers.

What is going on?  One could conclude that the guidelines are an attack on size per se, but that is not sufficiently nuanced an observation.  Specific policy considerations are apparent in some of the extremely complex details:

First, you can no longer sell a merger on the basis that it will reduce costs to consumers, previously a presumed touchstone.  Now the impact on workers, and on their ability to obtain employment, must be considered.

Second, given the trend in acquisitions of trying to tie up the supply of critical components, there will be an examination to make sure that a vertical acquisition will not so tie up critical supplies as to harm businesses with which the acquirer competes.

Third, the general trigger for scrutiny of an acquisition, previously articulated as probably creating an adverse effect, is now keyed to the mere risk of a negative effect (a lesser standard).

Fourth, if a company is doing a series of acquisitions which in the aggregate have anti-competitive effect, the ability to go back and examine the whole chain of acquisitions is expressly reserved.

Fifth, the safe harbors of the old guidelines are eliminated; there is no statement of what will NOT be deemed an undesirable acquisition.

The draft includes a long list of core concepts, an analysis of which is beyond the scope of this post.  If you are thinking about acquiring a supplier or a competitor, in many cases the first thing you will need to acquire is a lawyer.

Note that this is a draft, open for public comment into September.  If adopted, these are guidelines for actions to be taken by the government; they do not constitute law.  Courts over the last decades have been more permissive about acquisitions, and while they may be influenced by guidelines they are free not to follow them.  But whatever is adopted foretells a longer path to government approval in many circumstances, itself a dynamic likely to somewhat limit the urge to merge.

Stephen King, Tom Stoppard and GAI

All the above are famous writers.  The first two have copyright protection.

Writers can copyright their works.  But what protection against copying does a human writer have when a text is generated by GAI on instruction from that human?

The US Copyright Office grants protection only to work with significant human involvement; we call that human the "author."  But each work by GAI is created upon command or request by a human.  Is that work protectable by copyright?  After all, GAI does not awaken late at night, turn itself on, and write a ghost story or create an image without a human telling it to and giving it some sort of guidance; the human is involved.  How much guidance need the human give to GAI before the human can be said to be an "author" or an artist?

There is a pending legal case on appeal from the refusal by the Copyright Office to grant a copyright to a person who created a digital piece using AI.  The artist, who won an art award with the piece, claims his work used AI as a tool of his artistic expression.  Where is the line?  How much must the person “do” to be the author or artist, and thus to be protected from “infringement”?

The dividing line is a matter of judgment, and experts are drafting proposed guidance to be applied by the Copyright Office; the last guidance, issued in March, suggested that AI-generated work can be protected if a human selects or arranges AI materials with sufficient creativity that "the resulting work as a whole constitutes an original work of authorship."  Any reader want to try to apply that standard in a consistent, legally useful manner to a diverse series of art works, poems or stories?

A photographer takes a picture with a Leica film camera, a machine, and prints it through a commercial printer without alteration.  It can be subject to copyright.  A photographer takes a picture with my new Apple iPhone 14 (or the rumored and inevitable 15, expected in a couple of months) and prints it right off the phone electronics with no alteration.  It can be subject to copyright.  An artist commands AI to make an image of a horse walking on water, says no more, and prints the AI result with no alteration.  Can you get a copyright of that horse walking on water, in the name of the artist?  Tomorrow, a person in Paris, not having seen the prior work, asks AI to make an image of a horse walking on water and says no more, and AI again produces the same, or virtually the same, image.  Is the Paris picture an illegal infringement?

Disclaimer: An experienced intellectual property attorney to whom I showed a draft of this post noted that existing law has dealt with these kinds of issues many times and that there are many pre-AI legal precedents to apply.  He also did not think much of a copyright claim for my horse walking on water; perhaps my exact version would have narrow coverage, but likely not even that, let alone protection against a subsequent AI version.  I do not question him, although I suspect that with the growth of GAI there is a growing push to expand copyright protection and alter what in simpler times might be somewhat settled law.  (This post speculates on issues that will be raised in the future; as with all posts, and per my blog site ground rules, this post is not legal advice and should not be relied upon as such.)

[Disclosure: the contents of this post are based upon reportage contained in, and concepts suggested by, the article entitled "How to Think About AI," Harvard Law Bulletin, Summer 2023.  In case you are wondering, that article is protected by copyright.]

AI Issues in Laws Affecting Health Care

AI has a significant and growing role in healthcare.  AI can apply massive data to improve diagnosis, treatment program design, drug development and robotic surgery.  The medical literature doubles every seven years, promising massive amounts of updated data to be processed for analysis by GAI.

Patient concerns: Studies reveal that patients presently are distrustful of advice not delivered by a physician.  Pathologists and radiologists see AI as the death knell of their professions.  Surgical robotics is viewed as lacking empathy (Health IT Analytics, 3-2-2022).  Handling individual medical data within an expanded AI system runs the risk of loss of data privacy.

Who is liable for bad results? If there is an AI diagnosis that is wrong, who is liable?  If a robot removes the wrong organ, who is liable?  How do you “see” the key medical decision points, and who/what makes them?  When will the health care system become sufficiently adept at handling AI so that the benefits outweigh the risks?  Since GAI hallucinates (reaches wrong answers within itself), does that mean the programmer is liable?  The supplier of the AI?  The doctor who failed to pick up an allegedly obvious anomaly? If there is an error, how do you know if it is the AI’s inherent lack of data, its misreading of the data, or a malfunction of the device that reads and delivers AI’s answer?

Patient waiver issues: Will medical ethics require a physician, clinic or hospital to give affirmative disclosure that AI was utilized in health care delivery?  In what detail?  Will courts enforce a patient release signed after such a disclosure, a release which is to some degree dependent on the fullness of the risk data given to the releasing patient?  Since AI may deliver a statistically correct result, and since it is not likely to be 100% effective in every case even if the AI and the computer work perfectly, must there be a disclosure chart showing percentages of negative results?  Can a physician or hospital obtain a waiver of warranty, selling medical advice based on AI "as is"?

The curious reader might look at the article entitled "Artificial Intelligence is Altering the Face of Medicine," Practical Lawyer, August 2023.

SEC Approach to AI

The obvious first concern for the SEC, as GAI advances, is making sure that businesses do not lie about either their use of AI or how they protect themselves from being harmed by third-party GAI.  In fact, the SEC seems interested in far more.

The SEC today released the text of remarks by Chair Gary Gensler.  Aside from being an advocate for strong SEC regulation, Gensler was a professor at MIT, and his speech to the National Press Club ranges from Isaac Newton and the bubonic plague to the following litany of regulatory concerns:

  • Advisers and brokers will rely on GAI to make investment recommendations.  Will the AI they use be set up to maximize return to the user (the broker, let us say) or to maximize return to, and protection of, the investor?
  • GAI builds its conclusions from masses of data gathered from multiple sources, including huge amounts of data about every individual; who owns that data, and thus the intelligence culled from its manipulation?  [Query: why is this within the SEC's mandate?]
  • "Bad actors" can use GAI to manipulate capital markets, to spook the public, and to influence elections.  [Some of that seems the bailiwick of the Federal Election Commission.]
  • "For the SEC, the challenge here is to promote competitive, efficient markets in the face of what could be dominant base layers at the center of the capital markets."  This follows the query about who owns the data culled to enhance AI operation.  Is he asking whether we will have honest markets if everyone uses the same GAI based on the same base layer of information?  [This is an SEC issue?]
  • "AI may heighten financial fragility" if a "herd of individual actors" make similar decisions "because they are getting the same signal from a base model or data aggregator," which in turn could "play a central role in the after-action reports of a future financial crisis."

Seems to me that the SEC had better start hoping that people as smart as Isaac Newton start applying for jobs, rather than the young people coming out of law school looking for an entry-level foothold in the legal or business community.  And can you imagine the reaction of the minority Republican Commissioners, already deeply troubled by Gensler's broad view of the SEC's regulatory purview?

SEC Focus on FDA-Regulated Companies

The SEC has a Task Force focused on companies subject to FDA regulation and clearances.  It should be remembered that while the SEC regulates, and monitors disclosure by, public companies, much of what it does also covers private enterprises which raise capital, including early-stage med-tech and bio companies.  The focus of SEC activity is to prevent misstatements and overstatements made to new investors or (for public companies) into the public marketplace.

Many investigations arise by reason of inaccuracies in materials utilized by emerging companies in private fund-raising, whether for stock offerings or for other "securities" such as SAFEs and convertible notes.  These misstatements can arise in connection with oral presentations, offering "decks," private placement memoranda or other written material, including articles or studies prepared by third parties that repeat misstatements or exaggerations made by the companies themselves.  Companies offering securities of any sort need to centralize and review disclosures of all types by all personnel, including scientists and researchers who may not be attuned to the impact and scope of the laws concerning the sale of securities.

Typical misstatements often relate to the science itself, the status and success of trials, the existence and scope of the customer base, failure to clarify that certain "sales" are really beta tests, and, particularly, ambiguous or misleading statements concerning company status vis-a-vis FDA guidance or approval.

While one might expect that most complaints come from ultimately dissatisfied investors and arise many months (or years) later, overly enthused senior management should note that very often the SEC is "tipped" by inside whistleblowers: scientific employees, finely attuned to the granular accuracy of the data, who are uncomfortable with statements about company offerings that seem excessive.

Having company disclosure substantively reviewed by outside counsel prior to its use is an excellent way, absent actual intent to defraud, to protect against a criminal action based on material misstatement; the SEC may well take civil action and also require corrective disclosure or a rescission offer to investors, but such is far more palatable than a criminal charge.

Finally, and this relates to public companies, the SEC focuses on trading upon material non-public information, whether through formal trading programs (so-called 10b5-1 plans) or otherwise, and on trading in shares of companies other than one's own based on insights obtained (for example) by reason of inter-company collaboration or trade group disclosures.

With so much funding going into med-tech companies of all sizes, it is not surprising that the SEC has a specific Task Force in this vertical, and companies should be aware of this particular SEC focus, which does not impose different rules for med-tech but does create heightened scrutiny.

GAI Users Speak Up

Current press coverage is full of articles and columns extolling the advantages of AI in business.  These articles form a counterpoint to the scare scenarios which grabbed the early headlines.  The gist is that, at least in its current form, GAI can speed reasonably simple but time-consuming tasks.  Articles have appeared very recently in the NYTimes, the Boston Globe and the current issue of the Boston Business Journal.

The BBJ coverage is revealing, making an excellent case for the controlled utilization of ChatGPT on the part of founders of early-stage companies.  The highlighted entrepreneurs, all in their mid-thirties, claim spectacular time savings in such tasks as: analyzing and comparing big-company franchise agreements against other franchise agreements to identify differences for clients; designing sales materials for digital platforms; building presentations; and using ChatGPT as a practice customer to rehearse interactions.

Surely these are modest undertakings by modest enterprises, and surely large enterprises are or will be more intense users at higher levels of output.  It is unclear whether the fear of GAI "hallucinations" (inventions of facts) is of concern in applications such as those suggested here.  It seems that today's users are relying on GAI either to compare extant texts or to phrase and package given data (e.g., inputting clear instructions and asking GAI to phrase, package, and prepare for posting the facts and ideas supplied).
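For readers curious what that "phrase and package the supplied facts" pattern looks like in practice, here is a minimal sketch using OpenAI's published Python client; the model choice, the sample facts and the instructions are my own illustrative assumptions, not anything drawn from the articles:

```python
# Minimal sketch of the "package the facts you supply" prompting pattern.
# Assumes the official openai Python package (v1 or later) is installed and
# an API key is present in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical facts supplied by the user; the model is told to use only these.
facts = """
- Product: franchise-agreement comparison dashboard (hypothetical)
- Launch: fourth quarter
- Pricing: $99 per month, with an annual discount
"""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice, not a recommendation
    messages=[
        {
            "role": "system",
            "content": (
                "You are a marketing copywriter. Use ONLY the facts provided; "
                "do not invent features, numbers, or claims."
            ),
        },
        {
            "role": "user",
            "content": f"Draft a three-sentence product announcement from these facts:\n{facts}",
        },
    ],
)

print(response.choices[0].message.content)
```

Note the instruction to use only the supplied facts: constraining the model to given data is precisely how the users described above try to sidestep, though never fully eliminate, the hallucination risk.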

Then there was the article I read over the weekend, not sure where, about a person who had ChatGPT write a get-well-soon poem for someone who was ailing.  Seems it was so literate and uplifting that the writer has since used ChatGPT for many personal messages.  Perhaps the folks who write Hallmark cards should begin to worry…

And as to that last thought, given the nature of modern poetry, where fact and fiction free-associate without necessarily making a linear narrative, it sounds like ChatGPT may be the next Walt Whitman.  As someone who has published five books of poetry and sits on the board of the New England Poetry Club, I have, indeed, begun to worry that one day we will give an annual poetry award (we grant several) to a circuit board.

Generative AI Marches On

Notwithstanding the call for slowing down GAI until controls can be applied, businesses are charging ahead, according to today’s NYTimes.  (I hate to say it, but per my immediately prior post, was the writer correct that the drive for profit trumps all other considerations?)

Major companies (Oracle, Salesforce, AT&T, Amazon) are providing GAI business products that: help engineers produce new code; create sales material and product descriptions for marketing; answer employee questions; summarize meeting notes and lengthy documents.

Further, Gartner reports that over half of business customer-users have no internal policy on GAI usage, which is unsettling (though I note the very small sample size).

Meanwhile, nothing is heard from Congress, and precious little beyond passing mention from the White House.  In an earlier post, I noted that the tendency in the US, when it comes to new tech, is to let it advance unencumbered for some time, then step back and evaluate risk and the need for regulation based on what has actually happened "on the ground."  This reflects the desire to foster the entrepreneurial spirit and to keep the US ahead of the world in new technologies.  While I have no direct line to either the Congress or President Biden (who falls into the vast category of people who have never bought me a beer), it seems that our government is going to let the GAI beast run wild for a bit longer, notwithstanding the various warnings voiced by the gamekeepers.
