GAI–Hinton is Scared!?!

Who is Hinton?  The godfather of AI, winner of the Turing Award, he left Google earlier this year over the potential “existential threat” that AI presents to the human species.  The single most quoted expert on GAI in the world, period.  Not a guy who has been educated by sci-fi movies.  He is not blindly following Skynet here…

Why is Hinton scared?  According to the ten-page New Yorker interview (November 20, 2023 issue): unlike human brains, which die when the carcass dies, a digital intelligence can be copied onto and run on many other computers simultaneously, and forever. Thousands of neural networks “can learn ten thousand different things at the same time, then share what they’ve learned.”  Because digital learning combines immortality with infinite replicability, “we should be concerned about digital intelligence taking over from biological intelligence.”
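
For readers who want to see the mechanics, here is a toy sketch, in Python, of the kind of sharing Hinton describes.  This is my own illustration, not anything from the interview, and it radically simplifies real systems (which share weights or gradients across thousands of identical replicas, as in federated averaging): two identical copies of a tiny model learn from different experiences, then pool what they learned by averaging their weights.

```python
# Toy sketch (my illustration, not Hinton's): two identical "digital brains"
# learn different things from different data, then share what they learned
# by averaging weights. Biological brains have no copy-and-merge step.
import numpy as np

rng = np.random.default_rng(0)

def train_step(weights, x, y, lr=0.1):
    """One gradient-descent step for a linear model pred = weights . x."""
    pred = weights @ x
    grad = (pred - y) * x          # gradient of the squared error
    return weights - lr * grad

# Two copies start from identical weights -- trivial for software.
w_a = np.zeros(3)
w_b = np.zeros(3)

# Each copy learns from its own stream of experience...
for _ in range(200):
    x = rng.normal(size=3)
    w_a = train_step(w_a, x, x.sum())   # copy A learns target w = [1, 1, 1]
    x = rng.normal(size=3)
    w_b = train_step(w_b, x, 2 * x[0])  # copy B learns target w = [2, 0, 0]

# ...then the copies share: averaging blends what both learned, a crude
# stand-in for the continuous weight-sharing Hinton describes.
w_shared = (w_a + w_b) / 2
print("copy A learned:", w_a.round(2))
print("copy B learned:", w_b.round(2))
print("shared network:", w_shared.round(2))
```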

Hinton thinks machines really do “think” like people.  Like people, computers work by analogy, with a sprinkling of reason on top. In the next decade AI will overcome its current difficulties with “physical intuition,” which animals possess but which is not wholly a matter of intelligence (example: a cat of limited intelligence can still leap across a series of pieces of furniture, and a machine cannot manage that yet). Then AI’s skill set will be complete.

Hinton fears that the human drive for control will lead a dictator to deploy an autonomous lethal weapon.  And worse yet, since human-like machines will themselves crave more control, following neural paths similar to our own, “…how do you prevent them [computers] from ever wanting to take control?  And nobody knows the answer.”

Lots of people with lots of technical skill are out in the marketplace, running the race to get their country or their company to a point of technical superiority.  Along comes “THE” expert in all of this, and he sounds like a screenwriter for James Cameron.

Let’s talk movies:

When John Badham filmed WarGames (incredibly, 40 years ago), the WOPR computer, which had started the ultimate world war just because it could, at the last minute stepped back, started a game of tic-tac-toe, and observed that the only winning move in the thermonuclear war game was “not to play.”  When Cameron did The Terminator, his autonomous weapons system did not stop.  When Kubrick filmed Dr. Strangelove, the Russian autonomous weapons system did not stop.  Hinton is telling us that Badham wrote a fairy tale in WarGames–that real GAI would not stop.

Yes, I “researched” this blog post in New Yorker magazine and in sci-fi movies.  But that is not what the godfather of GAI did–is it?

GAI Faces the Music

No, this is not a post about Sam Altman being ousted by OpenAI (even though at this very moment the final outcome is unclear).  This post is about how another AI company was sued by music publishers seeking to enjoin the use of copyrighted words and music employed to train its LLMs (large language models).

When users asked the program to write a poem in the style of a given writer, it combined lyrics from songs by two different singing groups; the AI company neither asked permission for the use nor gave credit to the artists. Can style plus phrases violate copyright?

These kinds of suits, filed in Federal court, are testing how far copyright protection reaches when LLMs ingest all types of protected intellectual property.  The LLMs plug in parts of protected works, following programming orders to use the information they absorbed from billions of scanned data points. [The AI company involved in this particular litigation had installed protections against certain data uses (no medical or legal advice, no support of illegal activities), but no guardrails regarding the use of music to compose.]

As AI absorbs words, ideas and images from huge databases, using those pieces to create new “works” from parts contained in the “original,” how close must a “copy” be in order to offend legal protections?  Does it matter that there is no specific intent to steal, since the product is created by a programmed device that pools everything? What sort of swimming lanes will courts or the Copyright Office mandate?  Is copying a “style” prohibited?  How should the law deal with “mash-ups,” where fragments from different works are combined yet remain fan-identifiable to viewers, readers or listeners?

When the U.S. Copyright Office asked for comments on what sort of AI guardrails should be imposed, it received 10,000 responses.  I wonder if the government is using artificial intelligence to digest all that data?  I wonder if those artificial intelligence programs will skew the comments in acts of self-protection?

Why Class Actions Fail

The business press is always carrying stories about suits brought against companies because the company made a public misstatement that affected the price of its publicly traded shares; the statement was later proven incorrect; and the stock price then fell, so that purchasers of shares at the higher price suffered economic harm.  Such cases typically are brought as a “class action”: one or a few injured shareholders make a claim for themselves and for all other persons similarly harmed by the same facts.  Some law firms make their living by bringing such shareholder class actions.

When such suits are not settled and are examined by the courts, many are thrown out.  The reason often is that the misstatements were not willful.  Generally speaking, to sustain such a suit it must be proven that a misstatement is untrue, is material, and was uttered with what is known in the law as “scienter,” an intent to defraud.  There are many cases where investors lose money in reliance on a statement about, say, a new product; the statement proves false and loss is incurred, but the plaintiffs do not allege or cannot prove scienter.  A recent Massachusetts case against a company called Desktop Metal, Inc. was dismissed in part because there was no claim that the statements about product quality and FDA compliance were made intentionally in order to defraud.

What if the company was just shooting from the hip by announcing that its new product was innovative and superior?  What if the announcement was believed by the company, but the company failed to investigate the marketplace and thus was negligent in its remarks? Scienter is absent, but the company did not do the right thing, and the investors did lose money through no fault of their own, in reliance on the company’s very own words.

The question of whether negligence, or gross negligence, can replace actual intent (scienter) has been much litigated, and the details are beyond the scope of this blog.  But at the risk of being simplistic, let me tell you the answer (or at least “an” answer): the Federal courts are divided on this issue.  Some insist on actual intent, while others will treat a severe degree of negligence (sometimes labeled, without clear definition, as “recklessness”) as the equivalent of scienter, justifying recovery for the shareholders.

I offer a take-away for all investors: be careful in relying on company pronouncements leading you to believe that share price will shoot up.  Most public companies do not want to mislead, and all fear class actions, but it is possible for the investment community to receive from a company a patently incorrect statement of material fact which, on discovery of the inaccuracy, tanks the share price, without the company having any liability to injured shareholders.

SEC vs GAI–Say What?

In speeches in July and September, activist SEC Chair Gary Gensler staked out a case for SEC regulation of the new breed of artificial intelligence.  One would not expect to find the SEC delving deeply into AI, other than as part of its normal disclosure regime to make sure that all facts stated in the offering and trading of securities are complete and accurate.  After all, so many other Federal agencies are more closely implicated; witness the recent Executive Order previously reviewed in this space.

And indeed the brief for SEC involvement as stated by Gensler seems rather thin.

Since the data underlying AI can reflect bias, the SEC wants to make sure that potential investors will not be denied access to public markets based on bias in broker-dealer policing.

The SEC sees itself as interested in making sure that anyone’s use of AI does not breach the intellectual property of a third party.  Is that not a disclosure issue covered by current regulation?  And a matter for the courts?

There is a fear that only a few AI providers will achieve dominance in providing analytics to investors, causing market distortion, a herd effect, or even a recession.  Is this not an anti-trust issue, and how does it differ from the small number of dominant advisory services today?  And what is the alternative if in fact good AI, in everyone’s hands, creates more intelligent investors? Would the SEC ban AI because it is too good at advising?

The SEC already regulates brokers’ use of analytics to establish systems that benefit the brokers themselves.  Brokers already are required to “know their customer.”  Trading markets should welcome precise analytics. Often accused of being a regulator in search of something to regulate, the SEC seems unhappy at being left at the AI starting gate, as if regulating AI were a race among government agencies.

BlackRock Reports Gender-Diverse Corporations Perform Better

Yesterday BlackRock, the huge institutional investor, reported that the “most gender-balanced companies [over the last 8 or 9 years] outperformed the least diverse companies by 29%, as measured by average return on assets.” Companies with women in leadership and with longer maternity-leave policies lead the pack.

Gender-balanced middle management also contributed to improved performance.

Not only do gender-balanced companies earn more, but female participation also resonates in investment fund management (10.5% better over the last 16 years) and in startup success (twice the return per invested dollar compared with male founders).

Other take-aways: the key is balance, not female preponderance; female representation deteriorates with seniority (glass ceiling).

One logical conclusion from the data may be not that women are necessarily superior to men in managing existing businesses, but that the advantage comes from balanced gender synergy. We may speculate that this result is driven by the application of different viewpoints, life experiences and, perhaps (this is a guess), temperament. (I hasten to add that I do not ascribe any such difference in temperament to biology; the difference may derive from life experiences.)

One is tempted to generalize this study as likely applicable to the discussion in the immediately prior post, concerning corporate efforts to eradicate systemic racism and improve performance.  Such a generalization is appealing but, as far as I know, is not data-supported, and the BlackRock report does not so speculate.

Finally, since we are free to suggest and speculate, it may be that the superior performance of female money managers and corporate founders derives from women being put through a more rigorous vetting before they attract capital (e.g., a reflection of suspicion of female ability), which may subject women to a higher standard.  Or maybe they are simply more skilled after all…

Systemic Racism Backlash: Board Response

In the face of growing backlash against DEI efforts in general, and questions about the value of corporate positioning to root out alleged systemic racism, yesterday the New England Chapter of the National Association of Corporate Directors presented a program on how Boards of Directors can best continue the fight against racial bias.

It should be pointed out that the program was positioned on an assumption that systemic racism is present in corporate America, and that Boards need to sharpen their attack notwithstanding current events that seem to challenge that assumption: a perceived growing resistance to “woke” thinking; a belief that instituting racial preferences in hiring and promotion by definition disadvantages non-minority populations; and the presumed impact of the recent Supreme Court decision striking down race-conscious admissions as practiced by colleges and universities.  As to the last point, everyone agreed that such thinking will in fact invade the corporate board room.

Harvard Professor Robert Livingston set the stage by identifying the perceived problem and the rationales available for deciding to fight racism in the corporate setting: it is morally correct; it is good for business, as diversity improves outcomes; and it is important in hiring and retaining employees. It was noted that while individual polls favoring anti-racism efforts often rest on the moral case, 80% of Fortune 500 companies explain their programs on the business case.  Left unexamined was why large corporations adopt a business rationale; one expects that such an approach insulates against shareholder resistance where maximizing profit is important and some shareholders think the cure of preference is worse than the problem itself. It does seem clear that corporate America remains generally aligned in the camp that systemic racism in business exists and is a bad thing; thus the real issue is how best to implement its eradication.

With that premise as grounding, the most interesting argument about how best to execute involves the recognition that hiring from the best schools, or hiring the people with the best grades, rigorously applied, tends to perpetuate racial bias.  It was suggested that companies must recognize that students without the educational and economic advantages of majority populations will not get into the better schools and will not test higher, and thus will look like less attractive hires or promotion candidates.  It is, therefore, best to identify a band of favorable candidates, taking into account all factors, including the challenges faced and overcome by those from less advantaged backgrounds, and then consciously hire widely but only within that expanded “band” of candidates.

Other key take-aways: it is essential for the board to consider the willingness of the CEO to undertake this effort, and essential for the board to “have the CEO’s back” in support; perhaps make racial equity part of the corporate mission.  Companies without strong CEO buy-in fail in the fight against racism within. CEOs and leadership teams need to understand that the effort is “not about you” and your own initial views, but rather in furtherance of corporate goals. This thinking needs to exist also in the Nominating and Governance Committee, which screens the board members who ultimately hire the CEO and set the corporate tone.

Companies also need to be aware, in dealing with supplier cohorts and business partners, whether they too are in the fight.  And when professional teams pitch a corporation for services, the diversity of the pitch team should be considered.

Finally, the panel noted that today there is some growing societal resistance to the effort to fight systemic racism, and also a cadre of companies which adopt a stand against racism but fail to execute on the ground.

:::::::::::::::::::::::::::::::

Aside from the moral perspective, do diverse teams result in better corporate performance?  This morning, as I write the above post, I note in one of my legal information services a report claiming improved corporate business performance where there is at least gender diversity; a post will follow if you are interested.

Update: Federal Regulation of AI

On October 30, the US Administration issued a detailed, 54-page Executive Order setting forth a regulatory scheme to control the development, use, safety and accuracy of AI.  The Order fulfills the promise contained in the July Administration release assuring the public that the leading US AI companies had agreed to cooperate and lead in such an effort.

The new Order is so detailed that it does not lend itself to summary in a blog post, and no doubt there will be dense press and other coverage of its details.  The overall takeaways are these:

The Order reads like legislation.  It is 54 pages long, with four pages of definitions.  It charges agencies of the Federal government to take affirmative action within specific timelines (often within 90 to 270 days) to assure the following:

* wide use of databases so that AI is less likely to be biased;

* establishment of incentives for small and emerging companies to focus on, and be funded for focus on, AI;

* protection of US security, infrastructure and personal information, with specific focus on child protection;

* streamlining of patent standards, and of criteria for admission to the US of AI experts, to drive innovation aimed at US primacy worldwide in AI development;

* protection of US workers, including measuring negative impact on the workforce and training workers for new job opportunities.

Specific requirements are placed on specific US government individuals and agencies: cabinet heads of departments and others below cabinet rank.

A White House AI Council is established to monitor all of the above, consisting of 12 cabinet officers, the US Attorney General, and the directors of key government agencies (NSF, OMB, etc.).

From a VC standpoint, there will no doubt be a plethora of companies seeking and obtaining government support in the race to develop and make safe the AI used by the government and by the economy generally.  VCs will need to obtain sophisticated expertise in AI to sort through the contenders when making their “bets.”

From an overall viewpoint, moving away from the details, here are some things to watch:

This is a piece of legislation in the form of an executive order: granular, detailed, and calling for long-term government action and funding.  I believe it is structured in a way intended to move forward without Congressional review or approval.  I never express legal opinions in these posts, and I particularly offer no view as to whether this Order exceeds Presidential authority.  Nor do I speculate as to the success of any challenge mounted against its provisions, although frankly the Order covers so much that I expect there will be such challenges, particularly with respect to expenditures.

Moving from speculation into quicksand, I wonder how the Republican leadership will react.  There is indeed much in the Order declaring a desire to make US business the world leader in AI, but there are many other elements that one might suspect do not fit squarely into current Republican leadership thinking: coordination with international standards; the scope of labor protections; a focus on making sure that the AI landscape is sensitive to gender rights, DEI and civil rights generally.  One branch of Republican thinking, in my personal opinion, tends to be averse to strong Federal activity in such matters.

And, more generally, this Order is a “big government initiative.”  Persons averse to strong governance of markets, favoring a shrinking government that leaves things to corporate America, will find much here to make them nervous about the comprehensiveness of the program set forth in the Order.  While it is unthinkable to have State-by-State regulation of AI, putting the Federal alternative on paper, as the Order does, presents a blueprint for huge Federal action and regulation of myriad aspects of investment, business, labor and social policy.

I am quite sure that my law firm, Duane Morris, will be posting alerts from time to time about various aspects of government action driven by this Order.  May I invite you to go onto our firm website to receive these updates?  We have a robust practice group in AI, designed to keep our clients and friends updated on significant developments in law and regulation.  http://www.duanemorris.com

(To repeat the obvious and for avoidance of doubt, the legal and political views and content of my posts are personal and do not reflect the views of Duane Morris, LLP.)

Law Firms Gun-Shy on AI

Law firms are slow to adopt AI in their business models; perhaps they are too aware of the risks, particularly in a profession where keeping secrets, concerning individuals or businesses, is a constant focus.

A just-published joint study by Salesforce and Litify surveyed law firms of various sizes and practice areas about their AI use. While it is recognized that some day AI will be widely utilized in the provision of legal services, a palpable majority of lawyers reported that the industry is just not ready to tackle the AI revolution. Reasons included the risk to privacy of information and the lack of staff capability to use AI effectively or safely.

It seems that, in fact, many lawyers already are using AI without specific focus on that fact, and seemingly in areas that may or may not implicate data security. Usage increases with the size of the law firm, suggesting that larger practices must coordinate more people and information and can apply more budget to AI solutions.  But that does not mean that larger firms use AI significantly in providing actual advice or legal documentation. Of those firms focusing on their actual AI use, 95% do report some savings of time.

Interestingly, in-house legal teams are ahead of outside counsel in use of AI, perhaps because the corporate setting is more used to installing new technologies and more comfortable funding that effort.

Overall, AI seems most applicable to document preparation and to reviewing or summarizing documents or evidence.  There is awareness that AI can hallucinate and invent laws or court cases, so that is one area where lawyers are moving most cautiously; in one famous case a lawyer was held accountable for filing a litigation brief citing court decisions that simply did not exist.  It seems that using AI as a tool requires the human touch at the end to keep the process honest.

Ultimately it seems inevitable that AI will improve, that controls on accuracy and security will become more robust, and that law firms will find themselves deep into AI dependence, as is likely the lot of all commerce.  Prospectively, the report does show optimism that in certain areas of practice AI ultimately will be a major contributor to accuracy, to the reduction of legal costs, and to affording access to legal services.

This practitioner will not live to see your company’s law robot appear in court and cross-examine an adverse witness, and I cannot imagine how a judge could find a robot in contempt of court for asking too many leading questions, but technology is sort of like a leaking basement: pretty soon it finds its way into every corner.

What Businesses Need to Do Today About AI

Virtually all businesses, whether they focus on this fact or not, are affected by or are actually using GAI (generative AI) right now, and incurring possible legal exposure.  The involvement and thus potential risk is broad.

I know this sounds like the usual lawyer hysteria whenever something new hits the business world, followed by the admonition that you had better lawyer up right away.  The problem today is that the warning I am giving you is, this time, subtle but real, perhaps hidden but nonetheless significant and without an easy fix.

Summary: GAI, as provided by many vendors including OpenAI, which offers the most ubiquitous service, ChatGPT, is not generally policed by Federal law, is policed lightly and unevenly by certain States and cities, and is embedded in many things that affect your company:

* meetings you attend online are conducted by services which take notes for you, and those notes, which may contain business secrets or personal information, get dropped into the databases that educate AI; information you want to keep private may pop up as an answer to someone else’s question posed to an AI engine

* your vendors provide advice or product designs which, unknown to you, were generated by AI; that output is often inaccurate, and it may contain information or images scooped up by your vendor’s AI in violation of someone else’s copyrights or protected trademarks

* your contracts do not address allocation of, or protection from, risk if you are sued for infringement, disclosure of personal information or breach of contract, whether because of what you have inadvertently done or what your supplier has inadvertently done–and your insurance is not geared to protect you from such risks

* your HR people, or the firms advising them, may be screening employees using AI which contains implicit bias as it reviews resumes, sets salary recommendations, or uses recognition AI to screen the facial reactions of candidates, in violation of labor laws

* an AI-generated face or group of faces used in your publicity or advertising may be based on a well-known face in the public realm to such a degree that it is confusingly identified as that public figure

This is just a smattering of a highly complex set of issues.  These risks are not speculative; there are lawsuits today making claims, which come as a complete surprise, against inadvertently liable companies.  There are steps you can take, in terms of internal practices, contracts, insurance, disclaimers and policies, to protect your business.  How do you navigate these issues today?

Answer: While I do not generally tout my firm’s materials (yes, we are a great firm, but the purpose of this blogsite is to alert readers, not to sell our services), I have just seen the first of a series of one-hour presentations by members of the Duane Morris AI practice group; it is enormously granular and educational, and replete with slide decks to help you absorb information that is coming at you from unexpected directions.

In my original post of the above, I concluded by saying: “I hope in a few days to obtain and share a link to this presentation.”  Turns out, due to mechanics, I can send you the link but must do so by email.  If you would like a forward, please email me at the below and I will send the link to you:

[email protected]

US Attacks Roll-Ups Under Anti-Trust Laws

As part of the Administration’s expanding anti-trust activity, which includes newly proposed Guidance and changes to pre-merger filings under Hart-Scott-Rodino for larger transactions, the FTC for the first time is going to court to make anti-trust claims against companies set up by Private Equity funds as vehicles for effecting roll-ups of competitors.

A step back: generally, anti-trust claims arise from a transaction involving two entities large enough to trigger required disclosure to the Feds.  Since roll-ups typically involve an acquired party small enough to fall below the filing threshold, each transaction is invisible to the regulators.  So the government now is claiming that, regardless of the size of any given acquisition, the overall roll-up plan violates the anti-trust statute that protects against unfair methods of competition.  This approach is said to be directed today primarily against life science and health care roll-ups.

In a recent and novel lawsuit, the FTC attacked an effort to roll up anesthesiology practices in parts of one State, Texas.  Why did the FTC pick this particular set of roll-up transactions to establish principles it can use in future cases?  The lawyer take-away: because the company’s own records revealed that the sole purpose was to keep prices high, not to create better patient service or to create savings or economic benefits that could be passed on to patients or doctors.  As one of my partners commented: “your statements and documents matter.”

In this case, company documents made clear that the goal was to increase prices. The company doing the roll-up also made agreements with non-rolled-up providers in key areas that those providers would not price services below what the roll-up would charge.  The lawyer take-away thus is: PE funds, in the literature they use to raise capital or to attract would-be acquisition targets, must show benefits to be derived other than increased financial profit driven by de facto monopoly.

My view is that while in this particular case the PE’s company left a variety of sexy smoking guns lying around, which no doubt tempted the FTC to claim unfair competition, the strict legal analysis cuts close against typical roll-up transactions generally.  The de facto result of a roll-up is efficiency that will in fact drive profit, and every one of these deals will have financial projections showing profit from that efficiency; investors will see those projections and will be induced to fund the roll-up by reason of profit.  Even if the roll-up documentation also states that the deal is intended to reduce costs for patients or raise fees for underpaid staff, the reality of profit through efficiency will be evident.

Thus I take the fundamental lesson here to be that, for the first time, the government is moving against the model itself.  Deals not large enough singly to trigger mandatory anti-trust reporting now can be aggregated and attacked as unfair competition. Attention to the tone and content of all related disclosures and documents may be a practical escape hatch in some instances; but what is important here is that the Feds are attacking one of today’s fundamental business practices.