AI Due Diligence in Acquisitions

If you are acquiring another business, or if someone is trying to buy your own company, what questions will be asked as part of acquirer due diligence?  The list is a matter of common sense; although AI is complex and troublesome, the questions are clear:

  1. May I have an inventory of Company use of AI in any form, whether Company-developed or sourced from a third party?
  2. May I have a copy of all internal Company policies concerning use of AI, including those relating to AI embedded in any software sourced from third parties?
  3. May I please see your diligence reports on AI conducted with respect to all third parties whose software interacts with the Company, and the policies of those third parties governing the Company information they maintain internally?
  4. Does your HR department use any AI in hiring? Please provide the same information for any hiring agencies you use.
  5. May we see all materials used by the Company to provide training to employees about AI?
  6. May we have copies of all policies or riders from third party insurers relating to claims based on AI?
  7. Do you have any information suggesting that your Company data is in the possession of any trade association, marketing group or other party, where it may be subject to scraping by third parties?
  8. Are you party to any litigation, or have you received any communications or governmental contacts, claiming possible or actual breach of your data systems?
  9. List all third party entities that have access to any part of your Company computer records or information.

By the way, the above also provides your business with a list of questions you should be asking yourself so that you are not at risk from AI issues in your own independent operations.  Do you have an identifiable person with a job description covering AI risks?  Does your Board have someone with sensitivity to these issues, and does management have in-house expertise or a third party AI consultant?  Does your Risk Committee (or other committee, such as F&A, which has been assigned risk responsibility) have a grasp of AI risks, and does the risk analysis avoid departmental informational silos so that all reports touching AI are considered together, holistically?

No need to panic, but no time to be asleep.

Mandatory Federal Filing Requirements for All Businesses

The Federal Government requires virtually all businesses formed in the United States before this year-end, regardless of form of entity and regardless of line of enterprise, to register during 2024 and disclose all owners; if you form a new business after the first of the year, you will have 30 days to file.  Failure to file can be criminal, and in any event non-compliance carries a $500-per-day civil penalty.

The reason is to fight crime and money laundering.  The fact that you are not a crook obviously does not protect you: the Feds know lots of things, but they do not know (obviously, since Congress needed to pass this law) whether yours is an honest operation.

I know this seems weird, unlikely, a bother.  You would be merely correct.  If I were a crook, I would already be on the wrong side of the law, so perhaps I would not bother to register.  Or would I register, figuring that I would be hidden within the estimated 32,000,000 initial filings?

The link below should connect you to my firm’s initial alert regarding these filings; you can also sign up for our alerts and you will be updated as needed.  Email me if you are having trouble or have any current questions.

https://www.duanemorris.com/alerts/you_cant_hide_corporate_transparency_act_dragnet_0923.html

AI Driving Biotech Innovation

Today’s Boston Business Journal contains a fascinating article about a local company, Generate Biomedicines, which is using machine learning software to create “purpose-built proteins capable of performing any desired function” through protein sequencing.  Although this stated result may prove overbroad, the goal is to invent proteins addressing a wide spectrum of diseases.  The company now has a Phase 1 trial running on a monoclonal COVID-19 antibody.

That kind of ambition (and performance) has attracted a partnership with Amgen and a just-concluded Series C raise of $273M.  AI-based medical initiatives not surprisingly can attract significant financial support, even at a relatively early stage.


Near-Term in American Business: Predictions by CEOs

CEOs from the defense industry, hospitals and real estate identified the challenges before their businesses at a panel presentation today sponsored by the National Association of Corporate Directors–New England.  (Disclosure: I am on the Advisory Board of, and Secretary to, NACD–New England.)

The speakers: Gregory Hayes, Chair and CEO of RTX (Raytheon and two other major defense/aeronautics manufacturers, with approximately $75B in annual sales and 186,000 employees); Anne Klibanski, CEO of Mass General Brigham; and Ben Breslau, Chief Research Officer of JLL (a worldwide real estate company engaged, inter alia, in the planning and finance of major realty assets).

Points of general agreement:

AI will impact everything at an accelerating pace, and by this time next year you will be surprised at how much it has become a “today” issue: disruption of the labor force, the need for retraining, and greater efficiencies affecting how things are manufactured, how manufactured products operate, and what sorts of real estate assets are needed (and not needed).

The labor force is a major player.  RTX lost approximately 24,000 workers last year and hired more than 34,000; labor markets are fluid; the fall-out of COVID has reshaped both the need for real estate and the ability to hire for particular roles; senior managers are mobile and particularly difficult to find and hold; and the cost of living in major cities for labor (as well as for companies) tends to drive people and businesses to lower-cost venues (the South, Iowa [one of RTX’s three major components is in Iowa]).

The elephant not in the room: no one used the “R” word.  When I reported back to my corporate department after the meeting that “recession” was not discussed, there was interest in this absence of mention, given the continuing though not universal drum-beat about this national and global risk.

Medical Care Highlights

The system of health care delivery in the US is broken.  We remain somewhat stuck in the financial model of “fee for service.”  What we need is a structure that pays for a continuum of care with better long-term outcomes: preventive services, care delivered promptly by Zoom or similar tools, clinics in areas away from a centralized hospital, and support for delivery of care within the home itself.  Some types of care are already on that model: approximately 90% of mental health care today is delivered online.  Medicine needs to come to the patients.


Real Estate Highlights

This discussion touched on issues as to which the anecdotal evidence is general knowledge.

While people are returning to the office, we do not know how far that trend will go.  (Per RTX, a huge percentage of their people are fully remote [40%] and many are hybrid-lite, which of course impacts space needs.)  In cities, new properties with amenities are much more popular than older buildings.  Older city buildings need to be repurposed, perhaps used for housing, or maybe demolished; non-use and high interest rates drive property values down to the value of the ground alone, so you demolish.  Interest rates will stabilize at some point and at some future time will likely drop, but specifics were unclear.  In a year, banks will clean up their finances, and with rates stabilizing, some values may come up.  Grade B, C and even A-minus city properties, and malls anywhere, are at risk and are candidates for repurposing or demolition.

Aviation, Defense Contracting, Space (RTX lines of business)

Air traffic will increase radically in the next decade.  (Still, 80% of Americans have never been on a plane (!).)  We lack pilots now, and it will get worse.  Federal air traffic control is outdated and does not use modern technology; the result is going to be very bad.  It must use AI.  New ATC technology needs to be installed, a multi-year $10B project.  The FAA was without a permanent head for three years until recently, which has caused delay.  The main reasons for flight delays: the lack of pilots, and the fact that ATC lacks AI technology, which exists today and could easily reroute many delayed flights.  Planes of the future do not need pilots.  The government does not know how to certify a self-flying aircraft and so insists on pilots; passengers, of course, are also wary.  With AI, one pilot on the ground can control many aircraft.  The ideal on-board flight crew of the future: either no human, or one human and a dog (the dog keeps the pilot from touching any of the controls).

War in Ukraine: RTX products shoot down long-range and near-range missiles/drones, with 90%+ success against missiles and 98% success against short-range objects (<5 miles).  The war is going to last 2-3 more years.

China: the tension point is Taiwan, which is 100 miles from the Chinese shore.  China has technology we do not have, including super-high-speed air weapons (a mile a second in some cases) that we do not yet have the technology to intercept; we are working on it.  The goal of some Chinese weapons: to take out US military capacity at Pearl Harbor before it can be deployed, in case China makes a move, so that whatever the US could then do would be too late (sound familiar?).  This last section on China was something that neither I nor the folks with me had ever heard before.

The points in these final two paragraphs affect supply chain design, onshoring and near-shoring (RTX has 14,000 discrete suppliers), but they also have geopolitical ramifications that are unsettling in both economic terms and personal (risk of major war) terms.


Board Composition in the Coming Years

This is the first, brief post based on this morning’s program at Boston’s Seaport Hotel sponsored by the National Association of Corporate Directors–New England.  (Disclosure: I am on the Advisory Board of, and Secretary to, NACD–New England.)

At the end of a program during which three major business leaders opined on the near-term future of aspects of American business, an audience member asked, in substance: “Given all the changes and issues upcoming, what should a company be looking for when building out its board?”

Answer: an expert in “disruption.”  Given the numerous factors mentioned (AI, the state of the world, upheaval in labor markets and in real estate, interest rate risk, geopolitical threats), it would be best to add someone who can think about the future in terms of the unexpected.  Implicit in this advice is that current board members, from an inside vantage point and often with deep knowledge in only one or two areas (financial, industry, science, HR, geopolitics, whatever), may miss how multiple factors come together and create a problem not identified in any one information silo.

In the matrices I have seen for analyzing board needs and the gaps to be filled, “risk” is often mentioned, but I have never seen a board need articulated in an almost sci-fi manner; the specific word used was someone who thinks in “scenarios.”  Without second-guessing a distinguished panel (everyone well above my pay grade, for sure!), I do wonder how that putative board member would fit into the tone and congeniality that boards strive for in order to avoid needless diversions and frictions.

For an anecdotal summary of several interesting highlights from the panel discussion, see the next following post.

SEC vs. NFT

The law relating to nonfungible tokens continues to limp along without clear guidance.  In the past, the SEC has taken the general position that if you sell an NFT based on the express or implied promise of making a profit by reselling it, you are selling a security, even if the form of the NFT is a drawing or work of art.  And the SEC has levied fines on celebrity endorsers of such offerings who receive undisclosed direct or indirect fees for advertising the merits of buying that NFT.

Without presuming to be able to explain the complex law fully here, the whole legal issue is polluted (or informed, depending on how you see it) by a 1946 U.S. Supreme Court case (“Howey”) that held that selling interests in a physical orange grove was the sale of a security, because the “investor” relied on the efforts of the seller to run the grove for a profit and share that profit with the investor.  If you rely for profit on the efforts of a third party, then whatever you call the investment vehicle, it is still akin to buying a share of stock and relying on the corporation to run a business and share the profit with shareholders.

Particularly where the proceeds of the NFT are to be used by the issuing company to build an asset that creates a potential profit, the SEC has taken the position that this is the same as investing in a company in hopes that shares of stock can be resold at a profit.

In an August enforcement action, the SEC forced a $6 million settlement with a company issuing NFTs to fund the building of a media platform, and this month it levied a million-dollar fine against a company developing a show.

SEC Chair Gensler has been unwilling to release clear guidance to the marketplace as to what precisely the SEC considers a security, and such an SEC posture is not unusual: the forms of deals (whether NFTs or other offerings) are varied and fluid, and the SEC often has waited years to see what is happening in the marketplace before giving guidelines.  However, since many NFTs represent efforts to raise money for artistic projects, the question is whether the SEC’s lack of guidance will stifle artistic creativity.  The definitional issues here are beyond both the scope of this post and the scope of clear understanding.

Finally and predictably, the two-person Republican minority on the SEC has dissented from the above-referenced enforcement efforts, claiming that “[r]ather than arbitrarily bringing enforcement actions…, we ought to lay out some clear guidelines for artists and other creators who want to experiment with NFTs as a way to support their creative efforts and build their fan communities.”


New Copyright Case: AI-Generated Art Cannot Be Protected

I have previously posted about the issue of whether US copyright protection is available for art or text created by use of artificial intelligence.  On August 18, the United States District Court for the District of Columbia denied copyright protection for a piece of visual art where the applicant claimed copyright and listed his computer system as the author.  The applicant argued that, as owner of the computer system, any copyright held by the computer should transfer to him as owner of the machine.

Copyright was denied as the Court held that human authorship is essential for a valid copyright claim.

In this case the claimant invented his own programs designed to create art “akin to the output of a human artist.”  The piece in question was modestly entitled “A Recent Entrance to Paradise.”

It should be noted that as soon as a work of art or writing is created it has an automatic copyright, and the author can prevent others from using it; registration confirms that the work has had the benefit of copyright protection all along and permits damages to be obtained for unauthorized use.  If registration is formally denied, that means the work had no copyright protection from the moment it was created.

There is a bit of procedural law at work here also.  When you go to court to question an administrative decision of a governmental agency, the court does not start from ground zero to consider the merits.  Rather, the court must recognize the expertise and mission of the governmental agency and thus the “only question properly presented, then, is whether the Register [of copyright] acted arbitrarily or capriciously” or violated proper procedure in the process of reaching its decision.

The Court here reviewed the history of copyright and concluded that the denial was consistent with past practice, rejecting an analogy to protectable photographs created by machine, because in photographs there is substantial human input (the photographer’s “mental conception” of the result, the posing, the setting of the subject within the frame, the use of light and shadow).

The opinion is well written and has some rich historical interest, discussing the treatment of copyright in the Constitution and quoting James Madison.  The curious can access the case, Stephen Thaler v. Shira Perlmutter, Register of Copyrights, in the US District Court for the District of Columbia, Civil Action No. 22-1564 (BAH).


AI Regulation by the States?

In the face of the US federal government’s failure to actually regulate AI, the states are beginning to act on their own.

We have previously posted about the cautious pace of US Federal regulation of AI to limit risk and fraud, and speculated that such a hands-off approach to new technology is an American characteristic: let new tech flourish and do not interfere, as in the long run it will drive US primacy in the business and scientific worlds.  We have also noted that the EU has been far more aggressive in putting current regulation in place.

Noting the failure of the Feds to take action, the National Conference of State Legislatures recently published a report discussing what AI regulation should look like in financial services, healthcare and education.  In fact, fourteen states have already enacted some legislation relating to control of AI, and another fifteen have introduced legislation, much of it focused on setting up research committees to work on legislative initiatives.

Among areas of focus in the report: preventing use of AI in hiring due to fear of bias in the data relied upon; monitoring algorithms utilized in making decisions in regulated areas including education, housing and banking; protecting intellectual property rights, which can be trampled when AI sucks up information that is proprietary and incorporates it into machine analyses; establishing a common vocabulary for states to use in enacting legislation so there can be understanding and coordination.

This last suggestion of a common vocabulary, while facially logical, discloses the fundamental weakness of having separate laws in a multiplicity of jurisdictions: AI crosses borders continually; companies developing and using AI do not operate in or impact just one state; and for lawyers and businesspeople, trying to comply with multiple laws is expensive and also unwise.  What is needed is national legislation that preempts the field, replaces state laws and creates a uniform national standard.

Now, Congress could let a hundred flowers bloom at the state level, evaluate the impact and wisdom of these various schemes, and then adopt uniform laws that supersede inconsistent state regulation, having used state law temporarily as a laboratory for evaluating what is wise and what is not.  But that leaves chaos at the beginning, and resistance by states reluctant to abandon what their own legislatures have enacted and thus feel is simply too good to lose.

But between a propensity to move slowly in Federal tech regulation, and the complex problems presented by regulating AI, we may have no choice but to end up with a Balkanized AI regulatory landscape.



A Personal Note: A New Publication

Excuse this brief personal announcement: I have just published my seventh book, and my first novel.  It is an adventure novel based on the release of a weaponized bio agent and it can be ordered through Amazon (best found by searching my full name, Stephen M Honig).

If you order it and read it, do let me know what you think.  Thanks.

Corporate Directors Need to Be GAI-Aware

This post is a recommendation to corporate directors and C-level executives to take a look at the magazine Directorship, which is available to members of the National Association of Corporate Directors.  [Alert: I am a member of the New England Chapter of NACD and work on our educational programming.]  For those without access, see below for an outline of issues that should be on your agenda.

The recently released issue designated “Quarter Three 2023” has a cover story, “Artificial Intelligence–What Boards Need to Know,” that ought to be required reading for corporate directors.  Setting aside the point that anyone who sits on a Board has a fiduciary duty to be knowledgeable about major technological changes (and that NACD is a great way of fulfilling that duty), and setting aside the fact that public companies are responsible to public investors (as enforced by the SEC) for fully disclosing the uses, risks and impact of AI within a reporting company, simple logic tells all directors that, at a minimum, they need to watch carefully several areas:

The use of AI by your company and how it is controlled, to avoid errors in management, planning, marketing, procurement and labor matters;

The impact on the labor force, so that your workers are attuned to the use and misuse of AI, not to mention the impact on headcount and HR planning and the implicit hiring bias that can arise through use of certain AI-supported tools;

The legal liability and attendant cost of errors occurring by reason of misuse of AI, or of reliance without verification on actions based on AI research (the most powerful AI tools, GAI, are prone to invent answers on their own, called “hallucinations”);

Evaluation of AI through a cybersecurity lens.

Boards need either a member attuned to AI use and risks or an extrinsic source to guide them.  Underneath the hype and the scare scenarios, this is an informational and operational tool that will in fact drive progress and profit in successful companies, and the proper use of this tool within a company is going to be part of basic blocking and tackling for boards of the future.

I recommend either a board committee to deal with GAI issues or a clear charge to an existing board committee assigning responsibility and asking for periodic reports to the board on issues perceived; this could go, for example, to a Risk Committee or a Technology Committee.  I would not recommend sending this to Audit, as that committee is typically overburdened already and as the qualifications for serving on F&A are not necessarily congruent with being GAI-qualified.