Update: Federal Regulation of AI

On October 30, the US Administration issued a detailed 54-page Executive Order setting forth a regulatory scheme to control the development, use, safety and accuracy of AI. This Order fulfills the promise contained in the July Administration release assuring the public that the leading US AI companies had agreed to cooperate and lead in such an effort.

The new Order is so detailed that it does not lend itself to summary in a blog post, and no doubt there will be dense press and other coverage of its details. The overall takeaways are these:

The Order reads like legislation. It is 54 pages long with four pages of definitions. It charges agencies of the Federal government to take affirmative action within specific timelines (often within 90 to 270 days) to assure the following:

* wide use of databases so that AI is less likely to be biased;

* establishment of incentives for small and emerging companies to focus on, and be funded for focus on, AI;

* protection of US security, infrastructure and personal information;

* specific focus on child protection;

* streamlining of patent standards, and of criteria for admission to the US of AI experts, to drive innovation aiming at US primacy worldwide in AI development.

Other sections require protection of US workers, measurement of negative impact on the workforce, and training to open new job opportunities. Specific requirements are placed on specific US government individuals and agencies: cabinet heads of departments and others below cabinet rank.

A White House AI Council is established to monitor all of the above, consisting of 12 cabinet officers, the US Attorney General, and directors of key government agencies (NSF, OMB, etc.).

From a VC standpoint, there will no doubt be a plethora of companies seeking and obtaining government support in the race to develop and make safe the AI used by the government and by the economy generally.  VCs will need to obtain sophisticated expertise in AI to sort through the contenders when making their “bets.”

From an overall viewpoint, moving away from the details, here are some things to watch:

This is a piece of legislation in the form of an executive order, granular and detailed and calling for long-term government action and funding.  I believe it is structured in a way intended to move forward without Congressional review or approval.  I never express legal opinions in these posts and I particularly offer no view as to whether this Order exceeds Presidential authority.  Nor do I speculate as to the success of any challenge mounted to its provisions, although frankly I expect that the Order covers so much that there will be such challenges, particularly with respect to expenditures.

Moving from speculation into quicksand, I wonder how the Republican leadership will react. There is indeed much in the Order declaring a desire to make US business the world leader in AI, but there are many other elements that one might suspect do not fit squarely into current Republican leadership thinking: coordination with international standards; scope of protection of labor; focus on making sure that the AI landscape is sensitive to gender rights, DEI and civil rights generally. One branch of Republican thinking, in my personal opinion, tends to be averse to strong activity by the Federal government in such matters.

And indeed and generally, this Order is a "big government initiative." Persons averse to strong governance of markets, who favor the shrinking of government and leaving things to corporate America, will find much to make them nervous about the comprehensiveness of the program set forth in the Order. While it is unthinkable to have State-by-State regulation of AI, putting the Federal alternative on paper, as the Order does, presents a blueprint for huge Federal action and indeed regulation of myriad aspects of investment, business, labor and social policy.

I am quite sure that my law firm, Duane Morris, will be posting alerts from time to time about various aspects of government action driven by this Order.  May I invite you to go onto our firm website to receive these updates?  We have a robust practice group in AI, designed to keep our clients and friends updated on significant developments in law and regulation.  http://www.duanemorris.com

(To repeat the obvious and for avoidance of doubt, the legal and political views and content of my posts are personal and do not reflect the views of Duane Morris, LLP.)

 

Posted in AI

Law Firms Gun-Shy on AI

Law firms are slow to adopt AI in their business models; perhaps they are too aware of the risks, particularly in a profession where keeping secrets, whether of individuals or businesses, is a constant focus.

A just-published joint study by Salesforce and Litify surveyed law firms of various sizes and practice areas about their AI use. While it is recognized that some day AI will be widely utilized in the provision of legal services, a palpable majority of lawyers reported that the industry was just not ready to tackle the AI revolution. Reasons included risk to privacy of information and lack of staff capability to use AI effectively or safely.

It seems that, in fact, many lawyers already are using AI without specific focus on that fact, and seemingly in areas that may or may not implicate data security. Usage increases with the size of the law firm, suggesting that larger practices must coordinate more people and information and can apply more budget to AI solutions. But that does not suggest that larger firms in fact use AI significantly in providing actual advice or legal documentation. Of those firms focusing on their actual AI use, 95% do report some savings in time.

Interestingly, in-house legal teams are ahead of outside counsel in use of AI, perhaps because the corporate setting is more used to installing new technologies and more comfortable funding that effort.

Overall, AI seems most applicable to document preparation and to reviewing or summarizing documents or evidence. There is awareness that AI can hallucinate and invent laws or court cases, so that is one area where lawyers are moving most cautiously; there is one famous case in which a lawyer was held accountable when he filed a litigation brief citing court decisions that simply did not exist. It seems that using AI as a tool requires the human touch at the end to keep the process honest.

Ultimately it seems inevitable that AI will improve, controls as to accuracy and security will become more robust, and law firms will find themselves deep into AI dependence, as is likely the lot of all of commerce. Prospectively, the report does show optimism that in certain areas of the practice AI ultimately will be a major contributor to accuracy, reduction of legal costs, and access to legal services.

This practitioner will not live to see your company’s law robot appear in court and cross-examine an adverse witness, and I cannot imagine how a judge could find a robot in contempt of court for asking too many leading questions, but technology is sort of like a leaking basement: pretty soon it finds its way into every corner.

Posted in AI

What Businesses Need to Do Today About AI

Virtually all businesses, whether they focus on this fact or not, are affected by or are actually using GAI (generative AI) right now, and incurring possible legal exposure.  The involvement and thus potential risk is broad.

I know this sounds like the usual lawyer hysteria whenever something new hits the business world, followed by the admonition that you had better lawyer up right away.  The problem today is that the warning I am giving you is, this time, subtle but real, perhaps hidden but nonetheless significant and without an easy fix.

Summary: GAI, as provided by many vendors including OpenAI, which offers the most ubiquitous service, ChatGPT, is not generally policed by Federal law, is policed lightly and unevenly by certain States and cities, and is embedded in many things that affect your company:

* meetings you attend online are conducted by services which take notes for you, and those notes, which may contain business secrets or personal information, get dropped into the databases that educate AI; information you want to keep private may pop up as an answer to someone else's question posed to an AI engine

* your vendors provide advice or product designs which, unknown to you, were generated by AI, which is often inaccurate and which also may contain information or images that were scooped up by your vendor's AI in violation of someone else's copyrights or protected trademarks

* your contracts do not address allocation of, or protection from, risk if you are sued for infringement, disclosure of personal information or breach of contract, whether because of what you have inadvertently done or what your supplier has inadvertently done, and your insurance is not geared to protect you from such risks

* your HR people, or the firms advising them, may be screening employees using AI which contains implicit bias as it reviews resumes, sets salary recommendations, or uses recognition AI to screen facial reactions of candidates, in violation of labor laws

* an AI-generated face or group of faces used in your publicity or advertising may be based on a well-known face in the public realm to such a degree that it is confusingly identified as that public figure

This is just a smattering of a highly complex set of issues. These are not speculative; there are lawsuits today making claims, which come as a complete surprise, against companies that are inadvertently liable. There are steps you can take, in terms of internal practices, contracts, insurance, disclaimers and policies, to protect your business. How do you navigate these issues today?

Answer: While I do not generally tout my firm’s materials (yes, we are a great firm but the purpose of this blogsite is to alert readers, not sell our services), I have just seen the first of a series of one-hour presentations by members of the Duane Morris AI practice group that is enormously granular and educational, and is replete with slide decks to help you absorb information that is coming at you from unexpected directions.

In my original post of the above, I concluded by saying: “I hope in a few days to obtain and share a link to this presentation.”  Turns out, due to mechanics, I can send you the link but must do so by email.  If you would like a forward, please email me at the below and I will send the link to you:

[email protected]

 

 

Posted in AI

US Attacks Roll-Ups Under Anti-Trust Laws

As part of the Administration's expanding activity in anti-trust, which includes newly proposed Guidance and changes in pre-merger filings under Hart-Scott-Rodino for larger transactions, the FTC for the first time is going to court to make anti-trust violation claims against companies set up by Private Equity funds as vehicles for effecting roll-ups of competitors.

A step back: generally, anti-trust claims arise by reason of a transaction involving two entities large enough to trigger required disclosure to the Feds. Since roll-ups typically involve an acquisition target that is small, below the threshold of required filings, the transaction is invisible to the regulators. So the government now is claiming that, regardless of the size of any given acquisition, the overall roll-up plan violates the anti-trust statute that protects against unfair methods of competition. This approach is said to be directed today primarily against life science and health care market roll-ups.

In a recent unique lawsuit, the FTC attacked an effort to roll up anesthesiologists in parts of one State, Texas. Why did the FTC attack this particular set of roll-up transactions to establish principles it can use in future cases? The lawyer take-away: the company's own records revealed that the sole purpose was to keep prices high, not to create better patient service or to create savings or economic benefits that could be passed on to patients or doctors. One of my partners commented: "your statements and documents matter."

In this case, company documents made clear that the goal was to increase prices. And the company doing the roll-up also made agreements with non-rolled-up providers in key areas that those providers would not under-price services below what the roll-up would charge. The lawyer take-away thus is: PE funds, in the literature they use to raise capital or to attract would-be acquisition targets, must show benefits to be derived other than increased financial profit driven by de facto monopoly.

My view is that while in this particular case the PE's company left a variety of sexy smoking guns lying around, which no doubt tempted the FTC to claim unfair competition, the strict legal analysis cuts very close against typical roll-up transactions. The de facto result of roll-up operations is efficiency that will in fact drive profit, and every one of such deals will have financial projections showing profit from such efficiency; investors will see these projections and will be induced to fund the roll-up by reason of that profit. If roll-up documentation shows profits, even if it also states that the deal is intended to reduce costs for patients or fees for underpaid staff, the reality of profit through efficiency will be evident.

Thus I assume that the fundamental lesson here is that for the first time the government is moving against the model. Deals not large enough singly to trigger mandatory anti-trust reporting now can be aggregated and thus attacked as unfair competition. Attention to the tone and content of all related disclosures and documents may be a practical escape hatch in some instances; but what is important here is that the Feds are taking action that attacks one of today's fundamental business practices.

AI Due Diligence in Acquisitions

If you are acquiring another business, or if someone is trying to buy your own company, what questions will be asked as part of acquirer due diligence?  The list is a matter of common sense; although AI is complex and troublesome, the questions are clear:

  1. May I have an inventory of Company use of AI in any form, whether Company-developed or sourced from a third party?
  2. May I have a copy of all internal Company policies concerning use of AI, including those relating to AI embedded in any software sourced from third parties?
  3. May I please see your diligence reports on AI conducted relative to all third parties whose software interacts with the Company, and those third parties' internal policies relative to the Company information they maintain?
  4. Does your HR department use any AI in hiring? Same information for hiring agencies you use.
  5. May we see all materials used by the Company to provide training to employees about AI?
  6. May we have copies of all policies or riders from third-party insurers relating to claims based on AI?
  7. Do you have any information relating to the possibility that your Company data is in the possession of any trade association, marketing group or other party and may be subject to being scraped by third parties?
  8. Are you party to any litigation, or have you received any communications or governmental contacts, claiming possible or actual breach of your data systems?
  9. List all third-party entities which have access to any part of your Company computer records or information.

By the way, the above also provides your business with a list of questions you should be asking yourself, so that you are not at risk from AI issues during your own independent operations. Do you have an identifiable person with a job description relating to AI risks? Does your Board have someone with sensitivity to these issues; does management have in-house expertise or a third-party AI consultant? Does your Risk Committee (or other committee, such as F&A, which has been assigned risk responsibility) have a grasp of AI risks, and does the risk analysis avoid departmental informational silos and provide for all reports touching AI to be considered together, holistically?

No need to panic, but–no time to be asleep.

Mandatory Federal Filing Requirements for All Businesses

The Federal Government requires virtually all businesses formed in the United States before this year-end, regardless of form of entity and regardless of line of enterprise, to register during 2024 and disclose all owners; if you form a new business after the first of the year, you will have 30 days to file.  Failure to do so can be criminal and in any event non-compliance carries a $500/day civil penalty.

The reason is to fight crime and money-laundering. The fact that you are not a crook obviously does not protect you: the Feds know lots of things but do not know (obviously, since Congress needed to pass this law) whether you are an honest operation.

I know this seems weird, unlikely, a bother. You would be correct on all counts. If I were a crook, I am already on the wrong side of the law, so perhaps I am not going to register. Or do I register, figuring that I am hidden within the estimated 32,000,000 initial filings?

Hopefully the below link will connect you to my firm’s initial alert regarding these filings; and you can sign up for our alerts and then you will be updated as needed.  Email me if you are having trouble or have any current questions.

https://www.duanemorris.com/alerts/you_cant_hide_corporate_transparency_act_dragnet_0923.html

AI Driving Biotech Innovation

Today's Boston Business Journal contains a fascinating article about a local company, Generate Biomedicines, now utilizing machine-learning software to create "purpose-built proteins capable of performing any desired function" through protein sequencing. Although this stated result may prove over-broad, the goal is to invent proteins addressing a wide spectrum of diseases. They now have a Phase 1 trial running on a monoclonal COVID-19 antibody.

That kind of ambition (and performance) has attracted a partnership with Amgen and a just-concluded Series C raise of $273M.  AI-based medical initiatives not surprisingly can attract significant financial support, even at a relatively early stage.

Posted in AI

Near-Term in American Business: Predictions by CEOs

CEOs in the defense industry, hospitals and real estate identified the challenges before their businesses at a panel presentation today sponsored by the National Association of Corporate Directors–New England. (Disclosure: I am on the Advisory Board of, and Secretary to, NACD–New England.)

The speakers: Gregory Hayes, Chair and CEO of RTX (Raytheon and two other major defense/aeronautics manufacturers, with c $75B annual sales and 186,000 employees); Anne Klibanski, CEO of Mass General Brigham; and Ben Breslau, chief research officer of JLL (worldwide real estate company engaged inter alia in planning and finance of major realty assets).

Points of general agreement:

AI will impact everything at an accelerating pace, and this time next year you will be surprised to learn that this is a "today" issue: disruption of the labor force, need for retraining, greater efficiencies impacting how things are manufactured, how manufactured products operate, and what sorts of real estate assets are needed (and not needed).

The labor force is a major player. RTX lost c 24,000 workers last year and hired more than 34,000; labor markets are fluid, the fall-out of COVID has reshaped needs for real estate and the ability to hire for particular roles, senior managers are mobile and particularly difficult to find and hold, and costs of living in major cities for labor (as well as companies) tend to drive people and businesses to lower-cost venues (the South; Iowa [one of RTX's 3 major components is in Iowa]).

The elephant not in the room: no one used the “R” word.  When I reported back to my corporate department after the meeting that “recession” was not discussed, there was interest in this absence of mention, given the continuing though not universal drum-beat about this national and global risk.

Medical Care Highlights

The system of health care delivery in the US is broken. We remain somewhat stuck in the financial model of "fee for service." What we need is a structure that pays for a continuum of care with better long-term outcomes: preventative services, care delivered promptly by Zoom or similar, clinics in areas away from a centralized hospital, support for delivery of care within the home itself. Some types of care are already on that model: c 90% of mental health care today is delivered online. Medicine needs to come to the patients.

 

Real Estate Highlights

This discussion touched issues as to which the anecdotal evidence is general knowledge.

While people are returning to the office, we do not know how far that trend will go. (Per RTX, a huge percentage of their people are remote [40%] and many are hybrid-lite, which of course impacts space needs.) In cities, new properties with amenities are much more popular than older buildings. Older city buildings need to be repurposed, used for housing perhaps, but maybe demolished; non-use and high interest rates drive property values down to the value of the ground alone, so you demolish. Interest rates will stabilize at some point and at some future time likely drop, but specifics were unclear. In a year, banks will clean up their finances, and with rates stabilizing some values may come up. Grade B, C and even A-minus city properties, and malls anywhere, are at risk and are candidates for repurposing or demolition.

Aviation, Defense Contracting, Space (RTX lines of business)

Air traffic will increase radically in the next decade. (Still, 80% of Americans have never been on a plane!) We lack pilots now, and it will get worse. Federal air traffic control is outdated and does not use modern technology. The result is going to be very bad; it must use AI. We need to install new ATC technology, a multi-year $10B project. The FAA was without a chair for three years until recently, which has been a delay. Main reasons for flight delays: lack of pilots, and ATC's lack of AI technology, which is here today and could easily reroute many delayed flights. Planes of the future do not need pilots. The government does not know how to certify a self-flying aircraft, so it insists on pilots; passengers of course are also wary. One pilot on the ground can control many aircraft with AI. Ideal on-board flight crew of the future: either no human, or one human and a dog (the dog keeps the pilot from touching any of the controls).

War in Ukraine: RTX products shoot down long-range and near-range missiles/drones.  90%+ success with missiles, 98% success with short-range objects (<5 miles).  War is going to last 2-3 more years.

China: the tension point is Taiwan, which is 100 miles from the China shore. China has tech we do not have, super-high-speed air weapons (a mile a second in some cases) that we do not have the technology to intercept; we are working on it. The goal of some Chinese weapons: to take out US military capacity at Pearl Harbor before it can be deployed, in case China makes a move, so that what the US then could do would be too late (sound familiar?). This last section on China was something that neither I nor the folks with me had ever heard before.

These final two paragraphs affect supply chain design, onshoring and near-shoring (RTX has 14,000 discrete suppliers), but also have geopolitical ramifications that are unsettling in both economic and personal (risk of major war) terms.

 

Board Composition in the Coming Years

This is the first, brief post based on this morning’s program at  Boston’s Seaport Hotel sponsored by National Association of Corporate Directors–New England.  (Disclosure: I am on the Advisory Board to, and Secretary of, NACD-New England).

At the end of a program during which three major business leaders opined as to the near-term future of aspects of American business, an audience question in substance was this: “Given all the changes and issues upcoming, what should a company be looking for when building out its board?”

Answer: an expert in "disruption." Given the numerous factors mentioned (AI, the state of the world, upheaval in labor markets and in real estate, interest-rate risk, geopolitical threats), it would be best to add someone who can think about the future in terms of the unexpected. Implicit in this advice is that current board members, from an inside vantage point and often with deep knowledge in only one or two areas (financial, industry, science, HR, geopolitics, whatever), may miss how multiple factors come together and create a problem not identified in any one information silo.

In the matrices for analysis of board needs that I have seen, listing board gaps to be filled, I have seen "risk" mentioned, but I have never seen a board need articulated almost in a sci-fi manner; the specific word used was someone who thinks in "scenarios." Without second-guessing a distinguished panel (everyone well above my pay grade, for sure!), I do wonder how that putative board member would fit into the tone and congeniality that boards strive for in order to avoid needless diversions and frictions.

For an anecdotal summary of several interesting highlights from the panel discussion, see the next following post.

SEC vs. NFT

The law relating to nonfungible tokens continues to limp along without clear guidance.  In the past, the SEC has taken the general position that if you sell an NFT based on the express or implied promise of making a profit by reselling it, you are selling a security, even if the form of the NFT is a drawing or work of art.  And the SEC has levied fines on celebrity endorsers of such offerings who receive undisclosed direct or indirect fees for advertising the merits of buying that NFT.

Without presuming to be able successfully to explain the complex law here, the whole legal issue is polluted (or informed, depending on how you see it) by a 1946 U.S. Supreme Court case ("Howey") that held that selling interests in a physical orange grove was the sale of a security because the "investor" relied on the efforts of the seller to run the grove for a profit and share that profit with the investor. If you rely for profit on the effort of a third party, whatever you call the investment vehicle, it is still akin to buying a share of stock and relying on the corporation to run a business and share the profit with shareholders.

Particularly where the proceeds of the NFT are to be used by the issuing company to build an asset that creates a potential profit, the SEC has taken the position that this is the same as investing in a company in hopes that shares of stock can be resold at a profit.

In an August enforcement action, the SEC forced a $6 million settlement with a company issuing NFTs to fund the building of a media platform, and this month it levied a million-dollar fine against a company developing a show.

SEC Chair Gensler has been unwilling to release clear guidance to the marketplace as to what precisely the SEC considers a security, and such an SEC position is not unusual: the forms of deals (whether NFTs or other offerings) are varied and fluid, and the SEC often has waited years to see what is happening in the marketplace before giving guidelines. However, since many NFTs represent efforts to raise money for artistic projects, the question is whether the SEC's lack of guidance will stifle artistic creativity. The definitional issues here are beyond both the scope of this post and the scope of clear understanding.

Finally and predictably, the two-person Republican minority on the SEC has dissented with respect to such above-referenced enforcement efforts, claiming that “[r]ather than arbitrarily bringing enforcement actions…, we ought to lay out some clear guidelines for artists and other creators who want to experiment with NFTs as a way to support their creative efforts and build their fan communities.”