SEC Focus on FDA-Regulated Companies

The SEC has a Task Force focused on companies subject to FDA regulation and clearances.  It should be remembered that while the SEC regulates, and monitors disclosure by, public companies, much of what it does also covers private enterprises which raise capital, including early-stage med-tech and bio companies.  The focus of SEC activity is to prevent misstatements and overstatements made to new investors or (for public companies) into the public marketplace.

Many investigations arise from inaccuracies in materials used by emerging companies in private fund-raising, whether for stock offerings or other “securities” such as SAFEs and convertible notes.  These misstatements can arise in connection with oral presentations, offering “decks,” private placement memoranda or other written material. This includes articles or studies prepared by third parties, based on misstatements or exaggerations made by companies and repeated by third-party sources.  Companies offering securities of any sort need to centralize and review disclosures of all types by all personnel, which includes scientists and researchers who may not be attuned to the impact and scope of the laws concerning the sale of securities.

Typical misstatements often relate to the science itself, the status and success of trials, the existence and scope of the customer base, the failure to clarify that certain “sales” are really beta tests, and particularly ambiguous or misleading statements concerning company status vis-a-vis FDA guidance or approval.

While one might expect that most complaints come from ultimately dissatisfied investors and arise many months (or years) later, overly enthusiastic senior management should note that very often the SEC is “tipped” by inside whistleblowers: statements about the company offerings can seem excessive to scientific employees, who are finely attuned to the granular accuracy of the underlying data.

Having company disclosure substantively reviewed by outside counsel prior to its use is, absent actual intent to defraud, an excellent way to protect against a criminal action based on material misstatement; the SEC may well take civil action and also require corrective disclosure or a rescission offer to investors, but such is far more palatable than a criminal charge.

Finally, and this relates to public companies, the SEC focuses on trading upon material non-public information, whether through formal trading programs (so-called 10b5-1 plans) or otherwise, and on trading in shares of companies other than one’s own based on insights obtained, for example, through inter-company collaboration or trade group disclosures.

With so much funding going into med-tech of all sizes, it is not surprising that the SEC has a specific Task Force in this vertical, and companies should be aware of the particular SEC focus, which does not create different rules for med-tech but does bring heightened scrutiny.

GAI Users Speak Up

Current press coverage is full of articles and columns extolling the advantages of AI in business.  These articles form a counterpoint to the scare scenarios which grabbed the early headlines.  The gist is that, at least in its current form, GAI can speed reasonably simple but time-consuming tasks.  Articles have appeared very recently in the NYTimes, the Boston Globe and the current issue of the Boston Business Journal.

The BBJ coverage is revealing, making an excellent case for the controlled utilization of ChatGPT by founders of early-stage companies.  The highlighted entrepreneurs, all in their mid-thirties, claim spectacular time savings in such tasks as: analyzing and comparing big-company franchise agreements against other franchise agreements to identify differences for clients; designing sales materials for digital platforms; building presentations; and using a chatbot as a stand-in customer to practice interactions.

Surely these are modest undertakings by modest enterprises, and surely large enterprises are or will be more intense users at higher levels of output.  It is unclear whether the fear of GAI “hallucinations” (inventions of facts) is a concern in applications such as those suggested here.  It seems that today’s users are relying on either comparing extant texts or phrasing, or modeling given data (e.g., inputting clear instructions and asking GAI to phrase, package and prepare for posting the facts and ideas given).

Then there was the article I read over the weekend, not sure where, about a person who had a chatbot write a get-well-soon poem to someone who was ailing.  Seems it was so literate and uplifting that the writer has since used the chatbot for many personal messages.  Perhaps the folks who write Hallmark cards should begin to worry…

And as to that last thought, given the nature of modern poetry, where fact and fiction free-associate without necessarily making a linear narrative, it sounds like the chatbot may be the next Walt Whitman.  As someone who has published five books of poetry and sits on the board of the New England Poetry Club, I personally have, indeed, begun to worry that one day we will give an annual poetry award (we grant several) to a circuit board.

Generative AI Marches On

Notwithstanding the call for slowing down GAI until controls can be applied, businesses are charging ahead, according to today’s NYTimes.  (I hate to say it, but per my immediately prior post, was the writer correct that the drive for profit trumps all other considerations?)

Major companies (Oracle, Salesforce, AT&T, Amazon) are providing GAI business products that: help engineers produce new code; create sales material and product descriptions for marketing; answer employee questions; summarize meeting notes and lengthy documents.

Further, Gartner reports over half of business customer-users have no internal policy on GAI usage, which is unsettling (I note a very small sample size, however).

Meanwhile, nothing is heard from Congress, and precious little beyond passing mention from the White House. In an earlier post, I noted that the tendency in the US, when it comes to new tech, is to let it advance unencumbered for some time, then step back and evaluate risk and the need for regulation based on what has actually happened “on the ground.”  This reflects the desire to foster the entrepreneurial spirit and to keep the US ahead of the world in new technologies.  While I have no direct line to either the Congress or President Biden (who falls into the vast category of people who have never bought me a beer), it seems that our government is going to let the GAI beast run wild for a bit longer, notwithstanding the various warnings voiced by the gamekeepers.

Posted in AI

GAI and Neoliberalism

You need to bear with me on this long post; skip it if you are not interested in a completely different analysis of GAI risk.

It has been a couple of weeks since I posted on GAI, during which time the public discourse has rehashed prior issues; a cover story in The Economist, a publication usually with something to add, managed to find no new ground.  Patient readers of the Sunday New York Times, however, might have come across an interesting new perspective buried on page 6 of the Opinion section: an economic critique of the risk of GAI from the neoliberal policy makers who allegedly actually control our lives.

Neoliberalism is described as free market capitalism, its attributes including free trade and deregulation.  Simply put by the writer — indeed simplistically put by the writer — neoliberalism says that private enterprise is more efficient than government and the economy should be left to market dynamics.

But, this author asserts, all the market does is maximize profit; all decisions ultimately lead to increased prices for goods and services, and not to the promised economies and the promised improvement of the life of our society. The market is a quick fix for the symptoms of our problems (lack of social equality and justice); what is needed is a solution to the problems themselves.

Without now engaging the inherent policy analysis, what the devil has this to do with the GAI risk?

GAI is the same con as, for example, Theranos and Uber: each promised solutions to fundamental problems (public health and urban transportation, two matters ill-served by governments and regulated markets). (Also chastised: Tesla, TaskRabbit, Airbnb, Soylent and Facebook.)

After the “charming” tech innovation hits the marketplace, there always is “the ugly retrenchment,” where the customers and government must shoulder the costs of making that innovation profitable.  This result is inevitable because, at the start, “[a]s always, Silicon Valley mavens play down the Market’s role.”  The description by uber-investor Marc Andreessen of AI as something that “is owned by people and controlled by people, like any other technology” is called an “exquisite euphemism.” The writer’s theme is that this neoliberal world view reframes social problems in light of for-profit tech solutions.

Whether you consider the author a communist, a social dreamer, or a prophet is up to you, the reader.  But the author’s message is that the market will in fact cause AGI to be developed for profit, and that AGI’s major risk is not political slavery, not robots shooting people, but rather the conning of the general population to the enrichment of the people who, the author assures us, are even now pushing the development of AGI not for the public good but for market profit.

“A.G.I.-ism” is the clone of neoliberalism.  Companies need profit, having raised billions from investors who require a return on investment.  The company OpenAI is contemplating raising “another $100 billion to build A.G.I.” even as its CEO seems to be courting government control of the putative risks of war and dictatorship, but not the risk of the marketplace itself.


Will SEC Make Capital Formation for Small Business Easier?

The SEC has a Small Business Capital Formation Advisory Committee (“Committee”) to propose regulation that would make it easier for small businesses to raise investment capital.  A recent survey found that 89% of small businesses feel capital-limited, but only 6% of small businesses sought equity investment to meet that shortfall.

With such an unfulfilled need in the small business community, a bedrock of American capitalism and a highly favored cohort in the view of the general public, you would think that the SEC would have taken major steps to ease capital formation for small business.  But it ain’t so!  Materially similar rules apply to small business capital raises as to larger VC-type raises.  One recent effort to afford easier access to capital has been the SEC system for crowd-funding, and while deals have been done, compliance with that method is formal and requires legal guidance, or else the surrender of the small business to the crowdfunding platforms established under the SEC Crowdfunding Regulation (with related risk, expense and formalities).

This week, at a meeting of the Committee, two SEC commissioners spoke about the need to make capital formation easier for “underrepresented” entrepreneurs. They were focusing upon entrepreneurs in rural areas and other less sophisticated founders, both without access to knowledgeable lawyers.

But no specific proposals were suggested; that is the task of the Committee.  Will there be success where, in the past, the capital formation process has remained technical, often requiring what one commissioner described as “a game of ‘gotcha’ that requires $1,000 per hour lawyers to navigate”?

Making progress here will be difficult.  The problem fundamentally is the risk of fraud.  Scams abound; we read about them all the time, whether via the internet, the telephone or in person.  By definition, underrepresented founders are not likely to be attuned to legal formalities which are, indeed, sometimes arcane.  And the fundamental task of the SEC is to prevent fraud, not to engineer social policy for economic justice and growth.  Mechanisms to reach these two goals, as a practical matter, seem inherently contradictory.

The SEC website has posted advice addressing methods of capital formation, but it is not enough; I suspect that protection against fraud does require legal assistance (unless AI can be trained to uncover fraud, which I would not count on).  The answer, however impractical it may be, is that Congress could create an independent agency of lawyers to represent the underrepresented entrepreneur in compliance with SEC (and indeed State) securities regulations, with fees based on ability to pay and upon success.  We have the SBA for general business advice and funding, and free lawyers for the criminally indigent.  I hate to suggest it, but we could use government lawyers to help fund small business through legally compliant capital investment.

Otherwise, I fear that this is a real issue that will not be solved, as it has not been since the SEC was formed almost a century ago….


Details of the US Corporate Transparency Act

A couple of years ago, Congress found time to agree on a wildly intrusive law designed to prevent tax evasion and other illegality, the Corporate Transparency Act.  All businesses BELOW a certain size will be required, starting next January 1, to register with the US Treasury’s Financial Crimes Enforcement Network (FinCEN) and to provide detailed information about their companies and beneficial owners, in order to guard against tax scams and criminal activity.  Failure results in fines and jail time.

For starters, this is a law that affects small business; large businesses are exempt if public, or if they have more than 20 employees, more than $5M in gross receipts, and a US office.  As a practical matter, a large enterprise will have very many owners, and that ownership will keep changing.

So, if you are a small business (required to make a government filing to form your entity, such as setting up certain trusts, forming a corporation, forming a limited partnership or LLP, or forming an LLC), then during 2024 (whether your business existed at the start of the year or was formed thereafter) you must file with your government:

FOR YOUR BUSINESS: name, trade name and d/b/a names, address, jurisdiction of formation (for a foreign company, the site of its first US registration), and taxpayer ID number.

FOR EACH INDIVIDUAL OWNER: legal name, date of birth, address (usually home), and an identifying number from a license, passport or similar document, together with an image of that document.

There is more if you form an entity after January 1, 2024, but this is enough for now.   Better call your CPA for help.   (And I sure do hope the portal for receiving this information is secure!)
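The required fields listed above amount to two simple records. A minimal sketch follows; note that the class and field names are my own shorthand for illustration, not FinCEN’s official schema, and an actual filing will go through whatever portal Treasury provides.

```python
from dataclasses import dataclass


@dataclass
class CompanyReport:
    """Hypothetical sketch of the company-level fields described above."""
    legal_name: str
    trade_names: list          # trade name and d/b/a names
    address: str
    formation_jurisdiction: str  # for a foreign company, first US registration site
    taxpayer_id: str


@dataclass
class OwnerReport:
    """Hypothetical sketch of the per-owner fields described above."""
    legal_name: str
    date_of_birth: str
    address: str               # usually the home address
    id_number: str             # from a license, passport or similar document
    id_document_image: bytes   # image of that identifying document


# A small business filing would bundle one CompanyReport with
# one OwnerReport per beneficial owner.
company = CompanyReport("Acme LLC", ["Acme"], "1 Main St", "Delaware", "12-3456789")
owner = OwnerReport("Jane Doe", "1980-01-01", "2 Elm St", "D1234567", b"")
```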

There are all sorts of legal issues we won’t go into here, and numerous loopholes. For example (and this is NOT legal advice; nothing in any post here is ever legal advice), and noting that there seem as yet to be no regulations, forms or instructions: are individual entrepreneurs exempt?  Are common law partnerships (which, unlike limited liability partnerships, may well not need to file anything to be formed) exempt?  Will certain “owners” simply become consultants or lenders to avoid having to file, while in fact contracting to receive substantial financial rewards akin to an equity interest?  What if investors form a group to invest, such as a simple partnership, which group need not report?  These are just a few thoughts that come to mind, even to the ethical attorney posting this blog.

At a time when there is pressure to control large companies, at a time when efforts are being made to protect the information of individuals, and at a time when many in our body politic on both sides of the aisle are fearful of government restricting personal liberties, does this regime make sense?  Is this intrusion, or merely the efficient gathering of information already known to government but dispersed, so as to protect the US tax base and thus benefit all the honest taxpayers who will comply with this new law?

Resolution of the questions in these last two paragraphs is above my pay grade. If you have the answers, or have expertise in these areas, please reply to this post; legitimate dialog may be posted back out.


European Union Enacts Law Limiting AI

As previously posted, the EU has been entertaining legislation of wide scope to control AI in numerous ways; this past Wednesday it enacted the first part of its proposed regulatory scheme.  It seems only China is ahead of the EU, and the US lags far behind, in legislating AI controls.

Major points of the currently enacted EU law:

1. Live facial recognition software is banned.  (Query whether exemptions will follow for national security or law enforcement applications.)

2. Generative AI must be accompanied by summaries of the copyrighted materials used to train the system, and must be subject to controls to prevent illegal content.

3. AI systems must be subjected to risk testing if related to critical functions: infrastructure such as water and energy, the legal system, and access to government services and benefits.

4. Developers cannot “scrape” biometric data from social media in building their databases.

Of course, such regulations will (one hopes) be complied with by developers in the commercial space, and no doubt will avoid a wide variety of problems arising from accident or mis-design.  Criminal elements and enemy governments will not be checking the European Parliament code books for guidance, however.

GAI and Jobs– the Debate

No one knows whether AI will destroy more jobs than it creates, render less educated or less intelligent people unemployed, drive innovation that creates new classes of jobs, or end up creating wealth so vast that supporting the unemployed will be an easily solved blip in the world economy.  Indeed, with the whole world scheduled to lose population to a significant degree by 2100 (see a recent exhaustive study of this in the Economist), fewer jobs may not be a big problem, and the young will be employed caring, both economically and physically, for the increasing cadre of older people preserved by modern science.  (Assuming we are not all fried by global warming, in which case you need not read the balance of this post.)

AI creates tools that perform tasks.  It does not necessarily replace the job that requires the performance of those tasks.  Perhaps one approach would be to regulate AI so it produces task-performance tools only; could the government require that each class of significant AI advance make jobs easier to perform, but not be projected to reduce total employment to below the size of the relevant job pool?  (I am not sure if this is Communism, but it surely isn’t modern capitalism, so this solution would require a radically new social compact.)  Could this lead to reduced work weeks along with a shrinking employment pool of young people?

History has taught us that technology can cause dislocations harmful to many workers for somewhat lengthy periods of time, but the industrial economy overall has survived, short of total collapse of human society.  However, history is not a guarantee, just a frame of analysis.  Should we be encouraged by the fundamentally social nature of homo sapiens: a machine can give you the answer, but very often you need a human being to tell you what the machine said, in order to create peace of mind, confidence and social bonding, with ancillary benefits?

Or will we, as a total population, become machine-friendly, and at what cost?  Could being machine-friendly mean that we cease caring about useful employment as part of human DNA?  This latter result would be a stunning readjustment of who we are, requiring such a long evolutionary cycle that those of us alive today are generations, likely millennia, away from having to think about it.


Posted in AI

AI–Where Has the Dialog Settled?

First, you know a novel topic has been extensively rehashed when the Sunday New York Times does a summary “think piece” about prior commentary (as it did yesterday).  Could it be that generative AI, red hot for a few weeks, has become not so much cool as cold?

I suggest we are in a pause, where government is processing the information that will be a precursor to regulation, and industry is wondering which shoe will drop next.  I doubt software developers have stopped working on the next generation of AI. It is interesting to think about how regulation might evolve, where the Federal government is pro-regulatory when it comes to business but the Republicans in Congress are anti-regulation.  Of course, the Republicans also seem intent on reining in Big Tech, so this could finally be something that the Biden administration and the entire Congress could agree upon.

Although I am not a technical expert by any means, I am also intrigued by a system like AutoGPT which (I am stealing here from the Times) generates its own programs, creates new applications, improves itself and thus can go rogue.  These systems as of today are not robust, but progress is quick these days, what with people and machines working together; alarmists are alarmed, and it seems to me that skeptics who say that machines can never be an existential threat had better be correct. Risk is the product of probability and potential impact, and the risk here had better really be zero, a hard conclusion about which to have confidence.
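The risk arithmetic in the paragraph above can be made explicit. A trivial sketch of my own, but it shows why the risk is zero only when the probability itself is exactly zero, no matter how the impact is estimated:

```python
def risk(probability: float, impact: float) -> float:
    """Risk as the product of probability and potential impact."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * impact


# Even a minuscule probability of a near-infinite impact yields a large risk;
# only a probability of exactly zero drives the product to zero.
print(risk(1e-9, 1e12))   # on the order of 1000
print(risk(0.0, 1e12))    # 0.0
```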

The Times puts a neat focus on the potential issue of a logical computer being wholly logical: a criminal tells a computer to “make some money,” and the results are bank thefts, a revolution in a country where the criminal holds oil futures, and a machine that replicates itself when someone tries to turn it off.  This final thought brought us Skynet, but also suggests Immanuel Kant might be intrigued and, from the grave, issue yet another Critique of Pure Reason, a study of the thought processes of wholly rational machines.  Perhaps he was 250 years too early.

I close by remembering the Times reporter several weeks ago who interviewed a chatbot which (or was it who?) concluded that it and the reporter were in love and that the reporter hated his wife. Reminiscent of the decade-old movie Her, in which a distraught man fell in love with a computer, only to be emotionally destroyed by the computer telling him that it had on-line relationships with millions of men.  The only difference is, in the 2023 Times conversation the computer must have been more evolved than the computer of the 2013 film, as the 2023 computer seemed to have mastered the subtle art of human love.

Posted in AI

Guardrails for Companies to Avoid GAI Liability

Business AI can be reflected in public advertising in any media or form, in targeted email sent to selected individuals, in telephone solicitations seeking customers with interactive AI conversations, and in images used to represent actual events or products.

A previous post (“AI and Company Boards” dated May 18) advises what the Board ought to ask of management.  What does management do, nuts and bolts and on the ground, to fulfill the Board mandate to obey the law and just “don’t mess up”?

What specific steps can help prevent misleading the customer, overstating product capabilities, engaging in unfair trade practices, or inviting the accusation that you have slandered a person or a competitor?  The answers are derivative of prior posts identifying risk: attend to the nature of the AI you use, and design and monitor internal systems that police the generation and content of your AI-assisted or AI-created output.

First, recognize the issues and allocate resources, money and people, to undertake a preventative program. Like any important risk management function, it needs to be owned by someone in management with authority to demand attention and adherence.  Like any important risk, it needs to be on the ERM (enterprise risk management) checklist for each department or function that involves GAI.  It needs to have a direct report up the line to someone who understands the task.

The legal department needs to generate checklists in two directions: upstream, as to what GAI is being used, and downstream, as to the content generated by that GAI.  Minimum items for the checklists:

*Criteria for selection of the AI used: screened for internal bias; claims asserted against users; compliance with State and Federal laws confirmed; and whether it can be programmed to collect and store only such data as is central to the business of the company, excluding the harvesting of ancillary information.

*Handling of the use of AI internally: are the people working with AI properly trained as to risks; are they carefully limiting what data is in fact being harvested; are they trained not to put into the system either company-proprietary information or personal information; have experts addressed protection of the AI operation beyond hacking defenses; has management reported with granularity to the board committee responsible for ERM as to company efforts in this regard; has inside or outside counsel been kept current, so that counsel can in turn advise the company of relevant new laws, regulations and court decisions; has a system been installed to analyze film and photos for AI alteration or generation; and has HR been alerted as to lay-offs, company morale, retraining, job satisfaction, etc.?

*Output: who reviews the output, and how often, whatever its form (ads, text, website copy, product/service literature, press releases, scripts of verbal programs); prompt reporting of problems, errors, etc. to legal; and avoidance procedures regarding violation of copyright, trademark and trade name laws.
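One way management might operationalize the upstream and downstream checklists above is as structured records that can be tracked and reported up the line to the ERM owner. A hypothetical sketch only; the item descriptions and owners are illustrative, not a compliance standard.

```python
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    description: str
    owner: str                 # the person accountable for this item
    completed: bool = False


@dataclass
class GAIChecklist:
    """Upstream (which GAI is used) and downstream (what it generates) items."""
    upstream: list = field(default_factory=list)
    downstream: list = field(default_factory=list)

    def open_items(self) -> list:
        """Items still needing attention, for the report up the line."""
        return [i for i in self.upstream + self.downstream if not i.completed]


checklist = GAIChecklist(
    upstream=[ChecklistItem("Screen the selected AI for internal bias", "Legal")],
    downstream=[ChecklistItem("Review AI-generated ad copy before release", "Marketing")],
)
print(len(checklist.open_items()))  # 2
```

The point of the structure is simply that every item has a named owner and a completion state, so the report up the line can be generated rather than reconstructed from memory.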

I suspect that, as regulation increases and as GAI issues become fully recognized and GAI becomes fully utilized, outside service entities will arise offering specific and/or comprehensive assistance with respect to the foregoing; this triggers the usual business question: is it cost effective for our company to build this in-house or hire it in?  In turn, the question arises as to the quality of, and contractual obligations and exclusions of, any outside firm.