GAI and Neoliberalism

You need to bear with me on this long post; skip it if you are not interested in a completely different analysis of GAI risk.

It has been a couple of weeks since I posted on GAI, during which time the public discourse has rehashed prior issues; a cover story in The Economist, a publication usually with something to add, managed to find no new ground.  Patient readers of the Sunday New York Times, however, might have come across an interesting new perspective buried on page 6 of the Opinion section: an economic critique of the risk of GAI from the neoliberal policy makers who allegedly actually control our lives.

The writer describes neoliberalism as free-market capitalism, with attributes including free trade and deregulation.  Simply put by the writer — indeed simplistically put by the writer — neoliberalism says that private enterprise is more efficient than government and that the economy should be left to market dynamics.

But, this author asserts, all the market does is maximize profit: all decisions ultimately lead to increased prices for goods and services, not to the promised economies and the promised improvement of the life of our society.  The market is a quick fix for the symptoms of our problems, which are lack of social equality and justice.  What is needed is a solution to the problems themselves.

Without now engaging the inherent policy analysis, what the devil has this to do with the GAI risk?

To this author, GAI is the same con as, for example, Theranos and Uber: each promised solutions to fundamental problems (public health and urban transportation, two matters ill-served by governments and regulated markets). (Also chastised: Tesla, TaskRabbit, Airbnb, Soylent and Facebook.)

After the “charming” tech innovation hits the marketplace, there always is “the ugly retrenchment,” where customers and government must shoulder the costs of making that innovation profitable.  This result is inevitable because at the start “[a]s always, Silicon Valley mavens play down the market’s role.”  Uber-investor Marc Andreessen’s claim that AI “is owned by people and controlled by people, like any other technology” is dismissed as an “exquisite euphemism.”  The writer’s theme is that this neoliberal world view reframes social problems in light of for-profit tech solutions.

Whether you consider the author a communist, a social dreamer or a prophet is up to you, the reader.  But the author’s message is that the market will in fact cause AGI to be developed for profit, and that AGI’s major risk is not political slavery, not robots shooting people, but rather the conning of the general population for the enrichment of the people who even now, the author assures us, are pushing the development of AGI not for the public good but for market profit.

“A.G.I.-ism” is the clone of neoliberalism.  Companies need profit, having raised billions from investors who expect a return on investment.  The company OpenAI is contemplating raising “another $100 billion to build A.G.I.” even as its CEO seems to be courting government control of the putative risks of war and dictatorship, but not the risk of the marketplace itself.

 

Will SEC Make Capital Formation for Small Business Easier?

The SEC has a Small Business Capital Formation Advisory Committee (“Committee”) to propose regulation that would make it easier for small businesses to raise investment capital.  A recent survey found that 89% of small businesses feel capital-limited, but only 6% of small businesses sought equity investment to meet that shortfall.

With such an unfulfilled need in the small business community, a bedrock of American capitalism and a highly favored cohort in the view of the general public, you would think that the SEC would have taken major steps to ease capital formation for small business.  But it ain’t so!  Materially similar rules apply to small business capital raises as to larger VC-type raises.  One recent effort to afford easier access to capital has been the SEC system for crowdfunding, and while deals have been done, compliance with that method is formal and requires legal guidance, or else the surrender of the small business to the crowdfunding platforms established under the SEC Crowdfunding Regulation (with related risk, expense and formalities).

This week, at a meeting of the Committee, two SEC commissioners spoke about the need to make capital formation easier for “underrepresented” entrepreneurs. They were focusing upon entrepreneurs in rural areas and other less sophisticated founders, both often without access to knowledgeable lawyers.

But no specific proposals were suggested; that is the task of the Committee.  Will there be success where in the past the capital formation process has remained technical, often requiring what one commissioner described as “a game of ‘gotcha’ that requires $1,000 per hour lawyers to navigate”?

Making progress here will be difficult.  The problem fundamentally is the risk of fraud.  Scams abound; we read about them all the time, whether on the internet, by telephone or in person.  By definition, underrepresented founders are not likely to be attuned to legal formalities, which are, indeed, sometimes arcane.  And the fundamental task of the SEC is to prevent fraud, not to engineer social policy for economic justice and growth.  Mechanisms to reach these two goals, as a practical matter, seem inherently contradictory.

The SEC website has posted advice addressing methods of capital formation, but it is not enough; I suspect that protection against fraud does require legal assistance (unless AI can be trained to uncover fraud, which I would not count on).  The answer, however impractical it may be, is for Congress to create an independent agency of lawyers to represent underrepresented entrepreneurs in compliance with SEC (and indeed State) securities regulations, with fees based on ability to pay and upon success.  We have the SBA for general business advice and funding, and free lawyers for the criminally indigent.  I hate to suggest it, but we could use government lawyers to help small business raise capital through legally compliant investment.

Otherwise, I fear that this is a real issue that will not be solved, as it has not been since the SEC was formed almost a century ago….

 

Details of the US Corporate Transparency Act

A couple of years ago, Congress found time to agree on a wildly intrusive law designed to prevent tax evasion and other illegality: the Corporate Transparency Act.  All businesses BELOW a certain size will be required, starting next January 1, to register with the US Treasury’s Financial Crimes Enforcement Network (FinCEN) and to provide detailed information about their companies and beneficial owners, in order to guard against tax scams and criminal activity.  Failure results in fines and jail time.

For starters, this is a law that affects small business; large businesses are exempt if public or if they have more than 20 employees, more than $5M in gross receipts and a US office.  As a practical matter, a large enterprise will have very many owners, and its ownership will keep changing.

So, if you are a small business (required to make a government filing to form your entity, such as setting up certain trusts, forming a corporation, a limited partnership, an LLP or an LLC), then during 2024 (whether your business existed at the start of the year or was formed thereafter) you must file with your government the following (a schematic sketch follows the two lists below):

FOR YOUR BUSINESS: name, trade name and d/b/a, address, jurisdiction of formation (if a foreign company, the site of its first US registration), and taxpayer ID number.

FOR EACH INDIVIDUAL OWNER: legal name, date of birth, address (usually home), and an identifying number from a license, passport or similar document, together with an image of that document.
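For readers who think in schemas, here is a minimal sketch of the reportable fields above as a data structure.  The field names are my own shorthand, not FinCEN’s official schema, and the actual filing requirements are more detailed:

```python
from dataclasses import dataclass, field

# Illustrative only: field names are this post's shorthand, not FinCEN's
# schema, and the real reporting requirements are more detailed.

@dataclass
class BeneficialOwner:
    legal_name: str
    date_of_birth: str        # e.g. "1970-01-31"
    address: str              # usually the home address
    id_number: str            # from a license, passport or similar document
    id_document_image: bytes  # an image of that identifying document

@dataclass
class CompanyReport:
    name: str
    trade_names: list[str]       # trade names and d/b/a names
    address: str
    formation_jurisdiction: str  # for a foreign company, first site of US registration
    taxpayer_id: str
    beneficial_owners: list[BeneficialOwner] = field(default_factory=list)
```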

There is more if you form an entity after January 1, 2024, but this is enough for now.  Better call your CPA for help.  (And I sure do hope the portal for receiving this information is secure!)

There are all sorts of legal issues we won’t go into here, and numerous loopholes. For example (and this is NOT legal advice; nothing in any post here is ever legal advice), and noting that there seem as yet to be no regulations, forms or instructions: are individual entrepreneurs exempt?  Are common law partnerships exempt (unlike limited liability partnerships, they may well not need to file anything to be formed)?  Will certain “owners” just become consultants or lenders to avoid having to file, while in fact contracting to receive substantial financial rewards akin to an equity interest?  What if investors form a group to invest, such as a simple partnership, which need not report?  Just a few thoughts that come to mind, even from this ethical attorney posting this blog.

At a time when there is pressure to control large companies, at a time when efforts are being made to protect the information of individuals, at a time when many in our body politic on both sides of the aisle are fearful of government restricting personal liberties, does this regime make sense?  Is this intrusion, or merely the efficient gathering of information the government already holds in dispersed form, so as to protect the US tax base and thus benefit all the honest taxpayers who will comply with this new law?

Resolution of the questions in these last two paragraphs is above my pay grade. If you have the answers, or have expertise in these areas, please reply to this post; legitimate dialog may be posted back out.

 


European Union Enacts Law Limiting AI

As previously posted, the EU has been entertaining wide-ranging legislation to control AI in numerous ways; this past Wednesday it enacted the first part of its proposed regulatory scheme.  It seems only China is ahead of the EU, and the US lags far behind, in legislating AI controls.

Major points of the currently enacted EU law:

1. Live facial recognition software is banned.  (Query whether exemptions will follow for national security or law enforcement applications.)

2. Generative AI must be accompanied by summaries of the copyrighted materials used for training the system, and must be subject to controls to prevent illegal content.

3. AI systems must be subjected to risk testing if related to critical functions: infrastructure such as water and energy, the legal system, and access to government services and benefits.

4. Developers cannot “scrape” biometric data from social media in building their database.

Of course, such regulations will (one hopes) be complied with by developers in the commercial space, and no doubt will avoid a wide variety of problems arising by accident or mis-design.  Criminal elements and enemy governments will not be checking the European Parliament code books for guidance, however.

GAI and Jobs: the Debate

No one knows whether AI will destroy more jobs than it creates, render less educated or less intelligent people unemployed, drive innovation that creates new classes of jobs, or end up creating wealth so vast that supporting the unemployed will be an easily solved blip on the world economy.  Indeed, with the whole world scheduled to lose population to a significant degree by 2100 (see a recent exhaustive study of this in the Economist), fewer jobs may not be a big problem, and the young will be employed caring, both economically and physically, for the increasing cadre of older people preserved by modern science.  (Assuming we are not all fried by global warming, in which case you need not read the balance of this post.)

AI creates tools that change the methods of achieving a task.  It does not necessarily replace the job that requires the performance of that task.  Perhaps one approach would be to regulate AI so it produces task-performance tools only; could the government require that each class of significant AI advance make jobs easier to perform, but not be projected to reduce total employment to below the size of the relevant job pool?  (I am not sure if this is Communism, but it surely isn’t modern capitalism, so this solution would require a radically new social compact.)  Could this lead to reduced work weeks along with a shrinking employment pool of young people?

History has taught us that technology can cause dislocations harmful to many workers for somewhat lengthy periods of time, but the industrial economy overall has survived, short of total collapse of human society.  However, history is not a guarantee, just a frame of analysis.  Should we be encouraged by the fundamentally social nature of homo sapiens, that a machine can give you the answer but you need a human being to tell you, very often, what the machine said in order to create peace of mind and confidence and social bonding with ancillary benefits?

Or will we, as a total population, become machine-friendly, and at what cost?  Could being machine-friendly mean that we cease caring about useful employment as part of human DNA?  This latter result would be a stunning readjustment of who we are, requiring such a long evolutionary cycle that those of us alive today are generations, likely millennia, away from having to think about it.

 


AI–Where Has the Dialog Settled?

First, you know a novel topic has been extensively rehashed when the Sunday New York Times does a summary “think piece” about prior commentary, as it did yesterday.  Could it be that generative AI, red hot for a few weeks, has become not so much cool as cold?

I suggest we are in a pause, where government is processing the information that will be a precursor to regulation, and industry is wondering which shoe will drop next.  I doubt software developers have stopped working on the next generation of AI. It is interesting to think about how regulation might evolve, where the Federal government is pro-regulatory when it comes to business but the Republicans in Congress are anti-regulation.  Of course, the Republicans also seem intent on reining in Big Tech, so this could finally be something that the Biden administration and the entire Congress could agree upon.

Although I am not a technical expert by any means, I am also intrigued by a system like AutoGPT which (I am stealing here from the Times) generates its own programs, creates new applications, improves itself and thus can go rogue.  These systems as of today are not robust, but progress is quick these days, what with people and machines working together; alarmists are alarmed, and it seems to me that the skeptics who say that machines can never be an existential threat had better be correct. Risk is the product of probability and potential impact, and the risk here had better really be zero, a hard conclusion about which to have confidence.
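To make that arithmetic concrete, here is a minimal sketch of risk as expected value; the numbers are purely illustrative assumptions of mine, not estimates from the Times or anyone else:

```python
# Risk as the product of probability and potential impact (expected loss).
# All numbers below are illustrative assumptions, not anyone's estimates.

def expected_risk(probability: float, impact: float) -> float:
    """Expected loss from a single scenario: probability times impact."""
    return probability * impact

# A mundane business risk: a 5% chance of a $1 million loss.
print(expected_risk(0.05, 1_000_000))  # 50000.0

# The existential case: even a minuscule probability multiplied by a
# near-unbounded impact yields an enormous expected loss, which is why
# skeptics who put the probability at zero had better be correct.
print(expected_risk(1e-9, 1e18))  # 1000000000.0
```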

The Times puts a neat focus on the potential issue of a logical computer being wholly logical: a criminal tells a computer to “make some money,” and the results are bank thefts, a revolution in a country where the criminal holds oil futures, and a machine that replicates itself when someone tries to turn it off.  This final thought brought us Skynet, but it also suggests Immanuel Kant might be intrigued and from the grave issue yet another Critique of Pure Reason: a study of the thought processes of wholly rational machines.  Perhaps he was 250 years too early.

I close by remembering the Times reporter several weeks ago who interviewed a chatbot which (or was it who?) concluded that it and the reporter were in love and that the reporter hated his wife. Reminiscent of the decade-old movie Her, in which a distraught man fell in love with a computer, only to be emotionally destroyed by the computer telling him that it had online relationships with millions of men.  The only difference is that in the 2023 Times conversation the computer must have been more evolved than the computer of the 2013 film, as it seemed the 2023 computer had mastered the subtle art of human love.


Guardrails for Companies to Avoid GAI Liability

Business AI can be reflected in public advertising in any medium or form, in targeted email sent to selected individuals, in telephone solicitations seeking customers through interactive AI conversations, and in images used to represent actual events or products.

A previous post (“AI and Company Boards” dated May 18) advises what the Board ought to ask of management.  What does management do, nuts and bolts and on the ground, to fulfill the Board mandate to obey the law and just “don’t mess up”?

What specific steps can help prevent errors that mislead the customer or overstate product capabilities, avoid unfair trade practices, and avoid the accusation that you have slandered a person or a competitor?  The answers are derivative of prior posts identifying risk: attend to the nature of the AI you use, and design and monitor internal systems that police the generation and content of your AI-assisted or AI-created output.

First, recognize the issues and allocate resources, money and people, to undertake a preventative program. Like any important risk management function, it needs to be owned by someone in management with authority to demand attention and adherence.  Like any important risk, it needs to be on the ERM (enterprise risk management) checklist for each department or function that involves GAI.  It needs to have a direct report up the line to someone who understands the task.

The legal department needs to generate checklists in two directions: upstream as to what GAI is being used, and downstream as to the content generated by that GAI.  Minimum items on the checklist (a schematic sketch follows the list):

*Criteria for selection of the AI used: has it been screened for internal bias; have claims been asserted against users; has compliance with State and Federal laws been confirmed; can it be programmed to collect and store only such data as is central to the business of the company, and to exclude the harvesting of information that is ancillary?

*Handling of the use of AI internally: are people working with AI properly trained as to risks; are they carefully limiting what data in fact is being harvested; are they trained not to put into the system either company-proprietary information or personal information; have experts addressed non-hacking protection of the AI operation; has management reported with granularity to the board committee responsible for ERM as to company efforts in this regard; has inside or outside counsel been kept current so that counsel can in turn advise the company of relevant new laws, regulations and court decisions; has a system been installed to analyze film and photos for AI alteration or generation; has HR been alerted as to lay-offs, company morale, retraining, job satisfaction, etc.?

*Output: who reviews output, and how often, whatever its form (ads, text, website, product/service literature, press releases, text of verbal programs); prompt reporting of problems, errors, etc. to legal; avoidance procedures regarding violation of copyright and trade name laws.
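For those inclined to operationalize the above, here is a minimal sketch of how such a checklist might be tracked.  The structure, categories and field names are my own illustration, not any standard ERM tooling:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative only: a toy tracker mirroring the checklist above.
# The categories, fields and example items are this post's invention,
# not any standard ERM tool or framework.

@dataclass
class ChecklistItem:
    description: str
    owner: str                            # the manager who owns the item
    last_reviewed: Optional[date] = None
    compliant: bool = False

@dataclass
class GAIChecklist:
    upstream: list[ChecklistItem] = field(default_factory=list)  # selection of AI used
    internal: list[ChecklistItem] = field(default_factory=list)  # internal handling of AI
    output: list[ChecklistItem] = field(default_factory=list)    # review of generated output

    def open_items(self) -> list[ChecklistItem]:
        """Items not yet signed off, for the report up the line to ERM."""
        return [item for item in self.upstream + self.internal + self.output
                if not item.compliant]

checklist = GAIChecklist(
    upstream=[ChecklistItem("AI screened for internal bias", owner="Legal")],
    output=[ChecklistItem("Weekly review of AI-assisted ads and releases", owner="Marketing")],
)
print(len(checklist.open_items()))  # 2, until both items are reviewed and signed off
```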

I suspect that as regulation increases and as GAI becomes fully recognized and fully utilized, outside service entities will arise offering specific and/or comprehensive assistance with respect to the foregoing; this triggers the usual business question: is it cost-effective for our company to build this in-house or hire it in?  In turn, the question arises as to the quality of, and the contractual obligations and exclusions of, any outside firm.

 

The Economist Magazine on GAI

Generative AI is now all over the press, given growing concerns of AI professionals and government, particularly now that there has been time to think through various ramifications.  It is not my purpose to aggregate what everyone else is writing, but the Economist is a respected, non-US-based news source which does deep dives into complex topics, and the below discusses some Economist analyses.

CHINA: GAI capabilities are controlled by the government, and since GAI invents “lies,” it is possible that it could thus be telling the truth within China by countering government “truth.”  The government requires AI to be “objective” in its training data (as defined by the government) and the generated output must be “true and accurate,” but the Economist (4-22) speculates that strict adherence to government regulation would “all but halt development of generative AI in China.”  Thus tight enforcement is not anticipated.

JUNIOR JOB ELIMINATION: In the same issue, the Economist speculates that the potential net elimination of jobs may be overstated, but that the lower tier of jobs is in fact at risk.  There always will be a need for senior people, both to police AI against hallucinations and to perform senior work that presumably AI will not be able to replace.  In what I see as an analogy to the observation that COVID in the long run would deteriorate the training of junior people who were physically not on site, the Economist speculated that a material reduction in junior employees might interfere with creating the next generation of senior managers.

NET EFFECT ON PROFIT: The May 13 issue contained speculation about the likely impact of AI on the AI industry and on general profitability. Citing a recent Goldman Sachs article which assumed every office worker in the world utilized AI to some (stated) extent, the magazine reported that this could add about $430 billion to annual global enterprise-software revenues, mostly in the US. Sounds like a lot (and it is a lot as a discrete number), but in the US pre-tax total corporate profit as a percentage of GDP would increase from today’s 12% to only 14%.  There also is little chance that a single company will hold a monopoly position in AI, with many competitors and overlapping capability. Sounds like a complex analysis for investment advisors (I wonder if they will resort to AI for assistance…).
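As a back-of-envelope check on those figures (my own arithmetic, assuming US GDP of roughly $23 trillion; the $430 billion and the 12% and 14% figures are as reported above):

```python
# Back-of-envelope check on the reported figures; my own arithmetic.
# Assumption: US GDP of roughly $23 trillion (approximate 2023 figure).
us_gdp = 23e12
added_revenue = 430e9  # the reported ~$430 billion in added software revenue

# If roughly all of that revenue fell through to US pre-tax corporate
# profit, it would amount to about 1.9 percentage points of GDP:
print(round(added_revenue / us_gdp * 100, 2))  # 1.87

# ...which is broadly consistent with the reported move from ~12% to ~14%.
```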

JOB IMPACT: I have kept a list of articles from the Economist and elsewhere about which verticals will lose the most jobs.  Candidates include accountancy, law (though not at the lawyer level), travel agencies, teaching (particularly of a foreign language, something I find unlikely, as it takes tremendous effort to master a foreign language without personal contact, encouragement and pressure) and geography.  And predictions are tricky: in 2013 Oxford University estimated that automation could wipe out 47% of American jobs over the next decade, BUT in fact the rich-world unemployment rate was cut in half over that period. Per the Economist (5-13): “[H]istory suggests job destruction happens far more slowly.  The automated telephone switching system — a replacement for human operators — was invented in 1892.  It took until 1921 for the Bell System to install their first fully automated office.”  The number of human operators did not reach its height until 1950, and operators were not substantially eliminated until the 1980s.  And 20% of rich world GDP is construction and farming; few computers can nail a 2×4 or pull turnips from the ground.

ABOUT LAWYERS: The Economist quotes an expert in a Boston-based law firm as predicting that the number of lawyers will multiply: AI can now draft contracts covering “the 1,000 most likely edge cases in the first draft and then the parties will argue over it for weeks.”  This strikes me as perhaps unlikely, as I suspect law will move in the direction set by the National Venture Capital Association, which standardized contract forms for the notoriously wide-open business of venture finance, covering the important stuff in language that everyone begrudgingly accepts as necessary to create marketplace efficiency.  But no doubt big changes are coming.  As a lawyer myself, looking at the future of the legal profession in the age of AI, I am reminded of a song sung by Maurice Chevalier in the movie “Gigi”: “I’m so glad / I’m not young / anymore.”


AI Industry Endorses the Movie “Terminator”

No, really.

Well, sort of, according to today’s on-line New York Times.

In “Terminator” the computer system “Skynet” became self-aware and decided humans were a nuisance, so it started to eradicate them.  This plot line about the risk of Generative AI was viewed by many as a hysterical over-simplification of an impossible problem: we write the code, and the code says “no, don’t do that.”

I now quote verbatim the headline from the Times:

“A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.”

The short text in this warning letter includes the following sentence: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war…”

Surely, you will say something like, “well, some minor programmers who watch too much TV are musing, letting their fantasies control their heads….” You would be wrong.

This letter was signed by over 350 executives and AI professionals, including the President of OpenAI, the CEO of Google DeepMind, and the CEO of Anthropic.  And by two of the three dudes who won the Turing Award for helping invent this stuff.

In the movie, Sarah Connor learned from Arnold Schwarzenegger (of all people!) the reality of destruction by AI, and she was put into a mental hospital for spouting such fears.  The way this AI thing is developing for us today, there will not be enough hospital beds in the world to house all the fearmongers from among the highest ranks of AI professionals.

GAI Goes to Court

You are in business and you are sued.  Why pay high lawyer rates?  Go to a chatbot and hire a lawyer that is free (well, you are paying $20 a month for the chatbot, but that is somewhat less than, for example, my hourly rate).

Some lawyers experimented with this idea.  I mean, we lawyers knew that GAI was going to replace some assistants and perhaps some paralegals, but when we thought that GAI was coming for our own jobs we got nervous.

Here is the good news: lawyers are safe.  A couple of experiences:

  1. GAI, please write me a pleading to file in court about XYZ.  Result: a term paper describing what XYZ is.
  2. GAI, please write a court filing with case citations which I can file in court.  Result: a filing in proper form, proper heading, etc.  Uh, wait: the cases were made up.  That’s right, they did not exist. This is called hallucinating.

The use of GAI by lawyers today runs a high risk of violating the Rules of Professional Conduct (putting aside that you, the client, will lose your case).  Lawyers must be competent (fake cases are a no-no).  Lawyers must maintain client confidentiality (you can’t put client information into an AI prompt where it can become part of what the system learns and uses for, and discloses to, others).  Lawyers are expressly responsible for the product obtained from delegated sources; the buck stops at the lawyer level even if it is a systems error.

BTW: This is an actual case.  The attorney who submitted the pleading is now facing disciplinary proceedings.

Note: there was speculation that people appearing in lower courts without a lawyer might plug in an earpiece; GAI would hear the proceedings and tell the person what to say or do. Aside from inaccuracy, is this the unauthorized practice of law?  If so, by whom?