The Economist Magazine on GAI

Generative AI is now all over the press, given growing concerns of AI professionals and governments, particularly now that there has been time to think through various ramifications.  It is not my purpose to aggregate what everyone else is writing, but The Economist is a respected, non-US-based news source which does deep dives into complex topics, and below I discuss some of its analyses.

CHINA: GAI capabilities are controlled by the government, and since GAI invents “lies,” it is possible that within China it could end up telling the truth by countering government “truth.”  The government requires AI to be “objective” in its training data (as defined by the government), and the generated output must be “true and accurate,” but The Economist (April 22 issue) speculates that strict adherence to government regulation would “all but halt development of generative AI in China.”  Thus tight enforcement is not anticipated.

JUNIOR JOB ELIMINATION: In the same issue, The Economist speculates that the potential net elimination of jobs may be overstated, but that the lower tier of jobs is indeed at risk.  There will always be a need for senior people, both to police AI against hallucinations and to perform senior work that presumably AI will not be able to replace.  In what I see as an analogy to the observation that COVID would, in the long run, degrade the training of junior people who were not physically on site, The Economist speculated that a material reduction in junior employees might interfere with creating the next generation of senior managers.

NET EFFECT ON PROFIT: The May 13 issue contained speculation about the likely impact of AI on the AI industry and on general profitability.  It cites a recent Goldman Sachs analysis which assumed that every office worker in the world utilized AI to some (stated) extent; on that assumption, AI could add about $430 billion to annual global enterprise-software revenues, mostly in the US.  Sounds like a lot (and it is a lot as a discrete number), but US pre-tax total corporate profit as a percentage of GDP would increase from today’s 12% to only 14%.  There also is little chance that a single company will hold a monopoly position in AI, given many competitors and overlapping capabilities.  Sounds like a complex analysis for investment advisors (I wonder if they will resort to AI for assistance…).

JOB IMPACT: I have kept a list of articles from The Economist and elsewhere about which verticals will lose the most jobs.  Candidates include accountancy, law (though not at the lawyer level), travel agencies, teaching (particularly foreign-language teaching, something I find unlikely, as it takes tremendous effort to master a foreign language without personal contact, encouragement and pressure), and geographers.  And predictions are tricky: in 2013 Oxford University estimated that automation could wipe out 47% of American jobs over the next decade, BUT in fact the rich-world unemployment rate was cut in half over that period.  Per The Economist of May 13: “[H]istory suggests job destruction happens far more slowly.  The automated telephone switching system, a replacement for human operators, was invented in 1892.  It took until 1921 for the Bell System to install their first fully automated office.”  By 1950 the number of human operators had reached its height, and it was not substantially reduced until the 1980s.  And 20% of rich-world GDP is construction and farming; few computers can nail a 2×4 or pull turnips from the ground.

ABOUT LAWYERS: The Economist quotes an expert at a Boston-based law firm as predicting that the number of lawyers will multiply.  AI-drafted contracts can now cover “the 1,000 most likely edge cases in the first draft and then the parties will argue over it for weeks.”  This strikes me as perhaps unlikely, as I suspect law will move in the direction set by the National Venture Capital Association, which standardized contract forms for the notoriously wide-open business of venture finance, covering the important stuff in language that everyone begrudgingly accepts as necessary to create marketplace efficiency.  But no doubt big changes are coming.  As a lawyer myself, looking at the future of the legal profession in the age of AI, I am reminded of a song sung by Maurice Chevalier in the movie “Gigi”: “I’m so glad/ I’m not young/ anymore.”

AI Industry Endorses the Movie “Terminator”

No, really.

Well, sort of, according to today’s on-line New York Times.

In “Terminator” the computer system “Skynet” became self-aware, decided humans were a nuisance, and started to eradicate them.  This plot line about the risk of Generative AI was viewed by many as a hysterical over-simplification of an impossible problem: we write the code, and the code says “no, don’t do that.”

I now quote verbatim the headline from the Times:

“A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.”

The short text in this warning letter includes the following sentence: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war…”

Surely, you will say something like, “well, some minor programmers who watch too much TV are musing, letting their fantasies control their heads….” You would be wrong.

This letter was signed by over 350 executives and AI professionals, including the President of OpenAI, the CEO of Google DeepMind, and the CEO of Anthropic.  And by two of the three dudes who won the Turing Award for helping invent this stuff.

In the movie, Sarah Connor learned from Arnold Schwarzenegger (of all people!)  the reality of destruction by AI, and she was put into a mental hospital for spouting such fears.  The way this AI thing is developing for us today, there will not be enough hospital beds in the world to house all  the fearmongers from among the highest ranks of AI professionals.

GAI Goes to Court

You are in business and you are sued.  Why pay high lawyer rates?  Go to a chatbot and hire a lawyer that is free (well, you are paying $20 a month for the chatbot, but that is somewhat less than, for example, my hourly rate).

Some lawyers experimented with this idea.  I mean, we lawyers knew that GAI was going to replace some assistants and perhaps some paralegals, but when we thought that GAI was coming for our own jobs we got nervous.

Here is the good news: lawyers are safe.  A couple of experiences:

  1. GAI, please write me a pleading to file in court about XYZ.  Result: a term paper describing what XYZ is.
  2. GAI, please write me a court filing with case citations that I can file in court.  Result: a filing in proper form, proper heading, etc.  Uh, wait: the cases were made up.  That’s right, they did not exist.  This is called hallucinating.

Use of GAI by lawyers today runs a high risk of violating the Rules of Professional Conduct (putting aside that you, the client, will lose your case).  Lawyers must be competent (fake cases are a no-no).  Lawyers must maintain client confidentiality (you can’t put client information into an AI prompt where it can become part of what the system learns and uses for, and discloses to, others).  Lawyers are expressly responsible for the product obtained from delegated sources; the buck stops at the lawyer level even if it is a system error.

BTW: This is an actual case.  The attorney who submitted the pleading is now facing disciplinary proceedings.

Note: there was speculation that people appearing in lower courts without a lawyer might plug in an earpiece; GAI would hear the proceedings and tell the person what to say or do.  Aside from inaccuracy, is this the unauthorized practice of law?  If so, by whom?

GAI in the News Today

At the risk of over-posting, I cannot resist a post of several items current in the news which are just so fascinating and informative of the current state of AI activity.

*A suit has been filed in California, the land of innovation even in law, claiming damages against social media firms for harm derived from addiction to that media.  No reason that theory could not be used against AI, either against the developer or against a person or company that put out AI-generated content that caused or led to an addiction.

*If an AI hallucination wrongly accuses a person of something horrible, can that be slander?  If spoken by a person, it could support a legal suit.  But if it is generated by an AI system, who is the speaker?  It is the machine, but is the machine a person?  Is it the designer of the system?  Is, for example, Microsoft liable to a John Doe when its system hallucinates and says that John Doe stole from his employer?

*Two fascinating items in the NYTimes today:

  1. Nvidia is a manufacturer of chips used in AI.  Current interest in AI spiked the share price of this company by 20% yesterday, giving it a market value of $775B, the fifth-highest valuation of any company anywhere.  That seems, inferentially, to support the hype that AI is going to be huge in the lives of human beings.
  2. An advice column on how to ask questions of GAI and get the type of answer you want (a rough sketch of this recipe, in code, appears below): start by telling the system what level of expertise to assume and how the answer will be used (“Act as if you are an expert in XYZ and you are going to improve your manufacturing of XYZ”); ask the system “Do you need any more information to do this task?”; if the reply is wrong, ask for a redo, pointing out the error; and keep your threads of inquiry open so that in follow-up responses the system can see what has transpired and perhaps learn how better to answer.

I note that the last suggestion would require me to be more polite and politic with my AI than I am with the human partners from whom I seek advice.  Sort of counter-intuitive….
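
For what it is worth, here is a rough sketch of that recipe in code.  It is only an illustration under assumptions of my own: it assumes the OpenAI Python client (version 1 or later) and an API key, and the model name, the “expert” role, and the questions are invented for the example rather than taken from the advice column.

```python
from openai import OpenAI  # assumes the openai package (v1+) and an OPENAI_API_KEY in the environment

client = OpenAI()
MODEL = "gpt-4"  # illustrative; any chat-capable model works the same way

# Step 1: tell the system what expertise to assume and how the answer will be used.
messages = [
    {"role": "system",
     "content": "Act as if you are an expert in widget manufacturing. "
                "Your answers will be used to improve a small factory's process."},
    # Step 2: ask whether it needs more information before doing the task.
    {"role": "user",
     "content": "Before suggesting process improvements, do you need any more "
                "information from me to do this task?"},
]

reply = client.chat.completions.create(model=MODEL, messages=messages)
answer = reply.choices[0].message.content
print(answer)

# Steps 3 and 4: if the reply is wrong, point out the error and ask for a redo,
# and keep the thread open by resending the whole exchange so the system can
# see what has already transpired.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user",
                 "content": "You assumed we run three shifts; we run one. "
                            "Please redo the answer with that correction."})

reply = client.chat.completions.create(model=MODEL, messages=messages)
print(reply.choices[0].message.content)
```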

Shape of Governmental Regulation to Come

It has been suggested that regulation of GAI by government will impair US national security because “China is ahead of us and will invent and distribute and in fact utilize better AI and hamstring US intelligence.”  This argument seems far-fetched, particularly since China is ahead of us in only one respect: restricting its GAI development to adhere to the Communist Party platform and not allowing any creativity.

But what are the likely areas of future US governmental regulation?

At the Bar Association meeting, the lawyers suggested the following approaches to new laws:

  1. To avoid breaches of privacy, prohibit systems from collecting or storing information that is not core to the business of a given company.  For example: if you are selling perfume, why do you need the customer’s height and weight?
  2. Grant a private right of action for injured consumers to sue companies that use GAI to harm the consumer.  The government will be swamped with work in this area; private litigants can help.  Class actions create the economy of scale by which consumer rights can be asserted.
  3. The FTC currently prohibits “unfair trade practices.”  Define “unfair” for GAI.
  4. Require developers to search out bias in the data being used to train the GAI systems.

Finally, today’s New York Times carried an article reporting on the suggestions of Microsoft President Brad Smith, who requested government laws covering the following:

  1. A “brake” system to slow down or stop AI programs that are doing harm.
  2. Clarify the obligations of systems developers.
  3. Place notices on images and videos if they are machine-generated.
  4. Require a government license for developers to release “highly capable” AI systems, which would require follow-up policing of the use of those systems to find abuses.

None of these proposals addresses the risk of gross misuse of GAI for improper purposes by crooks, politicians or dictatorships.  A crime is a crime, I guess, so if thieves steal by use of GAI and you can prove it, then we are good.  But when politicians cheat, the freedom of political speech has created a world in which control is very difficult.  And, by definition, when the government itself misuses GAI it is difficult to fully trust government law to halt that practice.

Regulation of GAI by Current Law

Do we need new laws to describe what GAI can and cannot do in the hands of businesses?  Yes and no.

Some laws affecting business apply to GAI-generated activity (ads; on-line experience and interface).  Under the Federal Trade Commission rules you cannot undertake unfair trade practices such as tricking or lying to customers.  Most states have similar and sometimes more granular laws.

You cannot defraud customers.  If GAI lies (well, hallucinates) that can be a fraud.

Current law may require transparency (“you are talking to a machine”) and prevent bias.

The FTC has the tools to pursue developers of GAI if they build in bias, even unintentionally; so it is not just the user of GAI that can face legal liability under current law.

But in fact the US is slow to regulate any new technology.  It was stated that the US, as a society, likes to permit new tech to develop before limiting it by law.  (The EU, which does not share this alleged American instinct, is much further along than the US in issuing governmental controls over GAI.)

But GAI does present areas of risk, for business and for citizens, that are not fixable under current US regulations.  Next post will discuss the shape of regulations to come.

History of AI: What the Hell Is Different About GAI?

A program today at the Boston Bar Association started with a step backwards, to try to educate lawyers (slow learners, all of us) about why Generative AI is different from “old” AI and to address the hype that it is so revolutionary.

First, the history.  Forgive the simplicity of how I present it, but it worked for me.

We were told that the world of AI started with “rules-based” systems.  That meant that you put into a computer the answers to the questions.  Example: how much is 2+2?  The machine was programmed to follow the rule “if asked what 2+2 is, reply 4.”

The next phase extended this to “machine learning.”  The machine gained experience and remembered it.  The more things the machine was asked, the more answers it “learned.”

Next came better machine learning, called “deep learning.”  Same idea: more data = more answers.

The present phase of machine learning is what we have now: GAI, or generative artificial intelligence.  This includes ChatGPT, much in the news.  It is a so-called “large language model.”  It is prompted by various human inputs: voice, image, text query.  Lots more data, better storage, and faster computers that can handle it.  The result is the ability to answer more questions, based on more information, more quickly.
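
For readers who, like me, need a picture: here is a toy sketch in Python of the contrast described above, under assumptions entirely of my own (the class names and the stored question-and-answer pairs are invented for illustration).  A rules-based system only answers questions someone has already typed in; a “learning” system, in this cartoon version, remembers the examples it has been shown and answers a new question by matching it against the closest remembered one.  Real machine learning is vastly more sophisticated, but the direction of travel is the same: more examples, more answers.

```python
from difflib import get_close_matches

# Rules-based: the programmer writes every answer in advance.
RULES = {
    "what is 2+2": "4",
    "what is the capital of france": "Paris",
}

def rules_based_answer(question):
    # Only a question that exactly matches a pre-written rule gets an answer.
    return RULES.get(question.lower().strip(), "I have no rule for that.")

# "Learning," very loosely: remember the examples shown to the system and
# answer a new question by finding the closest remembered one.
class ExperienceBot:
    def __init__(self):
        self.memory = {}

    def teach(self, question, answer):
        self.memory[question.lower().strip()] = answer

    def answer(self, question):
        matches = get_close_matches(question.lower().strip(), list(self.memory), n=1)
        if matches:
            return self.memory[matches[0]]
        return "I have not seen anything like that yet."

bot = ExperienceBot()
bot.teach("what is 2+2", "4")
bot.teach("who wrote the songs in Gigi", "Lerner and Loewe")

print(rules_based_answer("what is 2+3"))      # no exact rule, so no answer
print(bot.answer("who wrote Gigi's songs"))   # close to a remembered example, so it answers
```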

I have had several discussions, by the way, in the short time since I started these GAI posts, with people telling me that clearly the revolutionary nature of GAI is hyped and that I should calm down.  You might be interested to know that the very first sentence at the Bar Association program was “GAI will change the world more than anything in human history.”  At the very end, the closing thought was to the effect that the world is overstating the impact of GAI over the next two years and understating it over the next decade.

What will follow will be a series of focused, shorter blog posts based on the discussion at the Bar Association as it relates to what businesses should be doing and how lawyers should advise businesses, with related risk scenarios identified.  Oh yes, and some risk observations from the speaker from the Civil Liberties Union (disclosure: I am an ACLU member and once upon a time litigated pro bono for them).


AI and Company Boards

It is not clear what boards of directors should be doing about AI.  (The answer is most pressing for public companies, given anticipated SEC and possible stock exchange promulgations and mandatory disclosures, but much the same conundrum will affect managers of private companies, which face the same business risks if not the same level of disclosure risk.)

The obvious answers:

  1. Protect systems from hacking and intrusion.
  2. Restrict use of AI on company platforms.
  3. Alert the persons evaluating ERM (enterprise risk management), such as the Audit Committee or the Risk Committee, and make sure all parts of the company are included, to avoid missing some risk hidden in a functional corporate “silo.”
  4. Obtain outside consulting support, perhaps direct employment of experts, and include a knowledgeable member on the board of directors.
  5. As their efficacy improves, employ systems that can identify received input as machine-generated (current technology is spotty; ask any university trying to analyze term papers).

The fundamental problem for companies and the persons responsible for running them is that the risk is new, powerful and sourced from outside, yet at the same time subtle, and in part it turns on the quality of judgments made in reliance on unreliable input.  The arguable answers for a board of directors are: (i) don’t develop AI (too late); (ii) don’t use it (largely too late and in any event uncontrollable); (iii) make sure it is not corrupt at inception or corrupted in transition (good luck with that); (iv) rely on government regulation (whose government, when, and with what bias?); (v) ?

Solutions seem to lie beyond board governance actions, yet actions must and will be taken.

What will insurance against AI disruption or fraud look like and at what price point?

Can a board dare risk passing on the clear business benefits of AI in speed, efficiency and ability to eliminate some human overhead?  What will the shareholders say?

Meta Releases AI for Public Use

Today’s Times reports that Meta has released its AI code to the public so that anyone can use it as a base for creating chatbots.  The argument is that the technology works best if shared, and that it is the disseminators of chatbots who should police and prevent misuse.

Critics naturally pointed out that regulation at the point of creation, which could be controlled by the code inventors at, for example, the OpenAI, Meta and Google level, thus becomes impossible.

The Meta release is particularly potent because, according to the Times, the version provided to the public includes what are known as “the weights.”  The algorithms have already been processed to reflect what was learned from various data sets; that learning takes time, money, sophistication, and specialized chips not generally available.  This release thus not only empowers new chatbots, but also provides a technological head start to anyone using the technology.  That may advantage the good guys, but not everyone has benign intent.
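
To make the “head start” point concrete, here is a toy sketch of my own, not Meta’s actual release or model: with published weights, loading an already-trained model takes a few lines and a download, whereas producing those weights yourself would require the long, expensive training run on specialized chips described above.  The sketch assumes the Hugging Face transformers library and uses a small, publicly released model (“gpt2”) purely as a stand-in.

```python
# Toy illustration: the value of released weights is that the expensive
# learning has already been done; you simply download the result and use it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any model published together with its weights

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # downloads the learned weights

prompt = "Releasing a model's weights means that"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```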

Although commentary on the details of AI releases best rests with those who have specific technical expertise, which excludes this blogger, I am disquieted by the comments of a Berkeley researcher, who drew an analogy to the sale of grenades in a public grocery store.  His research disclosed that the AI provided instructions for disposing of dead bodies, as well as generating racist comments supporting the views of Hitler.


AI Risks

How to think about risk when dealing with a tool of great efficiency and business value?  Two ways: risks identified and widely discussed, and risks no one is talking about with specificity.

On May 16, CEO Altman of OpenAI, the company that developed GPT-4 (one of the most powerful AI products yet), addressed a Senate committee and asked for government regulation to prevent abuse.  Congress, never robust in controlling technology, displayed palpable ignorance as to what AI really is; Altman, for his part, predicted that lost jobs would be offset by the technology creating new and different jobs.

Note that in March of this year, over 1,000 tech executives and developers issued a letter outlining certain risks and stating that “profound risks to society and humanity” are possible.  Since publication, the number of signers has increased to over 27,000.  The risks openly being discussed: AI speaks convincingly but is often wrong, as it makes up answers; AI can carry programmatic bias or toxic information; disinformation spreads so quickly today that accuracy is extremely critical; jobs will be lost at a time when we are not educating for future jobs (and indeed recent data shows that American education slipped so badly in the pandemic that some predict it will take from 13 to 27 years to get back to where we were just before the pandemic); and since AI replaces lower-skill jobs, where will new jobs come from?

The risk of “loss of control” of society, the Terminator/Skynet story line, is indeed deemed unlikely, yet it is contemplated that AI will write its own code; what will a hallucinating tool write?  I refer you to my February 23 post entitled “Chatbox as a Destroyer of Worlds,” reporting on a now-famous columnist’s conversation with a chatbot in which the technology attempted to wean the reporter away from his spouse and to have him requite the love the chatbot professed for him.

Here is what is not mentioned with any real focus: every tool can be used for good or evil.  Powerful tech may be controlled by law but can be hijacked for evil, and the more powerful the tool, the more powerful the evil.  It can be hacked, modified to remove controls, stolen, used by crooks, used for ransomware, misused by police or the military, used to capture biometric data, and used by dictators (already done in China) or by politicians who are so certain they hold the only true answers and morals that they will quash fundamental liberties.

Not surprisingly, two “governments” are already far ahead of the US with respect to regulation of AI.  The EU is about to promulgate rules controlling what is developed.  And, in graphic demonstration of risk, China is moving quickly to make sure that developed AI is politically correct from the perspective of the ruling regime.

Meanwhile, Congress is still at the learning stage, while the President has merely chided senior AI executives that they are moving into dangerous territory.  And, notwithstanding the 27,000 signatures to the March letter, which suggested a six-month moratorium on AI development while regulation is considered, no industry moratorium has been established.
