GAI in the News Today

At the risk of over-posting, I cannot resist sharing several items now in the news that are just so fascinating and so informative about the current state of AI activity.

*A suit has been filed in California, the land of innovation even in law, against social media firms for damages derived from addiction to that media.  No reason that theory could not be used against AI, either against the developer or against a person or company that put out AI-generated content that caused or led to an addiction.

*If an AI hallucination wrongly accuses a person of something horrible, can that be slander?  If spoken by a person, it could support a legal suit.  But if it is generated by an AI system, who is the speaker?  The machine is the speaker, but is it a person?  Or is it the designer of the system?  Is, for example, Microsoft liable to a John Doe when its system hallucinates and says that John Doe stole from his employer?

*Two fascinating items in the NYTimes today:

  1. Nvidia is a manufacturer of chips used in AI.  Current interest in AI spiked the share price of this company by 20% yesterday, giving it a market value of $775B, the fifth-highest valuation of any company anywhere.  That seems, inferentially, to support the hype that AI is going to be huge in the lives of human beings.
  2. An advice column on how to ask questions of GAI and get the type of answer you want (sketched in code below): start by telling the system the level of expertise to assume and how the answer will be used (“Act as if you are an expert in XYZ and you are going to improve your manufacturing of XYZ”); ask the system “Do you need any more information to do this task?”; if the reply is wrong, ask for a redo, pointing out the error; and keep your threads of inquiry open so that in follow-up responses the system can see what has transpired and perhaps learn how better to answer.
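For the curious, here is a minimal sketch of those four tips as a conversation loop.  The `send_to_model` function is a hypothetical stand-in for whatever chat API you use, not any particular vendor's product; the point is the shape of the exchange, not the plumbing.

```python
def send_to_model(messages):
    # Hypothetical stand-in for a real chat API call; returns a canned
    # reply here so the sketch runs on its own.
    return "(model reply would appear here)"

# Tip 1: set the expertise level and the intended use of the answer up front.
messages = [{"role": "user",
             "content": "Act as if you are an expert in perfume manufacturing. "
                        "I will use your answer to improve my production line."}]

# Tip 2: invite the model to ask for anything it is missing.
messages.append({"role": "user",
                 "content": "Do you need any more information to do this task?"})
reply = send_to_model(messages)

# Tip 3: if the reply is wrong, ask for a redo and point out the error.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user",
                 "content": "Your step 2 is wrong; please redo the answer "
                            "with that error corrected."})

# Tip 4: keep the whole thread, so each follow-up call sees what has
# transpired and can answer in context.
reply = send_to_model(messages)
print(reply)
```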

I note that the last suggestion would require me to be more polite and politic with my AI than I am with the human partners from whom I seek advice.  Sort of counter-intuitive….


Shape of Governmental Regulation to Come

It has been suggested that regulation of GAI by government will impair US national security because “China is ahead of us and will invent and distribute and in fact utilize better AI and hamstring US intelligence.”  This argument seems far-fetched, particularly since China is ahead of us only in restricting its GAI development to adhere to the Communist Party platform and to allow no creativity.

But what are the likely areas of future US governmental regulation?

At the Bar Association meeting, the lawyers suggested the following approaches to new laws:

  1. To avoid breach of privacy, prohibit systems from collecting or storing information that is not core to the business of a given company.  For example: if you are selling perfume, why do you need the height and weight of the customer?  (See the sketch after this list.)
  2. Grant a private right of action for injured consumers to sue companies that use GAI to harm the consumer.  The government will be swamped with work in this area; private litigants can help.  Class actions create an economy of scale by which consumer rights can be asserted.
  3. The FTC currently prohibits “unfair trade practices.”  Define “unfair” for GAI.
  4. Require developers to search out bias in the data being used to train the GAI systems.
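A minimal sketch of the data-minimization idea in item 1, in plain Python.  The field names and the allowlist are invented for illustration; the principle is simply that anything not core to the business never gets stored.

```python
# Toy data-minimization filter: keep only fields core to the business,
# drop everything else before storage. Field names are invented.
CORE_FIELDS = {"name", "email", "shipping_address", "scent_preference"}

def minimize(customer_record: dict) -> dict:
    """Return only the fields a perfume seller arguably needs."""
    return {k: v for k, v in customer_record.items() if k in CORE_FIELDS}

submitted = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "shipping_address": "1 Main St",
    "scent_preference": "citrus",
    "height_cm": 170,  # not core to selling perfume: dropped
    "weight_kg": 60,   # not core to selling perfume: dropped
}

print(minimize(submitted))  # height and weight never reach storage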

Finally, today’s New York Times carried an article reporting on the suggestions of Microsoft President Brad Smith, who requested government laws covering the following:

  1. A “brake” system to slow down or stop AI programs that are doing harm.
  2. Clarify obligations of systems developers.
  3. Place notices on images and videos if they are machine-generated.
  4. Require a government license for developers to release “highly capable” AI systems, which would require follow-up policing of the use of those systems to find abuses.

None of these proposals addresses the risk of gross misuse of GAI for improper purposes by crooks, politicians or dictatorships.  A crime is a crime, I guess, so if thieves steal by use of GAI and you can prove it, then we are good.  But when politicians cheat, the freedom of political speech has created a world in which control is very difficult.  And, by definition, when the government misuses GAI, it is difficult to fully trust government law to halt that practice.

Regulation of GAI by Current Law

Do we need new laws to describe what GAI can and cannot do in the hands of businesses?  Yes and no.

Some laws affecting business apply to GAI-generated activity (ads; on-line experience and interface).  Under the Federal Trade Commission rules you cannot undertake unfair trade practices such as tricking or lying to customers.  Most states have similar and sometimes more granular laws.

You cannot defraud customers.  If GAI lies (well, hallucinates), that can be a fraud.

Current law may require transparency (“you are talking to a machine”) and prevent bias.

The FTC has the tools to pursue developers of GAI if they build in bias, even without intent; so it is not just the user of GAI that can have legal liability under current law.

But in fact the US is slow to regulate any new technology.  It was stated that the US as a society likes to permit new tech to develop before limiting it by law.  (The EU, which does not share this alleged American instinct, is much further along than the US in issuing governmental controls of GAI.)

But GAI does present areas of risk, for business and for citizens, that are not fixable under current US regulations.  The next post will discuss the shape of regulations to come.

History of AI–What the Hell is Different About GAI?

A program today at the Boston Bar Association started with a step backwards, trying to educate lawyers (slow learners, all of us) about why Generative AI is different from “old” AI and about the hype that it is so revolutionary.

First, the history. Forgive the simplicity of how I present it but it worked for me.

We were told that the world of AI started with “rules-based” systems.  That meant that you put into a computer the answers to the questions.  Example: how much is 2+2?  The machine was programmed to follow the rule: if asked “what is 2+2,” reply “4.”

The next phase extended this to “machine learning.”  The machine gained experience and remembered it.  The more things the machine was asked, the more answers it “learned.”
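A toy contrast, in the spirit of this simplified history (the examples are invented, and real systems are vastly more elaborate):

```python
# Rules-based: every answer is programmed in advance.
RULES = {"what is 2+2": "4"}

def rules_based(question: str) -> str:
    return RULES.get(question, "I was not programmed for that.")

# "Machine learning," in the speakers' simplified sense: the system
# accumulates experience and can answer anything it has seen before.
class ExperienceMachine:
    def __init__(self):
        self.memory = {}

    def learn(self, question: str, answer: str) -> None:
        self.memory[question] = answer

    def ask(self, question: str) -> str:
        return self.memory.get(question, "I have not seen that yet.")

machine = ExperienceMachine()
machine.learn("what is 2+2", "4")
machine.learn("what is the capital of France", "Paris")

print(rules_based("what is the capital of France"))  # not programmed for that
print(machine.ask("what is the capital of France"))  # Paris
```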

Next came better machine learning, called “deep learning.”  Same thing: more data = more answers.

The present phase of machine learning is what we have now: GAI, or generative artificial intelligence.  This includes ChatGPT, much in the news.  It is a so-called “large language model.”  It is prompted by various human inputs: voice, image, text query.  Lots more data, better storage, faster computers that can handle it.  The result is the ability to answer more questions based on more information more quickly.

I have had several discussions, by the way, in the short time since I started these GAI posts, with people telling me that clearly the revolutionary nature of GAI is hyped and that I should calm down.  You might be interested to know that the very first sentence at the Bar Association program was “GAI will change the world more than anything in human history.”  At the very end, the closing thought was to the effect that the world is overstating the impact of GAI over the next two years and understating it over the next decade.

What will follow is a series of focused, shorter posts based on discussion at the Bar Association, as they relate to what businesses should be doing and how lawyers should advise businesses, with related risk scenarios identified.  Oh yes–and some risk observations from the speaker from the American Civil Liberties Union (disclosure: I am an ACLU member and once upon a time litigated pro bono for them).



AI and Company Boards

It is not clear what boards of directors should be doing about AI.  (The answer will be applicable to public companies with anticipated SEC and possible Stock Exchange promulgations and mandatory disclosures, but much the same conundrum will affect managers of private companies, which will suffer the same business risks if not the same level of disclosure risk.)

The obvious answers:

  1. Protect systems from hacking and intrusion.
  2. Restrict use of AI on company platforms.
  3. Alert the persons evaluating ERM (enterprise risk management), such as the Audit Committee or the Risk Committee, and make sure all parts of the company are included, to avoid missing some risk hidden in a functional corporate “silo.”
  4. Obtain outside consulting support, perhaps direct employment of experts, and include a knowledgeable member on the board of directors.
  5. As their efficacy improves, employ systems that can identify received input as machine-generated (current technology is spotty; ask any university trying to analyze term papers).

The fundamental problem for companies and the persons responsible for running them is that the risk is new, powerful and sourced from outside, yet at the same time subtle, and in part based on the quality of judgment relying on unreliable input.  The arguable answers from a board of directors are: i) don’t develop AI (too late); ii) don’t use it (largely too late and in any event uncontrollable); iii) make sure it is not corrupt at inception or corrupted in transition (good luck with that); iv) rely on government regulation (whose government, when, and with what bias?); v) ?

Solutions seem to lie beyond board governance actions, yet actions must and will be taken.

What will insurance against AI disruption or fraud look like and at what price point?

Can a board dare risk passing on the clear business benefits of AI in speed, efficiency and ability to eliminate some human overhead?  What will the shareholders say?

META Releases AI for Public Use

Today’s Times reports that META has released its AI code to the public so that anyone can use it as a base for creating chatbots.   The argument is that the technology works best if shared and that it is the disseminators of chatbots who should police and prevent misuse.

Critics naturally pointed out that regulation at the point of creation, which could be exercised by code inventors at, for example, the OpenAI, META and Google level, thus becomes impossible.

The META release is particularly potent because, according to the Times, the version provided to the public includes what are known as “the weights.”  The algorithms have already been processed to reflect what was learned from various data sets; that learning takes time, money, sophistication and specialized chips not generally available.  This release thus not only empowers new chatbots but also provides a technological head start to anyone using the technology.  This may advantage the good guys, but not everyone has benign intent.
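For readers wondering what “the weights” are in practice: they are the trained numeric parameters of the model, and releasing them lets anyone load the model and generate text locally, skipping the expensive training step.  A minimal sketch, assuming the Hugging Face transformers library; the model identifier is a placeholder, not the actual name of META’s release.

```python
# Sketch of loading released model weights with the Hugging Face
# `transformers` library. "some-org/released-model" is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("some-org/released-model")
model = AutoModelForCausalLM.from_pretrained("some-org/released-model")

# With the weights in hand, generation is a few lines; this is the
# "head start" described above, since the costly training is already done.
inputs = tokenizer("The regulation of AI should", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```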

Although commentary on the details of AI releases best rests with those with specific technical expertise, which excludes this blogger, I am disquieted by the comments of a Berkeley researcher, who drew an analogy to the sale of grenades in a public grocery store.  His research disclosed that the AI provided instructions for the disposal of dead bodies, as well as generating racist comments supporting the views of Hitler.


AI Risks

How to think about risk when dealing with a tool of great efficiency and business value?  Two ways: risks identified and widely discussed, and risks no one is talking about with specificity.

On May 16, Sam Altman, CEO of OpenAI, the company that developed GPT-4 (one of the most powerful AI products yet), addressed a Senate committee and asked for government regulation to prevent abuse.  Congress, never robust in controlling technology, displayed palpable ignorance as to what AI really is, even as Altman asked for regulation to prevent harm while predicting that lost jobs would be covered by the technology creating new and different jobs.

Note that in March of this year, over 1,000 tech executives and developers issued a letter outlining certain risks and stating that “profound risks to society and humanity” are possible.  Since publication, the number of signers has increased to over 27,000.  The identified risks openly being discussed: AI speaks convincingly but is often wrong, as it makes up answers; AI can carry programmatic bias or toxic information; disinformation spreads so quickly today that accuracy is extremely critical; jobs will be lost at a time when we are not educating for future jobs (indeed, recent data shows that American education slipped so badly in the pandemic that some predict it will take from 13 to 27 years to get back to where we were just before it); and since AI replaces lower-skill jobs, where will new jobs come from?

The risk of “loss of control” of society, the Terminator/Skynet story line, is indeed deemed unlikely, yet it is contemplated that AI will write its own code; what will a hallucinating tool write?  I refer you to my February 23 post entitled “Chatbox as a Destroyer of Worlds,” reporting on a now-famous columnist’s conversation with a chatbot in which the technology attempted to wean the reporter away from his spouse and to have him requite the love the chatbot itself professed.

Here is what is not mentioned with any real focus: every tool can be used for good or evil.  Powerful tech may be controlled by law but can be hijacked for evil, and the more powerful the tool, the more powerful the evil.  It can be hacked, modified to remove controls, stolen, used by crooks, used for ransomware, misused by police or military, used to capture biometric data, used by dictators (already done in China) or used by politicians so certain they hold the only true answers and morals that they will quash fundamental liberties.

Not surprisingly, two “governments” are already far ahead of the US with respect to regulation of AI.  The EU is about to promulgate rules controlling what is developed.  And, in graphic demonstration of risk, China is moving quickly to make sure that developed AI is politically correct from the perspective of the ruling regime.

Meanwhile, Congress is still at the learning stage, while the President has merely chided senior AI executives that they are moving into dangerous territory.  And, notwithstanding the 27,000 signatures to the March letter, which suggested a six-month moratorium on AI development while regulation is considered, no industry moratorium has been established.


AI and Its Place in Social History

Two points here: pace of AI penetration and how to understand that penetration.

To get to 50 million users, the automobile took five decades.  Pokémon Go got there in a few days.  Other apps reached that number in a matter of weeks.  AI will be here in spades immediately, if not yesterday.  You cannot duck it.  Educational systems need to train for it, and I don’t mean just blocking term papers written by chatbot tech; the future of work and information is about to be permanently and radically transformed.  If schools lately have taught to the test, today they must teach to the AI future.

The AI transformation has been characterized as so powerful and disruptive that it will be comparable in impact to the Industrial Revolution.  Stop and think about that.  The Industrial Revolution caused, for the following century and in many ways still today, massive human misery in social fabric, economics, human values and societal equity.  The fearful have a point.  I fear the closer analogy is to atomic energy: huge pluses, and huge risks of a cataclysmic nature.


AI Understood–Maybe

This is the first in what I envision as a series of posts tracking developments in AI.  Two schools of thought are emerging.  The first is that AI is the future of progress and work and a great boon to civilization.  The second is that AI may well spell the demise of liberty.  Both camps share the view that AI is on the march and its primacy is inevitable.

Some simple understandings are set forth below; for these understandings I credit the New England Chapter of the National Association of Corporate Directors, which ran a program yesterday to address the promise of AI and the role of the board of directors in handling this tool.  (Disclosure: I am a member of NACD/New England, served on its Board and now am on its Advisory Board.)

Simple takeaways: AI is not new; ask Alexa, ask Siri, and note robotic telephone sites as simple examples.  The algorithms that drive AI have been with us for many years.  The explosion of AI power is based upon the convergence of three things: a vast explosion of data that can be utilized by the algorithms; more powerful computing that can rapidly handle huge data sets; and a massive increase in data storage technology at low cost.

Simple facts: Business people see profit and efficiency.  Scientists see pluses and minuses, and some are carried away by potential; today’s on-line NY Times carries a story of a Microsoft research paper claiming development of AGI [get used to the new shorthand–this is “artificial general intelligence,” which means thinking just like a human being].  Everyone recognizes that AI reaches for analogy when lacking specific data and hence often generates inaccurate output (kindly called “hallucinations”).

Hype and risk abound.  From time to time I will post about both.  I will not post ChatGPT generated text, in case you were wondering.

SEC Heightens Regulation of Advisers to Private Funds

The SEC has enacted new regulations requiring reporting on the part of hedge fund and PE fund advisers, some of which are focused on disclosing events within the management structure and some of which are designed to signal economic concerns.  The various regulations become effective at the six-month and twelve-month marks after enactment.

Aside from rather straightforward management matters such as removal of general partners and certain fund-termination events, large PE funds will be required to report general or limited partner clawbacks on an annual basis.

In my view, the most interesting regulations are aimed at alerting investors to possible investment performance trouble.  Reports are required as to events which could stress the fund or harm the investor, extraordinary investment losses, margin and default events, major events involving prime broker relationships, and important withdrawal and redemption events.

Commentary from SEC Chair Gensler noted the expansion of the power and impact of these private funds on the broader capital markets.  No doubt true, but these regulations surely also reflect the majority Commission view that tighter regulation and disclosure are required across the investment infrastructure landscape.