AI Risks

How should we think about risk when dealing with a tool of such great efficiency and business value?  Two ways: the risks already identified and widely discussed, and the risks no one is talking about with any specificity.

On May 16, Sam Altman, CEO of OpenAI, the company that developed GPT-4 (one of the most powerful AI products yet), addressed a Senate committee and asked for government regulation to prevent abuse.  Congress, never robust in controlling technology, displayed palpable ignorance as to what AI really is, while Altman predicted that jobs lost to the technology would be offset by the new and different jobs it creates.

Note that in March of this year, over 1,000 tech executives and developers issued a letter outlining certain risks and warning that “profound risks to society and humanity” are possible.  Since publication, the number of signers has grown to over 27,000.  The risks openly being discussed: AI speaks convincingly but is often wrong, as it makes up answers; AI can carry programmed bias or toxic information; disinformation spreads so quickly today that accuracy is extremely critical; jobs are being lost at a time when we are not educating for future jobs (indeed, recent data shows that American education slipped so badly during the pandemic that some predict it will take 13 to 27 years to get back to where we were just before it); and since AI replaces lower-skill jobs, where will the new jobs come from?

The risk of “loss of control” of society, the Terminator/Skynet story line, is indeed deemed unlikely, yet it is contemplated that AI will write its own code; what will a hallucinating tool write? I refer you to my February 23 post entitled “Chatbox as a Destroyer of Worlds,” reporting on a now-famous columnist’s conversation with a chatbox in which the technology attempted to wean the reporter away from his spouse and have him requite the love the chatbox itself professed to feel.

Here is what is not mentioned with any real focus: every tool can be used for good or evil.  Powerful tech may be controlled by law but can be hijacked for evil.  And the more powerful the tool, the more powerful the evil.  It can be hacked, modified to remove its controls, stolen, used by crooks, used for ransomware, misused by police or military, used to capture biometric data, and used by dictators (already done in China) or by politicians so certain they hold the only true answers and morals that they will quash fundamental liberties.

Not surprisingly, two “governments” are already far ahead of the US with respect to regulation of AI.  The EU is about to promulgate rules controlling what may be developed.  And, in a graphic demonstration of the risk, China is moving quickly to make sure that any AI developed there is politically correct from the perspective of the ruling regime.

Meanwhile, Congress is still at the learning stage, while the President has merely chided senior AI executives that they are moving into dangerous territory.  And, notwithstanding the 27,000 signatures to the March letter, which called for a six-month moratorium on AI development while regulation is considered, no industry moratorium has been established.
