No, really.
Well, sort of, according to today’s online New York Times.
In “Terminator,” the computer system “Skynet” became self-aware, decided humans were a nuisance, and started to eradicate them. This plot line about the risk of Generative AI was viewed by many as a hysterical oversimplification of an impossible scenario: after all, we write the code, and we can simply tell it, “no, don’t do that.”
I now quote verbatim the headline from the Times:
“A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.”
The short text in this warning letter includes the following sentence: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war…”
Surely, you will say something like, “Well, some minor programmers who watch too much TV are musing, letting their fantasies get the better of them….” You would be wrong.
This letter was signed by over 350 executives and AI professionals, including the President of OpenAI, the CEO of Google DeepMind, and the CEO of Anthropic. And by two of the three dudes who won the Turing Award for helping invent this stuff.
In the movie, Sarah Connor learned from Arnold Schwarzenegger (of all people!) the reality of destruction by AI, and she was put into a mental hospital for spouting such fears. The way AI is developing today, there will not be enough hospital beds in the world to house all the fearmongers now emerging from the highest ranks of the AI profession.