A group of 350 executives from the leading artificial intelligence developers, along with academics and researchers who are experts in the technology, have signed a new manifesto warning of “the most serious risks of advanced AI”. In a brief statement, the signatories declare that this technology poses a “risk of extinction” for humanity that should be treated with the same seriousness as pandemics or nuclear war.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the full petition states. Among the signatories is the top leadership of OpenAI, the company that developed ChatGPT and is now pressing the international community to regulate this technology. The promoters of the letter include Sam Altman, its CEO, who was in Europe last week and met with Pedro Sánchez, as well as 20 of the company’s executives and researchers.
Also on the list of signatories are Kevin Scott, Microsoft’s chief technology officer, and Demis Hassabis, head of Google DeepMind, the multinational’s artificial intelligence research division. Google is the company that contributes the most signatures to the manifesto, with 38 executives, researchers and university professors linked to it. There are also representatives of smaller developers such as Anthropic, Stability AI and Inflection AI.
This is the second international action of its kind in two months. In the previous one, published at the end of March, hundreds of business leaders and academics expressed similar concerns about the dangers this technology poses if it is not regulated soon. In the text that introduces the petition equating AI with nuclear war, the signatories of the letter published today acknowledge that “journalists, political leaders and the general public are increasingly debating a wide spectrum of important and urgent risks of AI”, but that even so, in their view, “it can be difficult to express concern about some of the most serious risks of advanced AI”.
Among the 350 signatories are two Spaniards: Helena Matute, Professor of Psychology at the University of Deusto, and Ramon Carbó-Dorca, theoretical chemist and emeritus professor at the University of Girona. “I think it is very important that AI does not continue to grow uncontrollably, that our leaders do something, and that we all become aware of how important it is: it is a very dangerous weapon,” Matute explained in statements to elDiario.es. “We must reach a global agreement on a minimum level of safety, which today is not guaranteed by anyone, and which will not be achieved overnight. We must take preventive action. Many things can go wrong. We must act, as has been done with the atomic bomb, human cloning and other technologies that involve great risks,” she urges.
The call by these entrepreneurs and academics for artificial intelligence regulation coincides with a large-scale EU investigation into possible privacy violations that OpenAI may have committed with ChatGPT. The continent’s data protection regulators suspect that Europeans’ personal information was used without their consent to train the system.
On his European tour last week, Altman hinted that if he disagrees with the outcome of the investigation and with the content of the AI Regulation being finalized in Brussels, he could order ChatGPT to be withdrawn from the EU. Google, for its part, has kept Bard, its analogous system, out of Europe: it has been deployed in 180 countries, but not on the continent.