The leaders of OpenAI, the developer of ChatGPT, have called for the regulation of “superintelligent” artificial intelligence (AI) through the creation of a body equivalent to the International Atomic Energy Agency, to protect humanity from the risk of creating something that could accidentally destroy it.
OpenAI CEO Sam Altman and co-founders Ilya Sutskever and Greg Brockman have published a short note on the company’s website calling for an international regulator to begin developing criteria to “inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security”, in order to reduce the “existential risk” that these systems may pose.
“In the next 10 years, it is conceivable that in most fields AI systems will exceed the skill level of experts and carry out as much productive activity as one of today’s largest corporations,” they say in the note. “In terms of potential advantages and disadvantages, superintelligence will be more powerful than other technologies humanity has had to deal with in the past. Our future can be much more prosperous, but to get there we have to manage the risks. Given the possibility of existential risk, we cannot limit ourselves to reacting.”
In the short term, the three signatories call for “some degree of coordination” between the companies at the forefront of AI research, with the aim of ensuring that the development of increasingly powerful models is integrated smoothly into society while prioritizing safety. Coordination, they write, could take the form of a government-led project or a collective agreement to limit the growth of AI capability.
Although researchers have been warning about the potential risks of superintelligence for decades, those risks have become more concrete with the acceleration of AI development. According to the Center for AI Safety (CAIS), created in the United States to “reduce societal-scale risks from artificial intelligence,” AI development poses eight categories of “catastrophic” and “existential” risk.
“Completely dependent on machines”
Beyond the fear some feel about the possibility of an AI so powerful that it destroys humanity, whether accidentally or intentionally, CAIS points to other, more insidious harms. A world in which AI systems are entrusted with ever more tasks could lead humanity to “lose the ability to govern itself and become completely dependent on machines”, a process of “weakening” through which the small group controlling the most powerful systems could “turn AI into a centralizing force”, producing a “monopolization of benefits” in an eternal caste system between rulers and ruled.
According to OpenAI’s leaders, “people around the world should democratically decide what the limits and default values of AI systems are” to prevent these risks from materializing, although they admit that “it is not yet known how to design such a mechanism”. Still, they say it is worth the risk to continue developing powerful systems.
“We believe it will lead us to a much better world than we can imagine today (we are already seeing the first examples in areas like education, creative work and personal productivity tools),” they write. Halting its development could also be dangerous, they warn. “Because the advantages are so enormous, the cost of building it is decreasing every year, the number of players developing it is increasing rapidly, and it is an inherent part of our current technological path. Stopping it would require something like a global surveillance regime, and even that wouldn’t guarantee it. So we have to get it right.”