The United States and China, the global pioneers in the development of artificial intelligence and constant rivals in the field, have for the first time joined the European Union, the United Kingdom, India and other powers in a joint declaration to “work together” on this technology in a way that is “safe” and adheres to minimum regulations.
Rishi Sunak’s Government published this Wednesday the so-called Bletchley declaration, named after the place in England where the AI summit is being held and where Alan Turing built the first machine to decipher codes encrypted by Germany in World War II. It is signed by the 28 countries present at the meeting and amounts to a generic commitment, but it has the value of being the first in which the delegations from Washington and Beijing share the negotiating table and reach a written pact. Among those signing the text are Spain and other large EU countries, India, Japan, Brazil and South Korea.
“We are determined to work together inclusively to ensure human-centric, trustworthy and responsible AI that is safe,” the text says. The governments announce that they will continue to meet – two more meetings, in France and South Korea, are already planned – and will focus in particular “on the wide range of risks posed by AI.”
“There is potential for serious, even catastrophic, deliberate or accidental harm arising from the most significant capabilities of these AI models. Given the rapid and uncertain pace of change in AI, and in the context of accelerating investment in technology, we affirm that deepening our knowledge of these potential risks and the actions to address them is especially urgent,” the text says.
Although only the European Union has advanced comprehensive legislation to regulate and limit AI, the participating countries commit to considering rules that balance benefits and risks, and insist on international cooperation to apply “common principles and codes of conduct.”
The statement refers in particular to the safety of systems that are “unusually powerful and potentially harmful,” whose security governments must verify, for example through testing. The signatories also pledge to be transparent “as appropriate to the context” and to have plans to “measure, control and mitigate potentially dangerous capabilities” and to prevent the “misuse” or loss of control of these technologies.
The armies
However, China has declined, for the moment, to join a more specific political declaration on the military use of artificial intelligence, presented by the United States and signed by thirty other countries, most of them European, to control and share the development of AI-powered weapons that is already under way.
Vice President Kamala Harris announced the signing of the joint declaration, which Spain has also joined. “Military use of artificial intelligence must be ethical, responsible and increase international security,” the statement says. “In particular, the use of artificial intelligence in armed conflict must respect States’ obligations under international humanitarian law… Military use of artificial intelligence capabilities must be held accountable, including such use during military operations within a responsible human chain of command and control. The principled approach to military use of AI must include careful consideration of risks and benefits, and must also minimize unintended biases and accidents.”
The United States promoted this declaration in February and it has now been signed by the majority of EU members, the United Kingdom, Australia, Japan, Singapore and some African countries such as Morocco, Liberia and Malawi. Absent are China, Russia, India and Pakistan, which, along with the United States, have the largest armies in the world.
The text insists on the importance of human control of autonomous capabilities, respect for the legal framework and the safety systems that must accompany any development. “States should apply appropriate safeguards to mitigate the risks of errors in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example by removing or disabling deployed systems when they have demonstrated unintended behavior.”
Today, autonomous weapons are already deployed on the battlefield, particularly in Ukraine, where drones are used for surveillance, defense and attack. In many cases, these are commercial drones. The Ukrainian Government denounced in September that the drones Russia uses for its attacks in Ukraine are of Iranian origin but contain components manufactured in Europe.
China and Russia, in particular, are developing military AI capabilities and neither is party to this political declaration. Moreover, as some experts point out, the main robotics manufacturers capable of selling autonomous weapons are not, for the moment, part of these commitments.
The regulation
The European Union is already negotiating a proposal to regulate artificial intelligence, and Spain, which holds the presidency of the EU Council this semester, hopes to close an agreement on the regulation during its term. The rules could begin to apply in 2026. The United States maintains a more hands-off approach to encourage innovation, although it also promises to pass new laws in the coming months.
Vice President Harris also announced this Wednesday the creation of a public institute for artificial intelligence safety within the United States Department of Commerce, as well as an action guide for companies and public authorities, a step less strict than regulation but a prelude to new laws. The measures include an obligation to test the most advanced artificial intelligence systems and submit them to review by the federal government before use, to prevent, for example, the production of chemical or biological weapons. There are also recommendations aimed in particular at combating scams and misinformation, such as limits on automated audio generation that can be used for misleading phone calls and the labeling of images and videos produced with artificial intelligence.
“History has taught us that in the absence of strong government regulation and oversight, some technology companies choose to prioritize profit over the well-being of their customers, the safety of our communities, and the stability of our democracies,” Harris said in a speech at the US embassy in London before joining the summit at Bletchley Park. “An important way to address these challenges, in addition to what we have already done today, is through legislation. Legislation that reinforces the security of artificial intelligence without stopping innovation.”
Regarding this supposed dilemma, Harris insisted that these are not incompatible objectives: “We reject the false choice between protecting the public or promoting innovation. We can and must do both. And we must do them quickly.”