Human Rights Groups Say Summit Meant to Rein in Killer Robots, AI War Tech Fails
Gizmodo.com reported on a 50-nation gathering in the Netherlands this week that was supposed to set standards for the responsible development and use of military technology based on AI. But according to human rights groups and non-proliferation experts interviewed, the event was a disaster.
“After two days of in-depth talks, panels, and presentations produced by around 2,500 AI experts and industry leaders, the REAIM (get it?) summit ended in a non-legally binding ‘call to action’ over the responsible development, deployment and use of military AI. The attendees also agreed to establish a ‘Global Commission on AI.’ That might sound lofty, but in reality, those initiatives are limited to ‘raise awareness’ about how the technology can be manufactured responsibly. Meaningful talks of actually reducing or limiting AI weapons were essentially off the table.”
A group called the “Stop Killer Robots Campaign” told the website that the event offered a “vague and incorrect vision” of the military use of AI, without providing any clarity on rules or limitations. Safe Ground, an Australian rights group, called the entire summit a “missed opportunity.”
The United States, which has opposed an international AI weapons treaty, issued a 12-point political declaration outlining its “responsible” autonomous systems strategy. The declaration, which comes just weeks after a controversial new Department of Defense directive on AI, says all AI systems should adhere to international human rights laws and have “appropriate levels of human judgment.” The rights groups said the language was meaningless.
“This Declaration falls drastically short of the international framework that the majority of states within UN discussions have called for,” Stop Killer Robots said in a statement. “It does not see the need for legally binding rules, and instead permits the development and use of Autonomous Weapons Systems, absent lines of acceptability.”
The summit was described as a major step backward because the majority of the 125 states represented in the U.N.’s Convention on Certain Conventional Weapons wanted to ban autonomous weapons development during a conference last year. UN Secretary-General António Guterres released a statement around the same time saying such systems should be prohibited under international law. Those efforts failed largely due to the U.S., China, and Russia, which all favor the development of these weapons. Efforts have now shifted to outlining a more acceptable set of rules for using them.
“During his speech, Palantir CEO Alex Karp reportedly said the Ukrainian military’s recent use of AI to positively identify targets on the battlefield had moved the question of AI weapons away from ‘highly erudite ethics discussion,’ to something with immediate real world consequences. The CEO previously said Ukrainians are using Palantir’s controversial data analytics software to carry out some of that targeting.”
A YouTube video about the conference gives an overview of the speakers and what they covered.
read more at gizmodo.com