An AI-generated image of a nuclear mushroom cloud, created with the Midjourney platform. Can new legislation help stop such an event?

Legislators Propose Bipartisan Bill to Regulate Use of AI With Nuclear Weapons

What was once a popular dramatic plot for sci-fi movies—the firing of nuclear weapons by computer—has been revived as a dystopian possibility now that AI is being integrated into military systems.

It turns out a bipartisan group of our elected representatives has seen the same movies we have, and they have decided to make nuclear war illegal—at least, one that gets started by a rogue AI.

Illegal? Really?

Benj Edwards, an AI and Machine Learning Reporter for Ars Technica, wrote about the proposed new legislation.

On Wednesday, U.S. Senator Edward Markey (D-Mass.) and Representatives Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.) announced bipartisan legislation that seeks to prevent an AI system from making nuclear launch decisions. The Block Nuclear Launch by Autonomous Artificial Intelligence Act would prohibit the use of federal funds for launching any nuclear weapon by an automated system without “meaningful human control.”

Could it pass both houses of Congress? This bill has some heavyweight, well-known names behind it. Cosponsors of the Block Nuclear Launch by Autonomous Artificial Intelligence Act in the Senate include Bernie Sanders (I-Vt.) and Elizabeth Warren (D-Mass.).

“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,” Markey said in a news release. “That is why I am proud to introduce the Block Nuclear Launch by Autonomous Artificial Intelligence Act. We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”

The new bill aims to codify the Defense Department's existing principle into law, and it also follows the recommendation of the National Security Commission on Artificial Intelligence, which called for the US to affirm its policy that only human beings can authorize the employment of nuclear weapons.

Will It Be Enough?

The very idea of giving control of our nuclear arsenal to an algorithm is absolutely frightening, yet automated systems already play a role in missile command centers around the planet.

Without AI running much of the show, our country would fall behind on every level of national security. There is no doubt our usual adversaries are developing AI for war, and this legislation won't stop their nuclear threats—but it will decrease the chance of our missiles being fired by a 'hallucinating' algorithm.

And it is most likely the revelations about the capabilities of GPT-4 that have lit a fire under the Senators and Representatives.

While no one fears GPT-4 will launch a nuclear strike, a group of AI researchers who evaluate the capabilities of today's most popular large language models for OpenAI fear that more advanced future AI systems may be a threat to human civilization. Some of that fear has transferred to the broader populace, even though worries over existential threats from AI remain controversial in the machine learning community.

“While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” Buck said in a statement. “I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions.”

The article doesn't say how likely this bill is to pass into actual law. The same group has also reintroduced a bill that would forbid the President from launching nuclear weapons without prior Congressional approval.

read more at arstechnica.com