Recent reports and testimony about possible abuses of AI to create bioweapons have been conveyed to Congress. (Source: Adobe Stock)

Experts Sound Alarms over Potential for Bioweapons Development by Bad Actors Using AI

From disinformation to discrimination in hiring tools to job displacement, many of the most visible concerns about AI have been identified and are beginning to be addressed by governments and civil rights groups. However, another looms larger: the potential for AI to help create biological weapons of mass destruction, a real concern among researchers who study how AI can be used to reverse-engineer viruses for treatments and to develop pharmaceutical drugs.

In testimony before a U.S. Senate committee this week, three AI research leaders warned that "rogue states or terrorists" could use AI technology to create bioweapons, according to a news report:

“Yoshua Bengio, an AI professor at the University of Montreal who is known as one of the fathers of modern AI science, said the United States should push for international cooperation to control the development of AI, outlining a regime similar to international rules on nuclear technology. Dario Amodei, the chief executive of AI start-up Anthropic, said he fears cutting edge AI could be used to create dangerous virus and other bioweapons in as little as two years. And Stuart Russell, a computer science professor at the University of California at Berkeley, said the way AI works means it is harder to fully understand and control than other powerful technologies.”

Their concerns are credible given how forcefully the U.S. military is adopting AI as part of its defense strategy, going far beyond drones and GPS analysis: the U.S. Navy, for instance, is deploying robotics in autonomous patrol-boat fleets and using AI to analyze acoustic data to distinguish smugglers' vessels from oil tankers, according to one report.

The worries about dictators incentivizing bad actors to develop bioweapons seem all the more ominous in light of the Covid-19 pandemic, which showed how devastating the spread of a highly contagious virus can be. Debate continues today over whether the virus originated in the Wuhan Institute of Virology and, if it did, whether it was engineered or mutated on its own.

A report in the Bulletin of the Atomic Scientists explores the range of possibilities in creating bioweapons. It concludes that the world needs a global initiative: "a World Economic Forum-like 'network of influence,' composed of exceptional individuals from business, academia, politics, defense, civil society, and international organizations, to act as a global Board of Trustees to oversee developments relevant to biological threats in science, business, defense, and politics and to decide on concerted cross-sector actions."

According to sources in another report, some of the "existential risks" attributed to general AI are overhyped, but regulation is still needed ahead of further advances, especially in the defense industry. Sam Altman, CEO of OpenAI, testified to Congress that the AI industry could cause significant harm to the world, precipitating calls for a national agency to prevent AI harms. Whether Congress will unite on that issue remains an open question.