AI Research’s Brightest Minds Warn of Malicious Use, Risk-Fraught Future

A team of preeminent AI researchers recently released a report on the troubling possibilities of AI misuse in the near future, focusing on how the technology could empower cybercriminals and other malicious actors in novel and potentially disastrous ways.

The team of 26 researchers, drawn from academic, industrial, and civil-society backgrounds and representing 14 eminent institutions, including the Oxford-based Future of Humanity Institute, OpenAI, the Electronic Frontier Foundation, and the University of Cambridge’s Centre for the Study of Existential Risk, focused on what tech news site Gizmodo summarizes as a “grim future” for AI.

Titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, the report defines malicious use “to include all practices that are intended to compromise the security of individuals, groups, or a society.” It highlights how cybercriminals, rogue actors, and even corporate and state powers might employ AI with potentially disastrous consequences, including cyber attacks, data theft, violence, mass surveillance and profiling, and political misinformation campaigns. The last of these has proven a prescient concern in light of the recent Cambridge Analytica scandal.


Above: Large-scale data mining and analysis are already being used to create targeted propaganda, a threat the study concludes will only grow as powerful machine learning technology increases in scope and accessibility.

Focusing on “AI technologies that are currently available (at least as initial research and development demonstrations) or are plausible in the next 5 years,” the study proposes several ways the technology could be misused. These include enhancing existing cybercrime methods such as spamming, hacking, and phishing through automation, and building more intelligent malware better able to emulate organic human traffic or otherwise defeat defensive AI systems. The study notes that those defensive systems will themselves become increasingly vulnerable to adversarial AI and “data poisoning.”

Additionally, the report speculates on how emerging AI technologies will introduce heretofore unimaginable risks to individuals and society. Its examples include the development of autonomous weapons systems, the misuse of AI to disable or hijack drones and autonomous vehicles, and the exploitation of sophisticated state surveillance to monitor and influence entire populations.


Above: AI technology can already emulate real people with a fair degree of accuracy. The study suggests similar techniques could be used in political misinformation campaigns or to create highly tailored “spear-phishing” attacks on individuals using simulated human communication.

While the report paints a potentially dire picture of AI misuse (and does not explore equally troubling indirect impacts of AI, such as mass automation-driven unemployment), it also charts a safer way forward. The researchers recommend four “high-level” preventative measures to minimize the extent and impact of malicious AI use in the coming years:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  4. The range of stakeholders and domain experts involved in discussions of these challenges should be actively expanded.

Additionally, the report defines broad targets for research and inquiry to support these measures, including “learning from and with the cybersecurity community,” “exploring different openness models” in ML/AI research, “promoting a culture of responsibility,” and “developing technological and policy solutions.”

For the full text of The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, including more specific examples of likely risks and preventative measures, visit maliciousaireport.com.