The Downside Of AI Could Be Dangerous If Not Addressed By Researchers Immediately
Since we started publishing Seeflection.com, we have focused on the positive growth and use of AI in all areas of life and business. More recently, however, the downsides have become increasingly apparent.
Issues plaguing AI range from the impact of trolls and spambots on social media to defense threats. A recent article on spectrum.ieee.org describes several frightening scenarios in which AI is being misused.
Depending on how it is developed and used, AI can cause job losses, algorithmic discrimination, and a host of other harms. Over the last decade, the AI community has grown increasingly aware of the need to innovate more responsibly. Today, there is no shortage of “responsible AI” initiatives—more than 150—which aim to provide ethical guidance to AI practitioners and to help them foresee and mitigate the possible negative impacts of their work.
Blind Spots
The problem is that the vast majority of these initiatives share the same blind spot. They address how AI could affect areas such as health care, education, mobility, employment, and criminal justice, but they ignore international peace and security. The risk that peaceful applications of AI could be misused for political disinformation, cyberattacks, terrorism, or military operations is rarely considered, or only superficially.
This is a major gap in the conversation on responsible AI that must be filled. But by whom?
Misuse of Civilian AI
Civilian technologies have long been a go-to for malicious actors because misusing such technology is generally much cheaper and easier than designing or accessing military-grade technologies. There is no shortage of real-life examples, a famous one being the Islamic State’s use of hobby drones as both explosive devices and tools to shoot footage for propaganda films.
The misuse of civilian technology is not a problem that states can easily address on their own, or purely through intergovernmental processes. However, AI researchers can be a first line of defense, as they are among the best placed to evaluate how their work may be misused.
We have shared just a few points from the article by Vincent Boulanin and Charles Ovink, who are clear and succinct about the biggest issues facing the AI world. The biggest takeaway is that AI coders, researchers, and the like are a crucial line of defense against malicious use of AI.
We’re already seeing examples of the weaponization of peaceful AI. The use of deepfakes, for example, demonstrates that the risk is real and the consequences are potentially far-ranging. Less than 10 years after Ian Goodfellow and his colleagues designed the first generative adversarial network, GANs have become tools of choice for cyberattacks and disinformation—and now, for the first time, in warfare.
During the current war in Ukraine, a deepfake video circulated on social media that appeared to show Ukrainian President Volodymyr Zelenskyy telling his troops to surrender.
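For readers curious about what “adversarial” means in this context, the sketch below illustrates the basic idea behind Goodfellow’s design: a generator network learns to produce fake samples while a discriminator network learns to spot them, and each pushes the other to improve. This is only a minimal toy example in PyTorch using made-up one-dimensional data; the network sizes and training settings are assumptions for illustration, and real deepfake systems are far larger image and video models built on the same principle.

```python
# Minimal toy GAN sketch (illustration only, not a deepfake model).
# The 1-D Gaussian "real" data, network sizes, and hyperparameters are assumptions.
import torch
import torch.nn as nn

latent_dim = 16  # size of the random noise fed to the generator (assumed)

# Generator: maps random noise to a fake sample
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how likely a sample is to be real (0 = fake, 1 = real)
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data drawn from N(2, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Train the discriminator to separate real samples from generated ones
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The same tug-of-war, scaled up to faces and voices, is what makes convincing synthetic media cheap to produce, and why the authors argue researchers must weigh misuse from the start.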
And yes, you are going to have to be alert and cautious when dealing with AI on an everyday basis, especially if you are in the AI business to do the most good for the most people, which is how AI should be used.
read more at spectrum.ieee.org