Criminals Take Advantage of AI Software to Use Deepfakes, Phishing to Scam Users Online
The use of AI is widening, and perhaps not in the best way. Criminals of all shapes and sizes have begun using AI to help steal your money and more.
In some ways it recalls the payola era, when record companies paid DJs under the table to play their artists' records and boost airplay; the technology is new, but the willingness to game a system for profit is not. And AI-enabled crime is becoming quite a concern in Europe.
An article from politico.eu details the range of criminal activity that AI is now enabling. The article is part of “The age of surveillance,” a special report on artificial intelligence.
The malicious use of artificial intelligence is growing. Officials are warning against attacks that use deepfake technology, AI-enhanced “phishing” campaigns and software that guesses passwords based on big data analysis.
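The password-guessing software mentioned above works on a simple principle: instead of brute-forcing every combination, it mines breached-password datasets to try the most likely guesses first. A minimal sketch of that idea, using a tiny hypothetical corpus in place of a real leaked dataset:

```python
import hashlib
from collections import Counter

# Toy corpus standing in for a large breached-password dataset (hypothetical data).
leaked_passwords = [
    "123456", "password", "123456", "qwerty", "password",
    "123456", "letmein", "qwerty", "111111", "password",
]

# Rank candidates by how often they appear in the corpus -- the core idea
# behind data-driven guessing: common passwords get tried first.
candidates = [pw for pw, _ in Counter(leaked_passwords).most_common()]

def crack(target_hash: str, guesses: list[str]):
    """Try candidate passwords in frequency order against a SHA-256 hash."""
    for guess in guesses:
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

victim_hash = hashlib.sha256(b"qwerty").hexdigest()
print(crack(victim_hash, candidates))  # -> qwerty
```

Real tooling layers machine-learned models and per-target data (names, birthdays, prior leaks) on top of this frequency-ordering idea, which is why officials classify it as AI-enhanced rather than simple brute force.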
We have seen videos that use movie stars or politicians in awkward positions or completely faked situations that are not flattering.
“We have crime-as-a-service, we have AI-as-a-service,” said Philipp Amann, head of strategy at the EU law enforcement agency Europol’s European Cybercrime Centre. “We’ll have AI-for-crime-as-a-service too.”
Most concerning to cybersecurity officials is deepfake technology — which uses reams of photos and videos to develop uncanny likenesses or entirely new avatars. The technology has the power to generate pictures and videos that trick people into thinking they’re looking at the real thing.
If cybercriminals “manage to come up with ways of assuming your identity or my identity, or create somebody from scratch that doesn’t exist, and they then manage to get through the online verification processes, that’s a huge risk,” Amann said.
The fear of being duped by deepfakes is so present, it has sometimes been used as cover for being fooled in other ways. Politicians across Europe blamed deepfakes when they were tricked into taking meetings with a man posing as Russian opposition figure Alexei Navalny’s chief of staff Leonid Volkov. Russian pranksters said the stunt was theirs and didn’t involve deepfake technology.
With the recent ransomware attack on the Colonial Pipeline and another against meat producer JBS, AI-assisted crimes are expected to increase.
“The tools become better in quality every time. There’s also more tools available to detect and analyze [malicious use of AI], but the question is whether the average business already has access to [these tools]” as they gear up to fight the new threat, said Marietje Schaake, policy director at the Cyber Policy Center at Stanford University and a former member of the European Parliament.
read more at politico.eu