AI Is Empowering Phishers, Spammers, and Hackers Worldwide
The battle between information security experts and cyber criminals is one of swiftly evolving technologies and rapidly changing fronts, where developments on the cutting edge of technology determine the winners and losers.
With the rate of technological advancement ever increasing, and stakes ranging from email scams and personal data loss to massive data breaches and financial fraud, this tug-of-war for dollars and data will only intensify with the advent of machine intelligence, deep learning, and other advances in artificial intelligence.
And some hackers, as Gizmodo reports, have already begun to leverage AI technologies:
Last year, two data scientists from security firm ZeroFOX conducted an experiment to see who was better at getting Twitter users to click on malicious links, humans or an artificial intelligence. The researchers taught an AI to study the behavior of social network users, and then design and implement its own phishing bait. In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a substantially better conversion rate.
The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, Forbes staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making just 129 attempts and luring in just 49 users.
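For a rough sense of scale, the figures quoted above can be lined up side by side. This is a back-of-the-envelope sketch, not part of the ZeroFOX study: "over 800 users" is treated as exactly 800, and "conversion rate" here simply means victims lured per phishing attempt.

```python
# Figures from the quoted excerpt.
ai_attempts, ai_victims = 800, 275        # "over 800 users", 275 lured
human_attempts, human_victims = 129, 49   # 129 attempts, 49 lured

# Victims per attempt for each side.
ai_rate = ai_victims / ai_attempts
human_rate = human_victims / human_attempts

print(f"AI:    {ai_victims} victims from {ai_attempts} attempts ({ai_rate:.1%})")
print(f"Human: {human_victims} victims from {human_attempts} attempts ({human_rate:.1%})")
print(f"Total victims, AI vs. human: {ai_victims / human_victims:.1f}x")
```

What stands out is that SNAP_R's edge was throughput: at 6.75 tweets per minute against the human's 1.075, it made roughly six times as many attempts and ended up with more than five times as many total victims.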
Thankfully this was just an experiment, but the exercise showed that hackers are already in a position to use AI for their nefarious ends. And in fact, they’re probably already using it, though it’s hard to prove. In July, at Black Hat USA 2017, hundreds of leading cybersecurity experts gathered in Las Vegas to discuss this issue and other looming threats posed by emerging technologies. In a Cylance poll held during the confab, attendees were asked if criminal hackers will use AI for offensive purposes in the coming year, to which 62 percent answered in the affirmative.
The era of artificial intelligence is upon us, yet if this informal Cylance poll is to be believed, a surprising number of infosec professionals, nearly four in ten, are refusing to acknowledge the potential for AI to be weaponized by hackers in the immediate future. It's a perplexing stance given that many of the cybersecurity experts we spoke to said machine intelligence is already being used by hackers, and that criminals are more sophisticated in their use of this emerging technology than many people realize.