Autonomous Devices Could Be Hacked to Become Weapons

This is one of those “you’d better pay attention” articles, found on CNBC.com and written by Ryan Browne. The effectiveness of attack drones and the tireless endurance of robotic soldiers rolling across a battlefield are likely to become even more terrifying.

Already, computer programs are learning on their own (machine learning), growing in knowledge and becoming increasingly sophisticated, and soon they will be able to figure out when to attack. Far from the killer robots of “Blade Runner,” machine learning applications are designed to train a computer to fulfill a certain task on its own. Machines are essentially “taught” to complete that task by doing it over and over, learning the many obstacles that could inhibit them.
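
To make the learning-by-repetition idea concrete, here is a minimal, hypothetical sketch in Python (not from the article): a program attempts a simple prediction task over and over, measures its error on each attempt, and adjusts itself slightly each time until it has effectively “learned” the task. The task, data and parameters are all invented for illustration.

    # Minimal sketch of "learning by repetition": the program attempts a
    # task (predicting y from x), measures how wrong it was, and nudges
    # its parameters so the next attempt is slightly better.
    # The target rule (y = 2x + 1) and all numbers are invented for illustration.

    data = [(x, 2 * x + 1) for x in range(10)]  # examples of the task

    w, b = 0.0, 0.0        # the model's adjustable "knowledge"
    learning_rate = 0.01   # how large each correction step is

    for epoch in range(1000):              # repeat the task many times
        for x, y_true in data:
            y_pred = w * x + b             # attempt the task
            error = y_pred - y_true        # how far off was this attempt?
            w -= learning_rate * error * x # adjust to shrink the error
            b -= learning_rate * error

    print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # converges toward y = 2x + 1

After enough repetitions the program recovers the underlying rule without ever being given it directly, which is the sense in which machine learning systems are “taught” by doing a task repeatedly.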

“Such attacks, which seem like science fiction today, might become reality in the next few years,” Guy Caspi, CEO of cybersecurity start-up Deep Instinct, told CNBC’s new podcast “Beyond the Valley.”

Such technology promises many benefits, such as smoother computing and the automation of tasks that, in years to come, we may consider manageable without human intervention. But it also has experts worried. Technicians and researchers are cautioning that such technology poses a threat to cybersecurity, which keeps individual, government and corporate computers and data safe from hackers. In February, teams at the University of Oxford and the University of Cambridge warned that AI could be used as a tool to hack into drones and autonomous vehicles and turn them into potential weapons.

“Autonomous cars like Google’s (Waymo) are already using deep learning and can already evade obstacles in the real world,” Caspi said, “so evading traditional anti-malware systems in the cyber domain is possible.”

The fear for many is that AI will bring with it the dawn of new forms of cyber breaches that bypass traditional means of countering attacks.

“We’re still in the early days of the attackers using artificial intelligence themselves, but that day is going to come,” warned Nicole Eagan, CEO of cybersecurity firm Darktrace. “And I think once that switch is flipped on, there’s going to be no turning back, so we are very concerned about the use of AI by the attackers in many ways, because they could try to use AI to blend into the background of these networks.”

Read more at cnbc.com