New Attack Could Force Neural Networks to Waste Energy and Stall Processing
According to a story in MIT Technology Review, a new denial-of-service-style attack could force AI systems to burn far more computation than they need, tying them up so they are slow to finish, or unable to finish, their tasks.
MIT Technology Review's AI reporter Karen Hao explains:
“This opens up a vulnerability that hackers could exploit, as the researchers from the Maryland Cybersecurity Center outlined in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jack up its computation.”
Even when the attackers had no information about how the system worked, they were still able to slow down its processing and increase its energy use by 20% to 80%.
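The systems in question are input-adaptive, "early exit" networks, which spend fewer layers of computation on inputs they judge to be easy. The sketch below is only a rough illustration of that mechanism, not the researchers' implementation: the model, names, confidence threshold, and noise level are all hypothetical (PyTorch is assumed), and the actual attack optimizes its perturbation rather than adding random noise. It simply shows how an input that no longer clears an exit's confidence check forces the network to execute more layers, and hence draw more energy.

```python
# Hypothetical illustration of an input-adaptive ("early exit") classifier.
# Each block has its own exit head; inference stops at the first exit whose
# prediction is confident enough, so "easy" inputs cost fewer layers.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, dim=32, n_classes=10, n_blocks=6, threshold=0.5):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_blocks))
        self.exits = nn.ModuleList(nn.Linear(dim, n_classes) for _ in range(n_blocks))
        self.threshold = threshold

    def forward(self, x):
        layers_used = 0
        probs = None
        for block, exit_head in zip(self.blocks, self.exits):
            x = torch.relu(block(x))
            layers_used += 1
            probs = torch.softmax(exit_head(x), dim=-1)
            # Stop early once this exit head is confident enough in its prediction.
            if probs.max().item() >= self.threshold:
                break
        return probs, layers_used

torch.manual_seed(0)
net = EarlyExitNet().eval()
clean = torch.randn(1, 32)
# Random noise is only a stand-in here; the real attack crafts the perturbation
# specifically to keep every exit head below its confidence threshold.
noisy = clean + 0.5 * torch.randn_like(clean)

with torch.no_grad():
    _, clean_layers = net(clean)
    _, noisy_layers = net(noisy)

print(f"layers executed: clean input = {clean_layers}, perturbed input = {noisy_layers}")
```

Counting how many layers (or how much energy) each input consumes, clean versus perturbed, is exactly the quantity the attack drives up.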
Hao adds: “Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe this will quickly change from the pressures within the industry to deploy lighter weight neural networks, such as for smart home and other IoT devices.”
Attacks like this haven't been seen in the wild yet, but research of this kind can help defend against future attempts by bad actors, whether by detecting an attack in progress or by hardening systems so one never succeeds in the first place.
read more at technologyreview.com