What You See May Not Be What You Get in Satellite Images
Imagine you are watching an intense movie thriller about a possible conflict erupting between China and the U.S. The camera focuses on a man at the Seoul Railway Station in South Korea, watching a TV news broadcast showing an image of the Sohae Satellite Launching Station in Tongchang-ri, North Korea, on Wednesday, March 6, 2019.
Does the image on the screen contain fake imagery? What if it did? Could that be used as a weapon? Absolutely. Here's how:
Step 1: Use AI to make undetectable changes to outdoor photos.
Step 2: Release them into the open-source world and enjoy the chaos.
China is the acknowledged leader in using an emerging technique called generative adversarial networks, or GANs, to trick computers into seeing objects in landscapes or satellite images that aren't there, says Todd Myers, automation lead and Chief Information Officer in the Office of the Director of Technology at the National Geospatial-Intelligence Agency.
Sounds like a James Bond movie, but it's all quite real, according to an article by Patrick Tucker at defenseone.com.
“The Chinese are well ahead of us. This is not classified info,” Myers said Thursday at the second annual Genius Machines summit, hosted by Defense One and Nextgov. “The Chinese have already designed; they’re already doing it right now, using GANs—which are generative adversarial networks—to manipulate scenes and pixels to create things for nefarious reasons.”
Myers thinks it would be easy for an adversary to fool computer-assisted imagery analysts into reporting that a bridge crosses an important river at a given point when no such bridge exists.
“So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,” he said.
First described in 2014, GANs represent a big evolution in the way neural networks learn to see and recognize objects and even distinguish truth from fiction.
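For the curious, here is a rough idea of how the adversarial part works. The sketch below, written in PyTorch and purely illustrative (nothing here reflects any real NGA or adversary system), pits a tiny generator against a tiny discriminator: one learns to fabricate image patches, the other learns to call out the fakes, and each round of the contest sharpens both. All network sizes and parameters are made up for the example.

```python
# A minimal GAN training loop (illustrative sketch only): a generator
# learns to produce fake 28x28 image patches that a discriminator
# cannot tell apart from real ones.
import torch
import torch.nn as nn

LATENT_DIM = 64
IMG_PIXELS = 28 * 28

# Generator: maps random noise to a fake image patch in [-1, 1].
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores a patch as real (1) or fake (0), as a logit.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to spot fakes,
    then the generator learns to fool the updated discriminator."""
    batch_size = real_batch.size(0)
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise)

    # Discriminator update: push real patches toward 1, fakes toward 0.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(real_batch), torch.ones(batch_size, 1))
              + bce(discriminator(fake_batch.detach()), torch.zeros(batch_size, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator output 1 on fakes.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake_batch), torch.ones(batch_size, 1))
    g_loss.backward()
    g_opt.step()

# Example round on a stand-in batch of "real" patches in [-1, 1]:
train_step(torch.rand(32, IMG_PIXELS) * 2 - 1)
```

Scale that tug-of-war up with deep convolutional networks and real satellite imagery, and you get forgeries convincing enough to worry analysts.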
We are sure to see more of this pixel-related hocus-pocus as AI grows more advanced and becomes able to completely disrupt what we think our eyes and computers are seeing.
When it comes to deepfake videos of people, biometric indicators like pulse and speech can expose the fakery. But a faked landscape isn't vulnerable to the same techniques.
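Why does pulse work as a tell for faces? One common remote-photoplethysmography trick is to average the green channel over the face region frame by frame and look for a steady beat in the normal heart-rate band: real skin flushes subtly with each heartbeat, while a synthesized face often doesn't. The snippet below is a hypothetical sketch of that idea; the frame rate, face box, and synthetic test clip are all assumptions for illustration.

```python
# Hypothetical sketch of a pulse check via remote photoplethysmography:
# average the green channel over an (assumed) face region per frame,
# then find the dominant frequency in the human heart-rate band.
import numpy as np

FPS = 30  # assumed video frame rate

def estimate_pulse_hz(frames: np.ndarray, face_box: tuple) -> float:
    """frames: (num_frames, height, width, 3) RGB video;
    face_box: (top, bottom, left, right) region assumed to contain a face."""
    top, bottom, left, right = face_box
    # Mean green intensity of the face region per frame (a crude rPPG signal).
    signal = frames[:, top:bottom, left:right, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()  # remove the DC offset

    # Frequency analysis: strongest component within 0.7-4 Hz (~42-240 bpm).
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# A synthetic stand-in: 10 seconds of "video" with a faint 1.2 Hz flicker
# added to the green channel of the central region.
t = np.arange(300) / FPS
frames = np.random.rand(300, 64, 64, 3) * 0.1
frames[:, 16:48, 16:48, 1] += 0.5 + 0.05 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(estimate_pulse_hz(frames, (16, 48, 16, 48)))  # ~1.2 Hz (~72 bpm)
```

A landscape has no heartbeat to check, which is exactly why faked terrain is so much harder to catch.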
The article goes deeper into what actually happens when trying to deceive another country's spy satellites.
Read more at defenseone.com.