Study Suggests That AI-Enabled Surveillance May Render Superficial Disguises Obsolete
In a new technological benchmark with potentially worrisome real-world implications, researchers from the University of Cambridge, India’s National Institute of Technology, and the Indian Institute of Science have developed facial recognition technology that can positively identify subjects attempting to disguise their faces with bandanas, scarves, hats, and other coverings (paper here). The image recognition algorithm identifies 14 reference points on a subject’s face, and it can still match the precise “fingerprint” of these structural points 56% of the time when the face is covered.
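For readers curious how a landmark “fingerprint” works in principle, here is a minimal sketch, assuming keypoints have already been detected by some upstream model. It is not the paper’s actual system; the 14-point layout, gallery names, and matching rule are all illustrative assumptions. The idea is simply that the normalized pairwise distances between facial keypoints form a vector that can be compared against a small gallery of known faces:

```python
import numpy as np

def fingerprint(keypoints: np.ndarray) -> np.ndarray:
    """Collapse an array of (x, y) keypoints, shape (14, 2), into a
    scale-normalized vector of all pairwise distances."""
    diffs = keypoints[:, None, :] - keypoints[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)             # (14, 14) distance matrix
    vec = dists[np.triu_indices(len(keypoints), k=1)]  # each pair once: 91 values
    return vec / np.linalg.norm(vec)                   # normalize away face scale

def best_match(probe: np.ndarray, gallery: dict) -> str:
    """Return the gallery identity whose fingerprint is closest to the probe's."""
    p = fingerprint(probe)
    return min(gallery, key=lambda name: np.linalg.norm(p - fingerprint(gallery[name])))

# Usage: match one noisy (e.g. partially occluded) probe against five candidates.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.random((14, 2)) for i in range(5)}
probe = gallery["person_3"] + rng.normal(scale=0.01, size=(14, 2))  # noisy view
print(best_match(probe, gallery))  # -> person_3 when the noise is small
```

Occlusion hides some keypoints in practice, which is one intuition for why accuracy drops so sharply with scarves and glasses: fewer detected points means a sparser, noisier distance vector.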
While the algorithm is far from foolproof (even glasses alone can hamper accuracy significantly), improvements in facial recognition and other biometric technologies, fueled by advances in AI, could be the final nail in the coffin for public privacy. The technology itself poses no inherent threat and could feasibly be used in many legitimate applications, but the growing technological ability (and power) to identify and track individuals raises ethical concerns that such pervasive and persistent surveillance will be abused by corporations, states, intelligence agencies, and others in the future.
ohai a machine learning system that can identify ~69% of protesters who are wearing caps AND scarfs to cover their face. h/t @jackclarksf pic.twitter.com/ct3NvBL2BW
— Zeynep Tufekci (@zeynep) September 4, 2017
Read more from Quartz:
The paper, accepted to appear in a computer vision conference workshop next month and detailed in Jack Clark’s Import AI newsletter, shows that identifying people covering their faces is possible, but there’s a long way to go before it’s accurate enough to be relied upon. Researchers used a deep-learning algorithm—a flavor of artificial intelligence that detects patterns within massive amounts of data—to find specific points on a person’s face and analyze the distance between those points. When asked to compare a face concealed by a hat or scarf against photos of five people, the algorithm was able to correctly identify the person 56% of the time. If the person was also wearing glasses, that number dropped to 43%.
But those imperfect results don’t mean the paper should be ignored. The team, with members from the University of Cambridge, India’s National Institute of Technology, and the Indian Institute of Science, also released two datasets of disguised and undisguised faces for others to test and improve the technology. (Data has been shown to be a key component for driving progress in the field of AI; when deep-learning algorithms have more data to analyze, they can identify patterns in the data with greater accuracy.)
But faces aren’t the only way to identify a person—other AI research indicates that the way a person walks is almost like a fingerprint. Researchers achieved more than 99% accuracy when using a deep-learning algorithm to pick out one person’s gait from among 154 others.
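As a rough illustration of what such a gait model might look like (a sketch under stated assumptions, not the cited study’s architecture), a small 1D convolutional network can classify a fixed-length sequence of joint-angle measurements as one of 155 known walkers. The input representation, layer sizes, and sequence length below are all assumed for the example:

```python
import torch
import torch.nn as nn

N_WALKERS = 155   # one target plus 154 others, echoing the cited result
N_JOINTS = 18     # assumed number of joint-angle channels per frame
SEQ_LEN = 64      # assumed frames per gait sample

# A small 1D CNN: convolutions read temporal patterns in the joint angles,
# average pooling summarizes the whole sequence, a linear layer scores identities.
model = nn.Sequential(
    nn.Conv1d(N_JOINTS, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),  # pool over time -> one 64-dim vector per sequence
    nn.Flatten(),
    nn.Linear(64, N_WALKERS),
)

# Usage: a batch of 8 gait sequences, shape (batch, joints, frames).
batch = torch.randn(8, N_JOINTS, SEQ_LEN)
logits = model(batch)          # (8, 155) identity scores
print(logits.argmax(dim=1))    # predicted walker per sequence (untrained here)
```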
The new research skips past generic fear-mongering about artificial intelligence to get more directly at the realistic implications of AI systems being developed and used by unchecked or authoritarian government powers. Facial-recognition technology that could bypass disguises, for example, would be immensely useful for identifying political dissidents or protesters.
As technology writer and New York Times opinion contributor Zeynep Tufekci tweeted, “Too many worry about what AI—as if some independent entity—will do to us. Too few people worry what *power* will do *with* AI.”