Facial recognition programs are being banned in Minneapolis, which follows Portland, San Francisco and Boston in rejecting the technology. (Source: Wikimedia)

Minneapolis Latest City to Sound the Alarm over Facial Recognition Issues

Arrests and charges in the aftermath of the Jan. 6 Capitol Hill insurrection are underscoring the power of facial recognition. The technology's potential for abuse, however, has led a growing number of cities to bar law enforcement from using it.

An article from axios.com outlines the history of facial recognition, how the technology works and why its failures in identifying faces that are not white have led several cities to ban it. Writer Brian Walsh presents a quick take on why the controversy is brewing across the nation:

Why it matters: With dozens of companies selling the ability to identify people from pictures of their faces — and no clear federal regulation governing the process — facial recognition is seeping into the U.S., raising major questions about ethics and effectiveness.

Recently, the Minneapolis City Council voted to ban its police department from using facial recognition technology. Other cities that have restricted the technology include Portland, San Francisco and Boston.

While some localities are rethinking its use, facial recognition technology is gaining favor in many cities, “a trend accelerated by efforts to identify those involved in the Capitol Hill insurrection.”

Clearview AI, one of the leading firms selling facial recognition to police, reported a 26% jump in usage by law enforcement agencies the day after the riot. Cybersecurity researchers employed facial recognition to identify a retired Air Force officer recorded inside the Capitol that day, and after the attack, Instagram accounts publicly named suspected rioters.

By the numbers: A report by the Government Accountability Office found that between 2011 and 2019, law enforcement agencies performed 390,186 searches to find facial matches for images or video of more than 150,000 people.

The Black Lives Matter protests over the summer also led to a spike in the use of facial recognition among law enforcement agencies, according to Chad Steelberg, the CEO of the AI company Veritone:

“We consistently signed an agency a week, every single week.”

U.S. Customs and Border Protection used facial recognition on more than 23 million travelers in 2020, up from 19 million in 2019, according to a report released last week.

How It Works

In Veritone’s facial recognition system, crime scene footage is uploaded and compared to faces in a database of known offenders, and as agencies begin to share information across jurisdictions, the pool of searchable faces keeps growing.

Veritone’s system returns possible matches with a confidence score that police can use, together with other data such as whether someone has a violent record, to identify possible suspects. Problems arise when police accept the results at face value, so to speak, instead of verifying whether they are accurate.

One such case occurred in 2019, when a Black man in New Jersey was misidentified as a crime suspect and was arrested and jailed despite clearly not being the person in the photo. He sat in jail for 10 days, even though he had an ironclad alibi and his fingerprints were never compared with those in the database. The ACLU sued the police on his behalf.
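To make the matching step concrete, here is a minimal sketch of how such a system might score candidates. The cosine-similarity comparison, the database layout and the 0.75 threshold are illustrative assumptions for this sketch, not Veritone’s actual pipeline:

```python
# Minimal sketch of a face-matching step: compare one probe embedding
# against a database of known embeddings and return scored candidates.
# The names and the threshold are illustrative, not Veritone's system.
import numpy as np

def match_face(probe: np.ndarray,
               offender_db: dict[str, np.ndarray],
               threshold: float = 0.75) -> list[tuple[str, float]]:
    candidates = []
    for person_id, embedding in offender_db.items():
        # Cosine similarity between the probe and a stored face embedding.
        score = float(np.dot(probe, embedding) /
                      (np.linalg.norm(probe) * np.linalg.norm(embedding)))
        if score >= threshold:
            candidates.append((person_id, score))
    # Highest-confidence candidates first. These are investigative leads
    # that still require human verification, not identifications.
    return sorted(candidates, key=lambda c: c[1], reverse=True)
```

The design point is the return value: the system hands back a ranked list of possibilities, and verifying them is the investigator's job, which is precisely the step that was skipped in the New Jersey case.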

A 2019 federal study found Asian and Black people were up to 100 times more likely to be misidentified than white men, depending on the particular facial recognition system. The flaw typically stems from training the systems on datasets made up mostly of white faces, though the technology tends to be less accurate at recognizing people of color in any case.
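Disparities like those in the 2019 study are typically reported as per-group false match rates measured on labeled evaluation pairs. A minimal sketch of that measurement, with hypothetical record fields, might look like this:

```python
# Minimal sketch of measuring demographic error differentials: the rate
# at which a system wrongly declares a match, broken out by group.
# The record fields ("group", "predicted_match", "true_match") are
# hypothetical, not taken from any specific study's data format.
from collections import defaultdict

def false_match_rates(results: list[dict]) -> dict[str, float]:
    false_matches = defaultdict(int)
    non_match_pairs = defaultdict(int)
    for r in results:
        if not r["true_match"]:  # only true non-matches can yield false matches
            non_match_pairs[r["group"]] += 1
            if r["predicted_match"]:
                false_matches[r["group"]] += 1
    return {group: false_matches[group] / n
            for group, n in non_match_pairs.items()}
```

Comparing the resulting rates across groups is what produces findings such as one population being misidentified up to 100 times more often than another.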

“Today’s facial recognition technology is fundamentally flawed and reinforces harmful biases,” FTC Commissioner Rohit Chopra said last month, following a settlement with a photo storage company that used millions of users’ images to create facial recognition technology it marketed to the security and air travel industries.

The Companies’ Argument

Facial recognition companies point out that humans are notoriously biased and prone to error; a 2014 study found that 1 in 25 defendants sentenced to death in the U.S. is later shown to be innocent. They argue that, unlike people, their software keeps getting better with more data and tweaks.

“There’s nothing inherently evil about the models and the bias,” says Steelberg. “You just have to surface that information so the end user is aware of it.”

Dozens of facial recognition companies, many of them start-ups, are operating with little to no regulation. Meanwhile, police departments are embracing their technology, some with reckless disregard for the systems' past failures.

This article should open some eyes and make people realize how cautious they need to be, both online and off, and how much of a role they want digital technology to play in their lives.

read more at axios.com