Odest Chadwicke Jenkins, an associate director of the Michigan Robotics Institute at the University of Michigan, told the New York Times that people of color needed to be represented “in the research lab, in the classroom, and the development team, the executive board” to avoid built-in discrimination. (Source: University of Michigan website)

Racial Discrimination in High Tech Remains Potentially Lethal Issue in 2020

We found a particularly concerning story this week in the New York Times that we felt was important to share with our readers.

It is a piece written by David Berreby, and it begins by detailing the use of a robot to kill a man during a 2016 Dallas police standoff. That in itself was horrific, but the article goes on to show how bias is still being programmed into A.I. and A.I.-operated robots.

After the bombing, then-Dallas Police Chief David Brown said the decision was appropriate, since the man killed in the blast had just shot and killed five police officers. Some robotics researchers were troubled.

“Bomb squad” robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. (In 2018, police officers in Dixmont, Maine, ended a shootout in a similar manner.) Yet the researchers’ own profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a black man.

“A key facet of the case is the man happened to be African-American,” Ayanna Howard, a robotics researcher at Georgia Tech, and Jason Borenstein, a colleague in the university’s school of public policy, wrote in a 2017 paper titled “The Ugly Truth About Ourselves and Our Robot Creations” in the journal Science and Engineering Ethics.

Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one such system told the Detroit police that it had matched photos of a suspected thief with the driver’s license photo of Robert Julian-Borchak Williams, a black man with no connection to the crime.)

A.I. systems that enable self-driving cars to detect pedestrians show the same pattern: last year, Benjamin Wilson of Georgia Tech and his colleagues found that eight such systems were worse at recognizing people with darker skin tones than paler ones. Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the M.I.T. Media Lab, has encountered interactive robots at two different laboratories that failed to detect her. (For her work with such a robot at M.I.T., she wore a white mask in order to be seen.)
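Every finding above comes down to the same measurement: compare a system's accuracy across demographic groups and look for a gap. Below is a minimal sketch of such a per-group audit in Python. Everything in it is hypothetical, the records, the group labels, the numbers; it shows only the shape of the check, not any vendor's actual evaluation code.

```python
# Minimal sketch of a per-group accuracy audit (hypothetical data and labels).
from collections import defaultdict

def accuracy_by_group(records):
    """Return detection accuracy broken out by demographic group.

    Each record is (predicted, actual, group): whether the system
    detected the person, whether a person was actually present,
    and the demographic group of that person.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for predicted, actual, group in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical evaluation run: a detector that misses darker-skinned
# pedestrians more often than lighter-skinned ones.
records = [
    (True, True, "lighter"), (True, True, "lighter"), (True, True, "lighter"),
    (True, True, "darker"), (False, True, "darker"), (False, True, "darker"),
]
print(accuracy_by_group(records))
# {'lighter': 1.0, 'darker': 0.333...}  <- the gap is the bias signal
```

The point of an audit like this is that an impressive overall accuracy number can hide a large gap between groups.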

Clearly, there’s a systemic problem that programmers and government officials must address.

Solutions Available

The long-term solution for such lapses is “having more folks that look like the United States population at the table when technology is designed,” said Chris S. Crawford, a professor at the University of Alabama who works on direct brain-to-robot controls. Algorithms trained mostly on white male faces (by mostly white male developers who don’t notice the absence of other kinds of people in the process) are better at recognizing white males than other people.
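Dr. Crawford's point, that a skewed training set quietly produces skewed accuracy, suggests an obvious first check: compare the demographic mix of the training data against the population the system is supposed to serve. The Python sketch below is purely illustrative; the group labels, reference shares and tolerance are invented for the example and are not drawn from any real dataset or census figures.

```python
# Illustrative check for demographic gaps in a training set.
# All labels, shares and the tolerance below are hypothetical.
from collections import Counter

def representation_gaps(train_labels, reference_shares, tolerance=0.8):
    """Flag groups whose share of the training data falls well below
    their share of the reference population."""
    counts = Counter(train_labels)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        train_share = counts.get(group, 0) / total
        if train_share < ref_share * tolerance:
            gaps[group] = {"in_training": train_share, "in_population": ref_share}
    return gaps

# Hypothetical face dataset that skews heavily toward one group.
train_labels = (["white_male"] * 85 + ["white_female"] * 9 +
                ["black_male"] * 3 + ["black_female"] * 3)
reference = {"white_male": 0.30, "white_female": 0.31,
             "black_male": 0.06, "black_female": 0.07}
print(representation_gaps(train_labels, reference))
# Flags every group except "white_male" as underrepresented.
```

A check like this would not fix the problem on its own, but it would at least surface the kind of absence Dr. Crawford describes before a system ships.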

“I personally was in Silicon Valley when some of these technologies were being developed,” he said. More than once, he added, “I would sit down and they would test it on me, and it wouldn’t work. And I was like, You know why it’s not working, right?”

Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. So it was striking that many roboticists signed statements declaring themselves responsible for addressing injustices in the lab and outside it. They committed themselves to actions aimed at making the creation and usage of robots less unjust.

District Attorney Rachael Rollins joins Joy Buolamwini, founder of the Algorithmic Justice League (AJL), along with law enforcement officials, experts, lawmakers, advocates and a full room of supporters to #PressPause on face surveillance in testimony before the Massachusetts legislature. (Source: AJL Instagram page)

Along with the change in administrations and the peaceful protests across the country this year, there is a real feeling of society awakening to social issues once again, much as it did in the 1960s and ’70s. Now would be the perfect time for programmers and robotics researchers to get on board and keep the Age of A.I. from repeating the failures of a society that struggled through a Civil War, two World Wars and a long, difficult fight to bring equal rights to the voting booth.

“I think the protests in the street have really made an impact,” said Odest Chadwicke Jenkins, a roboticist and A.I. researcher at the University of Michigan. At a conference earlier this year, Dr. Jenkins, who works on robots that can assist and collaborate with people, framed his talk as an apology to Mr. Williams. Although Dr. Jenkins doesn’t work on face-recognition algorithms, he felt responsible for the A.I. field’s general failure to make systems that are accurate for everyone.

“This summer was different than any other that I’ve seen before,” he said. “Colleagues I know and respect, this was maybe the first time I’ve heard them talk about systemic racism in these terms. So that has been very heartening.”

It is not too late to make this a better age to live in and to protect the rights of people of color. Groups like the Algorithmic Justice League are working toward that goal, and as awareness spreads, researchers and high-tech corporations should be willing to get on board.

read more at nytimes.com