AI Project Allows People to Explore How Bias Can Be Programmed

AI can’t think like a human yet, but it can express the biases humans convey to it: it can be prejudiced based on appearance.

A new online tool called ImageNet Roulette allows people to take selfies and see how AI reacts to them. Created by Kate Crawford, co-founder of the AI Now Institute, an organization that spotlights ethical issues in AI, and researcher Trevor Paglen, the program was designed for the Training Humans exhibition, an art installation currently on view at the Fondazione Prada Osservatorio museum in Milan. The traveling exhibit outlines the history of image-recognition systems and their biases. According to a story in Fortune magazine:

“Some of the results have been spot-on, for instance labeling a platinum blonde Caucasian woman as someone whose hair is likely ‘artificially colored.’ One humorous example labeled U.K. Prime Minister Boris Johnson as a ‘demagogue.’”

AI’s biases have already been exposed in studies of facial recognition systems that misidentified members of Congress, disproportionately people of color, as criminals, and labeled photos of dark-skinned people on social media as “wrongdoer, offender,” “convict,” and “first offender.” Since then, U.S. Representatives have proposed a bill to combat AI discrimination. ImageNet Roulette gives its AI roughly 2,500 labels to use in its descriptions, ranging from neutral and positive terms to sexist, racist, and otherwise derogatory ones.

“We want to shed light on what happens when technical systems are trained on problematic training data,” Crawford and Paglen say on their website. “AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong.”

ImageNet Roulette draws on the ImageNet database, a digital archive of some 14 million images labeled using WordNet, a system of word classifications developed in the 1980s. The application is designed to show people how the system can fail to be fair, the BBC explains:

“ImageNet Roulette doesn’t strive for perfection, and is instead designed to spark a discussion about bias in A.I. However, it’s hardly the first time artificial intelligence has failed when it comes to showing its unfair bias.”

The story also points out that Google’s AI showed the same biases last year when it let people upload selfies to find works of art that mirrored them.

“African-Americans reported matching with stereotypical art depicting slaves. Asians were shown geishas as their look-alikes, and an Indian-American reporter was served a portrait of a Native American chief. A Google representative responded to the controversy with an apology and said the company is ‘committed to reducing unfair bias.’”

Wired.com questioned the validity of many of the 2,395 labels in the database, and whether they were truly all as negative as portrayed, but it found inherent biases rooted in the original database.

“Still, the problems with ImageNet illustrate how bias can propagate from mostly forgotten sources. In this case, the source starts in the mid-1980s, with a project at Princeton called WordNet,” Wired writes. “WordNet was an effort by psychologists and linguists to provide a ‘conceptual dictionary,’ where words were organized into hierarchies of related meaning. You might travel from animals to vertebrates to dogs to huskies, for example, or perhaps branch off along the way into cats and tabbies. The database goes beyond the pages of Merriam-Webster, including everything from obscure desserts to outdated slang. ‘A lot of the terms that were considered socially appropriate then are totally inappropriate now,’ says Alexander Wong, a professor of computer science at the University of Waterloo.”
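WordNet itself is freely available, so these hierarchies are easy to inspect. Below is a minimal sketch using Python’s NLTK library (an assumption of this article, not a tool used in the exhibit) that walks the hypernym chain for a label like “husky,” tracing the kind of animal-to-vertebrate-to-dog path Wired describes.

```python
# Minimal sketch, assuming the NLTK WordNet corpus has been downloaded
# (pip install nltk, then run nltk.download("wordnet") once).
from nltk.corpus import wordnet as wn

# First noun sense of "husky": a breed of sled dog.
husky = wn.synsets("husky", pos=wn.NOUN)[0]

# Each hypernym path runs from the root concept ("entity") down to the synset,
# e.g. entity -> ... -> animal -> vertebrate -> ... -> dog -> working_dog -> husky.
for path in husky.hypernym_paths():
    print(" -> ".join(s.name().split(".")[0] for s in path))
```

ImageNet’s labels are drawn from these same WordNet categories, which is how decades-old terms end up attached to present-day selfies.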

Biased AI tools are already being used by police departments, and in the UK at least, forces are re-evaluating them as tools whose accuracy must be assessed while they are in use. A report found that biases in the systems reflected those of the police themselves, whose stops and arrests generated more data on minorities than on white people, which in turn flagged them as potential perpetrators.

“Young black men are more likely to be stopped and searched than young white men, and that’s purely down to human bias,” said one officer.
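The dynamic the officer describes is a feedback loop, and a toy simulation (hypothetical numbers, not figures from the report) shows how it sustains itself: two groups offend at the same rate, but one is stopped more often, so the recorded data, and any model trained on it, keep pointing back at that group.

```python
# Toy feedback-loop simulation (hypothetical numbers, not the UK report's data):
# if police stop one group more often, the recorded offences over-represent that
# group, and a model trained on those records sends even more stops its way.
import random

random.seed(1)
true_offence_rate = {"group_a": 0.05, "group_b": 0.05}  # identical by construction
stop_share = {"group_a": 0.7, "group_b": 0.3}           # biased policing to start

for year in range(5):
    records = {g: 0 for g in stop_share}
    for _ in range(10_000):                             # 10,000 stops this year
        g = "group_a" if random.random() < stop_share["group_a"] else "group_b"
        if random.random() < true_offence_rate[g]:
            records[g] += 1                             # an offence is recorded
    total = sum(records.values())
    # "Predictive" allocation: next year's stops follow this year's records,
    # so the original skew persists even though the true rates are equal.
    stop_share = {g: records[g] / total for g in records}
    print(year, {g: round(s, 2) for g, s in stop_share.items()})
```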

A story in technologyreview.com explains why bias in AI is so difficult to avoid, even when a system isn’t making judgments about images at all. The historical context in which data is collected and prepared bakes biases into the model. Of Amazon’s AI hiring system, which discriminated against women, it says:

“The introduction of bias isn’t always obvious during a model’s construction because you may not realize the downstream impacts of your data and choices until much later. Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it. In Amazon’s case, when the engineers initially discovered that its tool was penalizing female candidates, they reprogrammed it to ignore explicitly gendered words like ‘women’s.’ They soon discovered that the revised system was still picking up on implicitly gendered words—verbs that were highly correlated with men over women, such as ‘executed’ and ‘captured’—and using that to make its decisions.”
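That failure mode is easy to reproduce on synthetic data. The sketch below is a hypothetical illustration using scikit-learn, not Amazon’s actual system: résumés are generated so that certain verbs correlate with one group, the historical hire/reject labels are skewed, and a classifier trained on the text, with no explicit gender token anywhere, still ends up weighting the proxy verbs.

```python
# Toy illustration (not Amazon's system): even with no explicit gender token,
# a text classifier can rediscover gender through correlated "proxy" words,
# because the historical labels themselves are skewed.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_resume(is_male: bool) -> str:
    # Hypothetical vocabulary: one group over-uses the verbs Amazon's engineers
    # found to be implicitly gendered ("executed", "captured").
    proxy = ["executed", "captured"] if is_male else ["organized", "supported"]
    shared = ["led", "built", "managed", "analyzed"]
    words = list(rng.choice(proxy, size=3)) + list(rng.choice(shared, size=5))
    return " ".join(words)

# Synthetic "historical" hiring data with a skewed outcome: one group was
# accepted far more often, so the labels encode the bias.
is_male = rng.random(2000) < 0.5
texts = [make_resume(m) for m in is_male]
hired = [rng.random() < (0.7 if m else 0.3) for m in is_male]

vec = CountVectorizer()
X = vec.fit_transform(texts)
model = LogisticRegression().fit(X, hired)

# The classifier puts strong positive weight on the proxy verbs and negative
# weight on the words correlated with the other group.
for word, weight in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                           key=lambda kv: kv[1], reverse=True):
    print(f"{word:>10s}  {weight:+.2f}")
```

Dropping those verbs would not fix the problem either; as the article notes, the bias lives in the data and the labels, so the model will latch onto whatever correlated signal remains.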

Correcting discrimination in AI will take vigilance among programmers and researchers, just as it does in society, said Andrew Selbst, a postdoc at the Data & Society Research Institute, who identified what he calls the “portability trap” in a research paper. He said that systems designed for use in multiple applications often ignore social context.

(Image: ImageNet Roulette’s take on the editor of this story, who happens to have some Italian heritage.)