Algorithms Reflect Deep Societal Bias in Unequal Decisions
Amazon gave up on salvaging a hiring algorithm that discriminated against women. Despite repeated attempts to tweak the system, the historical hiring data on which the AI was trained provided a faulty basis for giving equal weight to female and male candidates.
That’s just one of many examples of how AI programs discriminate against people by race, sex and other factors. The problem has become so visible that three members of Congress introduced the Algorithmic Accountability Act, which addresses bias in facial recognition, self-driving cars, customer service, marketing and content moderation, according to an essay in Fortune magazine.
Recently, researchers at Northeastern University, the University of Southern California, and the nonprofit Upturn found that Facebook’s automated ad-delivery system discriminates against women and minorities in determining who actually sees an ad, according to a Wired.com story.
“Even when companies choose to show their ads to inclusive audiences, the researchers wrote, Facebook sometimes delivers them ‘primarily to a skewed subgroup of the advertiser’s selected audience, an outcome that the advertiser may not have intended or be aware of.’ For example, job ads targeted to both men and women might still be seen by significantly more men.”
The story suggested that Facebook’s ad-delivery practices could violate civil rights laws. It’s just the latest example of racial discrimination on the platform: ProPublica previously found that Facebook allowed housing ads to exclude users by race, a violation of the 1968 Fair Housing Act.
“Last month, the social network settled five lawsuits from civil rights organizations that alleged companies could hide ads for jobs, housing, and credit from groups like women and older people,” the Wired story reported. “As part of the settlement, Facebook said it will no longer allow advertisers to target these ads based on age, gender, or zip code. But those fixes don’t address the issues the researchers of this new study found.”
A NewScientist.com article reviewed five examples of how AI has shown prejudice, giving critics plenty of ammunition against the companies deploying these discriminatory programs.
- COMPAS: A risk-assessment algorithm used in sentencing to predict the likelihood that an offender will commit more crimes; it systematically rated black defendants as more likely to reoffend than white defendants.
- PredPol: An algorithm designed to predict when and where crimes will take place; it repeatedly flagged neighborhoods with large minority populations, regardless of those areas’ actual crime rates.
- Facial Recognition: Three commercial systems tested at MIT performed worst on black women, misidentifying their gender as much as 35% of the time; for white men, the systems were correct 99% of the time (a sketch of this kind of per-group audit follows the list).
- Google Search: In image searches for “U.S. CEO,” photos of women appeared only 11 percent of the time, even though women make up 27 percent of U.S. CEOs.
- Facebook ID Fails: Facebook falsely flagged a Palestinian worker as a terrorist after its AI mistranslated his Arabic post meaning “Good Morning” as “Attack Them.”
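The MIT facial-recognition result above comes from a disaggregated evaluation: scoring a classifier separately for each demographic group rather than reporting a single overall accuracy. Here is a minimal sketch of that kind of audit in Python; the data, group labels, and function names are hypothetical illustrations, not the study’s actual code.

```python
# Minimal sketch of a disaggregated accuracy audit, in the spirit of the
# MIT facial-recognition tests described above. All records and group
# labels below are hypothetical examples.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Per-group accuracy reveals gaps a single overall score would hide.
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions from a gender classifier:
results = [
    ("white_men",   "male",   "male"),
    ("white_men",   "male",   "male"),
    ("black_women", "male",   "female"),  # misclassification
    ("black_women", "female", "female"),
]

for group, acc in accuracy_by_group(results).items():
    print(f"{group}: {acc:.0%} correct")
```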
The Fortune.com opinion piece summed up the problem:
“This is where A.I. presents its greatest risk: in softer tasks that may have objective outcomes but incorporate what we would normally call judgment. Some such tasks exercise much influence over people’s lives. Granting a mortgage, admitting a child to a university, awarding a felon parole, or deciding whether children should be separated from their birth parents due to suspicions of abuse fall into this category. Such judgments are highly susceptible to human biases—but they are biases that only humans themselves have the ability to detect.”
The proposed bill would require companies to audit their machine-learning systems for bias and discrimination and address any issues they find. It would also require those companies to review all processes involving sensitive data—such as personally identifiable, biometric, and genetic information—for privacy and security risks. The U.S. Federal Trade Commission, the agency in charge of consumer protection and antitrust regulation, would enforce the law if it passes.
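The bill does not spell out how such an audit would be performed, but one common first check is a disparate-impact ratio: compare the rate of favorable outcomes (approvals, callbacks, parole grants) across demographic groups and flag large gaps. Below is a minimal sketch under that assumption; the four-fifths (80%) threshold is borrowed from U.S. employment guidelines as a rule of thumb, and all data and names are hypothetical.

```python
# Hypothetical disparate-impact check: compares favorable-outcome rates
# across groups. The four-fifths (80%) threshold is a common rule of
# thumb from U.S. employment guidelines, used here purely as an example.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact(decisions, reference_group):
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    # Ratio of each group's approval rate to the reference group's rate;
    # values below 0.8 would typically warrant further review.
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical loan decisions: (group, approved)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

for group, ratio in disparate_impact(decisions, reference_group="A").items():
    flag = "" if ratio >= 0.8 else "  <-- below four-fifths threshold"
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")
```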
read more at technologyreview.com