Facebook Ads Target Men for Certain Jobs, Women for Others in Spite of Discrimination Laws
According to a story in MIT Technology Review, Facebook continues to use flawed ad-delivery algorithms that steer job ads toward one gender over the other, even when the jobs require identical qualifications for both. Women can be excluded from seeing an ad simply because of their gender, a violation of U.S. equal opportunity law.
Researchers at the University of Southern California (USC) posed as employers, buying paired ads for jobs with identical qualifications but different real-world gender demographics. For example, they advertised two delivery driver jobs, one for Domino’s (pizza delivery) and one for Instacart (grocery delivery). Currently, more men than women drive for Domino’s, and more women than men drive for Instacart.
“Though no audience was specified on the basis of demographic information, a feature Facebook disabled for housing, credit, and job ads in March of 2019 after settling several lawsuits, algorithms still showed the ads to statistically distinct demographic groups. The Domino’s ad was shown to more men than women, and the Instacart ad was shown to more women than men.”
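To make “statistically distinct demographic groups” concrete, here is a minimal sketch of the kind of comparison such an audit rests on: a standard two-proportion z-test on the gender breakdown of each ad’s delivered audience. This is not the researchers’ actual code, and the delivery counts below are hypothetical, not figures from the USC study.

```python
import math

def two_proportion_ztest(male_a, total_a, male_b, total_b):
    """Two-sided z-test: is the male share of ad A's delivered
    audience different from ad B's? (Pooled-variance version.)"""
    p_a = male_a / total_a
    p_b = male_b / total_b
    pooled = (male_a + male_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts, NOT data from the study: suppose the Domino's
# ad reached 700 men out of 1,000 users, and the Instacart ad reached
# 400 men out of 1,000 users.
z, p = two_proportion_ztest(700, 1000, 400, 1000)
print(f"z = {z:.2f}, p = {p:.2g}")
```

A tiny p-value here would mean the two ads, despite identical targeting settings, were delivered to audiences with measurably different gender compositions; the algorithm, not the advertiser, produced the skew.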
Other results followed the same pattern: ads for software engineering jobs skewed toward men for Nvidia and toward women for Netflix; ads for car sales associates tended to be shown to men, and ads for jewelry sales associates to women. Facebook won’t explain why the ads are shown to one gender more than the other, but the report surmises that the algorithms pick up on the current demographic makeup of the people already holding those jobs.
“Facebook reproduces those skews when it delivers ads even though there’s no qualification justification,” says Aleksandra Korolova, an assistant professor at USC, who coauthored the study with her colleague John Heidemann and their PhD advisee Basileal Imana.
Even worse, Facebook has known since 2019 that its housing ads can discriminate against minorities and its job ads can discriminate by gender, but it still hasn’t corrected these flaws.
“In March, MIT Technology Review published the results of a nine-month investigation into the company’s Responsible AI team, which found that the team, first formed in 2018, had neglected to work on issues like algorithmic amplification of misinformation and polarization because of its blinkered focus on AI bias.”
read more at technologyreview.com