New York City’s new law regulating AI hiring elicited criticism from both civil rights organizations and the companies that will need to implement it. (Source: Adobe Stock)

New York City Becomes First U.S. Locale to Regulate the Use of AI in Hiring

According to a story on MIT’s technologyreview.com, public interest and civil rights groups are complaining that New York City’s new AI hiring law doesn’t go far enough.

The law, which took effect on Wednesday, targets algorithms that discriminate against women and people of color in hiring. Technologyreview.com summed it up this way:

“NYC’s Automated Employment Decision Tool law, which came into force on Wednesday, says that employers who use AI in hiring have to tell candidates they are doing so. They will also have to submit to annual independent audits to prove that their systems are not racist or sexist. Candidates will be able to request information from potential employers about what data is collected and analyzed by the technology. Violations will result in fines of up to $1,500.”

It might sound like a law that civil rights groups would applaud, but several of those groups said it doesn’t go far enough. Tech companies, on the other hand, argued that it will complicate hiring and that its third-party audit requirement isn’t “feasible” because so few qualified auditors are available. Companies that have decried the law include Adobe, Microsoft and IBM.

Groups including the Center for Democracy & Technology, the Surveillance Technology Oversight Project (S.T.O.P.), the NAACP Legal Defense and Educational Fund, and the New York Civil Liberties Union argue that the law is “underinclusive” and risks leaving out many uses of automated systems in hiring, including systems in which AI is used to screen thousands of candidates.

Albert Fox Cahn, executive director of S.T.O.P., said that no auditing standards have been established and that he is concerned about the kind of proprietary company information an auditor would need to access.

According to the story, the audits would have to “evaluate whether the output of an AI system is biased against a group of people, using a metric called an ‘impact ratio’ that determines whether the tech’s ‘selection rate’ varies across different groups.” The auditor would not, however, need to know how an algorithm chooses candidates, an omission that troubles civil rights groups.
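
To make those metrics concrete: an impact ratio is typically computed by dividing each group’s selection rate (the share of that group’s candidates the tool advances) by the selection rate of the most-selected group. The sketch below illustrates that arithmetic only; it is not the audit procedure the law prescribes, and the function name, data layout, and example numbers are all assumptions.

```python
from collections import defaultdict

def impact_ratios(candidates):
    """Compute per-group impact ratios from (group, selected) pairs,
    where `selected` is True if the screening tool advanced the candidate."""
    totals = defaultdict(int)
    advanced = defaultdict(int)
    for group, selected in candidates:
        totals[group] += 1
        if selected:
            advanced[group] += 1

    # Selection rate: fraction of each group's candidates the tool advanced.
    rates = {g: advanced[g] / totals[g] for g in totals}

    # Guard against the degenerate case where no one was advanced.
    top_rate = max(rates.values()) or 1.0

    # Impact ratio: each group's rate relative to the most-selected group.
    return {g: rates[g] / top_rate for g in rates}

# Hypothetical data: group A has 200 candidates, 50 advanced (25%);
# group B has 100 candidates, 15 advanced (15%).
data = [("A", i < 50) for i in range(200)] + [("B", i < 15) for i in range(100)]
print(impact_ratios(data))  # {'A': 1.0, 'B': 0.6}
```

In this made-up example, group B’s impact ratio of 0.6 falls well below the EEOC’s long-standing four-fifths rule of thumb, under which a ratio below 0.8 is commonly treated as evidence of adverse impact.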

read more at technologyreview.com