New Yorker Story Presents Array of Pressing AI Ethics Worries, as Groups Challenge Research
What’s the worst thing that could happen from a lack of ethical review and regulation of AI? We’re already finding out, according to a story in the New Yorker, written by Matthew Hutson.
Under the headline “Who Should Stop Unethical AI?,” the story lays out some of the ways AI could be used to undermine truth, minority groups, and equality as new algorithms proliferate, unchecked by the U.S. government or any agency with the teeth to stop them.
While some self-regulation is going on, Hutson points out that it’s minimal. For instance, in June 2019, at a major AI conference in California, Computer Vision and Pattern Recognition, researchers presented Speech2Face, an algorithm that generates images of faces from recordings of speech. Later, Alex Hanna, a trans woman and sociologist at Google who studies AI ethics, objected to the work, calling it “this awful transphobic shit.” A discussion of the reviewing and publishing process ensued, and several AI ethics experts agreed that it was wrong to promote the program.
Hutson contacted Subbarao Kambhampati, a computer scientist at Arizona State University and a past president of the Association for the Advancement of Artificial Intelligence, to find out what he thought of the debate. “When the ‘top tier’ AI conferences accept these types of studies,” he wrote back, “we have much less credibility in pushing back against nonsensical deployed applications such as ‘evaluating interview candidates from their facial features using AI technology’ or ‘recognizing terrorists, etc., from their mug shots’—both actual applications being peddled by commercial enterprises.”
Hutson noted that Katherine Heller, a computer scientist at Duke University and a Neurips co-chair for diversity and inclusion, said the sheer volume of papers—1,400—has made vetting them a challenge. Ethics, moreover, hasn’t been a review criterion in the past. That’s changing.
At Neurips 2020—held remotely this past December—papers faced rejection if the research posed a threat to society. “I don’t think one specific paper served as a tipping point,” Iason Gabriel, a philosopher at the research lab DeepMind and the leader of the conference’s ethics-review process, told Hutson. “It just seemed very likely that if we didn’t have a process in place, something challenging of that kind would pass through the system this year, and we wouldn’t make progress as a field.”
The Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (sigchi) started a research-ethics committee in 2016 to review papers submitted to sigchi conferences. Concerns about the social impact of AI research have been growing since.
Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, said questions about possible impacts fall into four categories:
- AI that could be weaponized against populations—facial recognition, location tracking, surveillance.
- Technologies, such as Speech2Face, that put people into potentially discriminatory categories, such as gender or sexual orientation.
- Automated-weapons research.
- Tools “to create alternate sets of reality”—fake news, voices, or images.
Two papers—“Charge-Based Prison Term Prediction with Deep Gating Network” and “Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation”—were among those that many have viewed as ethically charged.
The first presents an algorithm for determining prison sentences; the other describes software that automates the writing of comments about news articles. “A paper by Beijing researchers presents a new machine learning technique whose main uses seem to be trolling and disinformation,” one researcher tweeted, about the comment-generation work.
The comment-generation paper had four authors, two from Microsoft and two from a Chinese state lab. The paper describes the software, but the authors did not release its code after concerns were raised. The research lab OpenAI also withheld its text-synthesis software, which could be used to generate fake news or comments.
On Reddit, in June, 2019, a user linked to an article titled “Facial Feature Discovery for Ethnicity Recognition,” published in Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. The machine-learning model it described successfully predicted Chinese Uyghur, Tibetan, and Korean ethnicity based on photographs of faces. “It feels very dystopian to read a professionally written ML paper that explains how to estimate ethnicity from facial images, given the subtext of China putting people of the same ethnicity as the training set into concentration camps,” the commenter wrote.
The story notes that groups like the usenix Security Symposium and the I.E.E.E. Symposium on Security and Privacy require authors to discuss in detail the steps they’ve taken, or plan to take, to address any vulnerabilities they’ve exposed. But self-regulation may not be enough, the lengthy story concludes.
read more at newyorker.com