Cynthia Dwork, Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences and a distinguished scientist at Microsoft Research, focuses her research on applying computer science theory to societal problems. She is one of the many women mentioned in the timeline. (Source: Harvard Radcliffe Institute for Advanced Study, where she was a Fellow in 2017-18)

Writer Compiles Timeline Showing Depth of Women’s Influence in AI Ethics

A story on the Women in AI Ethics page outlines the outsized influence of women in identifying ethical issues and pushing for change at companies creating AI products.

The founder of the page and writer of the article, Mia Shah-Dand, is the CEO of Lighthouse3, a consulting company that “helps organizations successfully navigate the disruptive waves of new technologies.”

Entitled “The AI Ethics Revolution— A Timeline,” the article lists a significant number of contributions by women, beginning with how women were pivotal to the creation of AI:

 “…Ada Lovelace wrote the first computer program in the 1800s, Joan Clarke used her cryptology skills to help western allies during World War 2, and “(human) computers” like Katherine Johnson overcame racial segregation and used their mathematical genius to send the first American into space in the 1900s.”

The article then traces women’s accomplishments from 2014, when Cynthia Dwork co-authored the paper “The Algorithmic Foundations of Differential Privacy,” all the way to 2023, when:

“Hilke Schellmann co-led a Guardian investigation of AI algorithms used by social media platforms and found many of these algorithms have a gender bias, and may have been censoring and suppressing the reach of photos of women’s bodies.”

The timeline is a compendium of women’s influence, and of the risks they have taken, to ensure that AI issues, particularly those involving sexism and racism, are brought to light. Unfortunately, in some cases women have been fired from major AI companies for doing so.

One of the most public examples came in 2021, when Google fired Margaret Mitchell, co-lead of its Ethical AI team. Mitchell, along with Timnit Gebru, had called for more diversity among Google’s research staff and expressed concern that the company was beginning to censor research critical of its products. Gebru had been fired in 2020 for speaking about the paper she co-authored with Mitchell, Emily Bender, and Angelina McMillan-Major on “possible risks associated with large machine learning models and suggested exploration of solutions to mitigate those risks.”

Another notable event came when Frances Haugen, a former data scientist at Facebook, acted as a whistleblower, testifying at a U.S. Senate hearing that “Facebook’s algorithm amplified misinformation” and that “it consistently chose to maximize its growth rather than implement safeguards on its platforms.”