Autonomous Vehicles Can Be Racist Due to Compromised Datasets
A professor at Yale has written a scathing opinion piece about racial bias baked into Hollywood graphics algorithms and, possibly, into some autonomous vehicles. It may sound far-fetched, but Theodore Kim has been writing algorithms for Hollywood for 20 years. He even won an Oscar. His insights appeared on latimes.com this week.
The way AI is written and trained is a big problem for Kim: if you don't train an algorithm on what's in the real world, it won't recognize the real world. He argues that many of these algorithms were, in effect, written by and for white people.
Kim writes: Modern AI systems are "trained" on massive datasets of photographs and video footage from various sources, and use that training to determine appropriate behavior. But if the footage doesn't include many examples of specific situations, like slowing down near emergency vehicles, the AI will not learn the appropriate behavior. Thus, they crash into ambulances.
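The coverage gap Kim describes can be illustrated with a deliberately simplified sketch. The dataset below is entirely made up for illustration; the point is only that when one class is rare in training data, a naive model can look accurate overall while failing completely on the rare, safety-critical case:

```python
from collections import Counter

# Hypothetical toy dataset: "ordinary" road scenes dominate,
# "emergency_vehicle" scenes are badly underrepresented.
train = ["ordinary"] * 990 + ["emergency_vehicle"] * 10

# A degenerate "model" that always predicts the majority class --
# roughly the fallback behavior of a system undertrained on rare cases.
majority = Counter(train).most_common(1)[0][0]

# A balanced test set exposes the failure.
test = ["ordinary"] * 50 + ["emergency_vehicle"] * 50
accuracy = sum(majority == y for y in test) / len(test)
recall_rare = sum(majority == y for y in test if y == "emergency_vehicle") / 50

print(accuracy)     # 0.5 overall
print(recall_rare)  # 0.0 on the rare, safety-critical class
```

Training accuracy on the skewed data would be 99%, which is exactly why such gaps go unnoticed until the system meets the rare case on a real road.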
Tesla is facing a federal probe over several crashes involving emergency vehicles with flashing lights. Kim attributes these to poorly trained AI.
Kim is blunt in his opinion that some of this AI is racist and that it is still in use today.
“Using these algorithms to train AIs is extremely dangerous, because they were specifically designed to depict white humans.”
All the sophisticated physics, computer science, and statistics that undergird this software were designed to realistically depict the diffuse glow of pale, white skin and the smooth glints in long, straight hair. In contrast, computer graphics researchers have not systematically investigated the shine and gloss that characterizes dark and Black skin, or the characteristics of Afro-textured hair. As a result, the physics of these visual phenomena are not encoded in certain Hollywood algorithms.
If we are being asked to put our safety in the hands of an AI algorithm driving us on the nation's highways, then it really should be able to recognize emergency vehicles and flashing lights. That seems obvious. But what about recognizing a Black EMT standing beside that emergency vehicle?
“Synthetic training data are a convenient shortcut when real-world collection is too expensive. But AI practitioners should be asking themselves: Given the possible consequences, is it worth it?”
If the answer is no, he says, they should be pushing to do things the hard way: by collecting real-world data.
Tesla, as well as the Hollywood studios, should read Kim's piece. That these racist or ignorant algorithms are still in use is a sad commentary on our society.
Read more at latimes.com.