Not every idea is a winner, even if it is powered by AI. (Source: Storyblocks stock image)

AI Incident Database Highlights the Shame of Inventors and Companies Using the Tech

It’s time to talk about what works and what doesn’t with AI, and then to submit examples to the AI Hall of Shame with the help of a new database. Mistakes are made constantly as companies improve their products and services, AI-driven or otherwise.

Keeping track of mistakes is critical when it comes to new technology. Sean McGregor, a machine learning engineer at the voice-processing startup Syntiant, was highlighted in a wired.com article for creating the tracking resource. He says there’s a good reason to have an AI Hall of Shame. Here are a few examples of faulty, problematic or even dangerous AI-based technologies:

“The AI Incident Database launched late in 2020 and now contains 100 incidents, including #68, the security robot that flopped into a fountain, and #16, in which Google’s photo organizing service tagged Black people as ‘gorillas.’ Think of it as the AI Hall of Shame.”

The AI Incident Database is hosted by Partnership on AI, a nonprofit founded by large tech companies to research the downsides of the technology. McGregor started the “dishonor roll” because:

“AI allows machines to intervene more directly in people’s lives, but the culture of software engineering does not encourage safety.”

McGregor Has Dystopian Considerations

With AI making its way into the fabric of the world’s day-to-day existence, it is certainly logical to keep track of the less-than-stellar performances of some of our ideas.

“Often I’ll speak with my fellow engineers and they’ll have an idea that is quite smart, but you need to say, ‘Have you thought about how you’re making a dystopia?’” McGregor says.

He hopes the incident database can work as both a carrot and a stick for tech companies, providing a form of public accountability that encourages them to stay off the list while helping engineering teams craft AI deployments less likely to go wrong.

Here are a few of the ideas that made the list of failures:

The first entry in the database collects accusations that YouTube Kids displayed adult content, including sexually explicit language. The most recent, #100, concerns a glitch in a French welfare system that can incorrectly determine people owe the state money. In between, there are autonomous vehicle crashes, like Uber’s fatal incident in 2018, and wrongful arrests due to failures of automatic translation or facial recognition.

One of his favorite incidents is an AI blooper by a face-recognition-powered jaywalking-detection system in Ningbo, China, which incorrectly accused a woman whose face appeared in an ad on the side of a bus.

Not Everyone Agrees with Keeping Records

Georgetown students are working to create a companion database that includes details of each incident, such as whether the harm was intentional and whether the problem algorithm acted autonomously or with human input. With that comes the need to place blame and dole out consequences, and that is where some people start to get nervous.
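To picture what such a companion record might capture, here is a minimal, purely hypothetical sketch in Python. The field names and categories below are our own illustrative assumptions based on the details mentioned above, not the Georgetown team’s actual schema.

```python
# Hypothetical sketch of one record in a companion incident database.
# The schema is an illustrative assumption, not the real project's design.
from dataclasses import dataclass
from enum import Enum


class HarmIntent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"
    UNKNOWN = "unknown"


class SystemAutonomy(Enum):
    FULLY_AUTONOMOUS = "fully_autonomous"    # algorithm acted on its own
    HUMAN_IN_THE_LOOP = "human_in_the_loop"  # human input shaped the outcome


@dataclass
class IncidentRecord:
    incident_id: int          # e.g., 68 (fountain robot) or 16 (photo tagging)
    description: str          # short summary of what went wrong
    harm_intent: HarmIntent   # was the harm intentional?
    autonomy: SystemAutonomy  # did the algorithm act autonomously?


# Example: recording the fountain-robot mishap mentioned earlier.
record = IncidentRecord(
    incident_id=68,
    description="Security robot flopped into a fountain",
    harm_intent=HarmIntent.UNINTENTIONAL,
    autonomy=SystemAutonomy.FULLY_AUTONOMOUS,
)
print(record)
```

Even a toy structure like this makes clear why the effort raises the stakes: once intent and autonomy are recorded as explicit fields, assigning responsibility becomes much harder to avoid.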

Helen Toner, Director of Strategy at Georgetown’s Center for Security and Emerging Technology (CSET), says the exercise is informing research on the potential risks of AI accidents. She also believes the database suggests that lawmakers or regulators eyeing AI rules should consider mandating some form of incident reporting, similar to what is required in aviation.

EU and U.S. officials have shown a growing interest in regulating AI, but the technology is so varied and broadly applied that crafting clear rules that won’t be quickly outdated is a daunting task.

Daunting indeed. There is plenty of information to sift through in the piece.

And now you too can submit your favorite AI fails to McGregor’s AI Incident Database, and perhaps yours will make the AI Hall of Shame.

read more at wired.com