Apple bought the emotion AI company Emotient in 2016, and appears to be integrating its algorithms across its product line. (Source: Emotient)

Apple’s Use of AI for Tracking Emotion Represents the Latest Invasive Technology

Are you depressed? Or just a little stressed? Perhaps you haven’t been feeling well. If you use Apple products, they may know it before you do.

Apple has begun testing algorithms to track your feelings, even though you probably have no idea it’s doing so.

Apple’s use of emotion AI is alarming, according to an article by Ruth Reader for fastcompany.com. The technology behind this intrusion into privacy is pitched as a way to prevent suicide or violence by monitoring words, texts, and even how many errors people make while typing.
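Neither Reader’s article nor Apple explains how such monitoring works under the hood, but one of the signals mentioned, typing errors, is easy to illustrate. The Python sketch below is purely hypothetical; the class, metric, and flagging idea are invented for illustration and do not describe Apple’s actual software.

```python
from dataclasses import dataclass

@dataclass
class KeystrokeWindow:
    """Hypothetical rolling window of keystroke events.

    'keystrokes' counts all key presses; 'backspaces' counts corrections.
    Neither the metric nor its use comes from Apple; this is an
    illustrative assumption only.
    """
    keystrokes: int = 0
    backspaces: int = 0

    def record(self, key: str) -> None:
        self.keystrokes += 1
        if key == "BACKSPACE":
            self.backspaces += 1

    def error_rate(self) -> float:
        """Fraction of key presses that were corrections."""
        return self.backspaces / self.keystrokes if self.keystrokes else 0.0

window = KeystrokeWindow()
for key in ["h", "e", "l", "l", "o", "BACKSPACE", "BACKSPACE", "p", "!"]:
    window.record(key)

# A system like the one described might flag a sustained rise in this
# rate relative to a user's own baseline -- a purely hypothetical rule.
print(f"error rate: {window.error_rate():.2f}")  # error rate: 0.22
```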

An ongoing study between UCLA and Apple shows that the iPhone maker is testing whether facial recognition, patterns of speech, and an array of other passively tracked behaviors can detect depression. The report, from Rolfe Winkler of The Wall Street Journal, raises concerns about the company’s foray into a field of computing called emotion AI, which some scientists say rests on faulty assumptions.
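The Journal report doesn’t describe the model behind the study, but the general pattern it hints at, fusing several weak passive signals into a single risk estimate, can be sketched in a few lines. Every feature name, weight, and value below is an invented assumption, not Apple’s or UCLA’s method.

```python
# Hypothetical fusion of passive signals into one screening score.
# Feature names and weights are invented for illustration; the WSJ
# report does not say how the UCLA/Apple study combines measurements.
signals = {
    "facial_affect": 0.4,    # e.g., reduced expressivity, normalized 0-1
    "speech_flatness": 0.7,  # e.g., monotone prosody, normalized 0-1
    "typing_errors": 0.3,    # e.g., error rate vs. personal baseline
    "mobility_drop": 0.6,    # e.g., fewer location changes than usual
}
weights = {
    "facial_affect": 0.3,
    "speech_flatness": 0.3,
    "typing_errors": 0.2,
    "mobility_drop": 0.2,
}

# Weighted sum: a deliberately naive stand-in for whatever statistical
# model such a study might actually use.
score = sum(weights[name] * value for name, value in signals.items())
print(f"composite score: {score:.2f}")  # composite score: 0.51
```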

Android users can relax. Or can they? It turns out plenty of companies are using emotion AI, and major employers have folded it into their recruitment processes.

Dunkin’ Donuts, Unilever, Carnival Cruise Lines, and IBM have used it to gauge candidates’ personalities. The technology is also being used, both experimentally and commercially, in cars to detect drowsy drivers, in prison populations to detect stress, and, during the pandemic, in digital classrooms to determine whether online students were struggling with their lessons.

As Seeflection.com recently reported, Amazon delivery drivers were unhappy with cameras watching their every move; emotion AI seems even more intrusive. Unsurprisingly, there is significant pushback against emotion-tracking AI.

Is Emotion AI Too Flawed?

The technology has faced criticism, not only for collecting vast amounts of sensitive data but also because it’s not very good.

“In general the products that we’ve seen that do emotion detection or mental health [detection] have not proven to be super accurate,” says Hayley Tsukayama, a legislative activist at the Electronic Frontier Foundation.

In a 2019 paper published in the academic journal Psychological Science in the Public Interest, a group of researchers led by psychologist Lisa Feldman Barrett laid out the challenges with emotion AI:

“Efforts to simply ‘read out’ people’s internal states from an analysis of their facial movements alone, without considering various aspects of context, are at best incomplete and at worst entirely lack validity, no matter how sophisticated the computational algorithms.”

The idea that this technology betters society by addressing the problems its algorithms flag is dicey at best. This is the kind of AI that Elon Musk warned us about. The corporate collection of personal information invades consumers’ privacy, both through the corporations themselves and through the other companies they choose to share that data with.

Read more at fastcompany.com