Dartmouth College researchers trained an AI to look for signs of emotional disorders by scanning Reddit.

Algorithm Helps Scientists Find Emotional Fingerprints in Reddit Posts

Another group of researchers has developed an AI that looks for emotional signals in internet content. Rather than scanning only for the ‘trigger’ words a person may type, the algorithm looks for patterns in speech or text that may help diagnose various mental illnesses.

From an article written by Ryan Morrison for dailymail.co.uk comes news that these scientists ran their model over billions of texts taken from the popular social platform Reddit.

A team of computer scientists from Dartmouth College in Hanover, New Hampshire, set about training an AI model to analyze social media texts.

It is part of an emerging wave of screening tools that use computers to analyze social media posts and gain insight into people’s mental states.

While other algorithms have looked specifically at the text of a message, this Dartmouth AI looks for intent. People tend to avoid the subject of mental health: from the stigma associated with seeing a counselor to the outright cost, there are roadblocks for people who actually want and need help. There is also a tendency to minimize signs of mental disorders or conflate them with stress, according to Xiaobo Guo, co-author of the new study.

So the team turned to Reddit to begin the experiment, and the researchers saw that the program improved as it went along.

“Social media offers an easy way to tap into people’s behaviors,” Guo said.

They trained their AI model to label the emotions expressed in users’ posts and map the emotional transitions between different posts.
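As a rough illustration of that two-step idea, here is a minimal Python sketch. The keyword lexicon and `label_emotion` function are hypothetical stand-ins for the study's actual trained classifier; the point is only the pipeline: label each post with an emotion, then count the transitions between consecutive posts.

```python
from collections import Counter
from itertools import pairwise  # requires Python 3.10+

# Toy keyword lexicon standing in for the study's trained
# classifier; purely hypothetical, for illustration only.
LEXICON = {
    "happy": "joy", "great": "joy",
    "sad": "sadness", "lost": "sadness",
    "angry": "anger", "scared": "fear",
}

def label_emotion(post: str) -> str:
    """Assign a coarse emotion label to a single post."""
    for word, emotion in LEXICON.items():
        if word in post.lower():
            return emotion
    return "neutral"

def emotion_transitions(posts: list[str]) -> Counter:
    """Count consecutive emotion-to-emotion transitions across
    a user's post history (the raw material for a fingerprint)."""
    sequence = [label_emotion(p) for p in posts]
    return Counter(pairwise(sequence))

posts = ["Feeling great today!", "So sad about the news.", "Still sad."]
print(emotion_transitions(posts))
# Counter({('joy', 'sadness'): 1, ('sadness', 'sadness'): 1})
```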

Emotion-Based Modeling

Different emotional disorders have their own signature patterns of emotional transitions, the team explained. By creating an emotional “fingerprint” for a user and comparing it to established signatures of emotional disorders, the model can detect them.
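To make the “fingerprint” idea concrete, here is one simple realization; this is our assumption for illustration, not the paper's exact method. A user's transition counts are normalized into a probability vector and compared against a reference disorder signature by cosine similarity:

```python
import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]
PAIRS = [(a, b) for a in EMOTIONS for b in EMOTIONS]  # all 25 transitions

def fingerprint(transition_counts: dict) -> np.ndarray:
    """Normalize raw transition counts into a probability vector
    over every emotion-to-emotion pair."""
    v = np.array([transition_counts.get(p, 0) for p in PAIRS], dtype=float)
    total = v.sum()
    return v / total if total else v

def similarity(user_fp: np.ndarray, signature_fp: np.ndarray) -> float:
    """Cosine similarity between a user's fingerprint and a reference
    signature; higher values mean a closer emotional pattern."""
    denom = np.linalg.norm(user_fp) * np.linalg.norm(signature_fp)
    return float(user_fp @ signature_fp / denom) if denom else 0.0

# Example with made-up counts and a made-up reference signature.
user = fingerprint({("joy", "sadness"): 1, ("sadness", "sadness"): 3})
reference = fingerprint({("sadness", "sadness"): 5, ("sadness", "fear"): 1})
print(round(similarity(user, reference), 3))
```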

Other models are built around scrutinizing and relying on the content of the text itself. While those models show high performance, their results can also be misleading.

“For instance, if a model learns to correlate ‘COVID’ with ‘sadness’ or ‘anxiety,’” explains Soroush Vosoughi, the study’s other co-author, it will naturally assume that a scientist studying and posting (quite dispassionately) about COVID-19 is suffering from depression or anxiety. “On the other hand, the new model only zeroes in on the emotion and learns nothing about the particular topic or event described in the posts.”

Morrison’s article goes a little deeper into the scope of the Dartmouth project and shares some information about the differences between these types of AI programs that are meant to assist in the diagnosis and treatment of mental illness. The World Health Organization says one in four people will suffer some form of mental illness at some point in their lives.

Perhaps the development of this type of AI can actually assist people in getting the help that they need.

The findings have been published as a preprint on arXiv.

read more at dailymail.co.uk