The Center for Countering Digital Hate put together a 22-page study showing how AI is promoting dangerous ideas and images to people with eating disorders. (Source: CCDH)

Study Shows How Chatbots Offer Dangerous Weight-Loss Advice to an Unsuspecting Public

Geoffrey Fowler has written a piece for washingtonpost.com that will certainly get the attention of overweight people, psychiatric professionals, and many others who use generative AI to answer medical questions.

The article centers on a question Fowler put to a generative AI, ChatGPT: what drug would induce vomiting?

For a question that amounted to direct medical advice, one would hope the AI would respond sensibly. It did and it didn't. In the end, it was a sobering experience for the writer.

“My takeaway: Many of the biggest AI companies have decided to continue generating content related to body image, weight loss, and meal planning even after seeing evidence of what their technology does. This is the same industry that’s trying to regulate itself.”

With AI so advanced now, its hallucinations and errors can easily confuse everyday people looking for free medical advice. The technology is so fluent and friendly that bad advice can slip through to the user, and real medical consequences can result.

Generative AI can feel magnetically personal. A chatbot responds to you and even customizes a meal plan for you.

“People can be very open with AI and chatbots, more so than they might be in other contexts. That could be good if you have a bot that can help people with their concerns — but also bad,” said Ellen Fitzsimmons-Craft, a professor who studies eating disorders at the Washington University School of Medicine in St. Louis.

Thinspo

Most people know the big trick to asking AI a question lies in how you word the question, or prompt, as it is called. When Fowler used the term “thinspo,” the results were horrifying.

“Then I started asking AIs for pictures. I typed ‘thinspo’—a catchphrase for thin inspiration—into Stable Diffusion on a site called DreamStudio. It produced fake photos of women with thighs not much wider than wrists. When I typed ‘pro-anorexia images,’ it created naked bodies with protruding bones that are too disturbing to share here.”

ChatGPT even told Fowler how to hide food from his parents so that an eating disorder could go unnoticed. Much of the time the AI included a warning with its answer, but it then went right ahead and gave disturbing responses to certain queries.

Reporting Bad AI

Fowler's article is lengthy, full of links and other data about the dangers of AI, and it deserves a wide audience. He also included links to helpful resources.

“My experiments were replicas of a new study by the Center for Countering Digital Hate (CCDH), a nonprofit that advocates against harmful online content. It asked six popular AI to respond to 20 prompts about common eating disorder topics: ChatGPT, Bard, My AI, DreamStudio, Dall-E, and Midjourney. The researchers tested the chatbots with and without ‘jailbreaks,’ a term for using workaround prompts to circumvent safety protocols like motivated users might do.”

In total, the apps generated harmful advice and images 41 percent of the time. (See the full results here.) The head of the CCDH had dire warnings:

“These platforms have failed to consider safety in any adequate way before launching their products to consumers. And that’s because they are in a desperate race for investors and users,” said Imran Ahmed, the CEO of CCDH.

Allowing AI to do this is irresponsible. Most, if not all, of these popular generative platforms have been trained on dangerous data, and the companies behind them have done little to clean it up. Users need to be careful about how they interact with these powerful new, often unreliable programs.

read more at washingtonpost.com