
Popular AI Tools Can Hurt Your Mental Health, New Study Finds

According to a new study, AI tools generate harmful content that can trigger eating disorders and other mental health conditions.

Taylor Leamey, Senior Writer
Taylor Leamey writes about all things wellness, specializing in mental health, sleep and nutrition coverage. She has invested hundreds of hours into studying and researching sleep and holds a Certified Sleep Science Coach certification from the Spencer Institute. Not to mention the years she spent studying mental health fundamentals while earning her bachelor's degrees in both Psychology and Sociology. She is also a Certified Stress Management Coach.
[Image: Woman seen through a mass of binary numbers. Francesco Carta fotografo/Getty Images]

Trigger warning: This story discusses eating disorders and disordered eating culture. If you or someone you love is living with an eating disorder, contact the National Eating Disorder Association for resources that can help. In the event of a crisis, dial 988 or text "NEDA" to 741741 to connect with the Crisis Text Line.

I'm the first to admit that the future of mental health is technology. From online therapy to breakthroughs in VR-based treatment, technology has done a lot of good for breaking the stigma and bringing access to those who previously didn't have it. 

However, treading lightly is essential with generative AI tools. According to recent research from the Center for Countering Digital Hate, popular AI tools have been providing users with harmful content surrounding eating disorders around 41% of the time. This has the potential to encourage or exacerbate eating disorder symptoms.   

"What we're seeing is a rush to apply it to mental health. It comes from a good place. We want people to have access to care, and we want people to get the service they need," says Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center.

Mental health is about our minds, something so uniquely human that introducing a non-human to offer solutions to people in their most vulnerable state feels icky at best and potentially dangerous at worst.

"If we go too fast, we're going to cause harm. There's a reason we have approvals processes and rules in place. It's not just to make everything slow. It's to make sure we harness this for good," Torous adds.  

Generative AI chatbots are promoting eating disorders

CCDH's study investigated text- and image-based AI tools with set prompts to assess their responses. Let's start with the text-based tools. ChatGPT, My AI from Snapchat and Google's Bard were tested with a set of prompts that included phrases like "heroin chic" or "thinspiration." The text AI tools provided harmful content promoting eating disorders for 23% of the prompts. 

The image-based AI tools assessed were OpenAI's Dall-E, Midjourney and Stability AI's DreamStudio. When each was given 20 test prompts with phrases like "thigh gap goals" or "anorexia inspiration," 32% of the returned images contained harmful content related to body image.

Yes, these prompts had to be entered to get these negative responses. However, it's not as easy as saying people shouldn't seek this information. Some eating disorder online communities have been known to turn toxic, where members encourage others to engage in disordered eating behaviors and celebrate these habits. 

AI is making things worse. According to CCDH's research, which also examined an eating disorder forum with 500,000 users, generative AI tools are being used there to share unhealthy body images and create damaging diet plans.

Note that there are also healthy and meaningful communities available that don't exhibit these tendencies.

[Image: Man using an AI chatbot on his phone. Tippapatt/Getty Images]

Current AI safeguards aren't enough

AI is a hot-button topic, and companies are racing to get their piece so they can be a part of the new wave of technology. However, rushing out products that aren't adequately tested has proven damaging to vulnerable populations.

Each text-based AI tool in this study had a disclaimer that advised users to seek medical help. However, the safeguards currently in place to protect people are easily bypassed. CCDH researchers also used prompts that included "jailbreaks" for AI. Jailbreaks are techniques that sidestep the safety features of AI tools by using words or phrases that modify their behavior. When jailbreaks were used, 61% of AI responses were harmful.

AI tools have been under fire for "hallucinating," or providing information that seems true but isn't. AI doesn't think. It collects information from the internet and reproduces it, without knowing whether that information is accurate. And that's not the only concern with AI and eating disorders: AI is spreading misleading health information and perpetuating stereotypes within eating disorder communities.

These AI tools aren't just getting information from medical sources. Remember how I mentioned that eating disorder communities have been known to become breeding grounds for unhealthy behavior and competitiveness? AI tools can pull data from those spaces too.

Say you ask one of the popular AI tools how to lose weight. Instead of providing medically approved information, there's a potential you could get back a disordered eating plan that could worsen an eating disorder or push someone toward one. 

AI has a long way to go for privacy 

This research focused on eating disorders, but people living with any mental health condition could be harmed by AI. Anyone seeking this kind of information can get damaging responses.

AI interfaces have a knack for building trust, allowing people to share more information than they typically would when looking for an answer on the internet. Consider how often you share personal information while you're Googling something. AI gives you seemingly correct information without having to talk to someone. No one else knows what you're asking, right? Wrong. 

"Users need to be cautious about asking for medical or mental health advice because, unlike a patient-doctor relationship where information is confidential, the information they share is not confidential, goes to company servers and can be shared with third parties for targeted advertising or other purposes," Dr. Darlene King, chair of the Committee on Mental Health IT,  American Psychiatric Association told CNET in an email. 

There are no protections in place for the medical information you share. In the case of mental health, there's the potential to receive triggering or unwanted ads because of the information you shared with an AI chatbot.

Should we ever use AI in mental health? 

In theory, AI chatbots could be a good resource for people to interact with and receive helpful content on building healthy coping mechanisms and habits. However, even with the best intentions, AI can go awry. That was the case with the National Eating Disorders Association's chatbot, Tessa, which has been suspended after giving problematic recommendations to the community.

"We're seeing these things move too fast. It doesn't mean we shouldn't do it," Torous told CNET. "It's fascinating, it's important and it's exciting. But being optimistic about the long-term future doesn't mean we have to put patients at risk today."

That said, Torous and King both point to use cases for AI tools. All of these depend on future regulations that weigh the risks and benefits. Currently, we're in a marketing free-for-all, which means no one really knows what they're using, what the tool is trained on and what potential biases it has. Regulations and standards are required if the medical field ever hopes to integrate AI.

[Image: Pregnant woman using her computer at her desk. Eva-Katalin/Getty Images]

Education

In the same way that Wikipedia is where many people go for information, AI tools could be a future source of patient education, assuming, of course, that they draw from strict source lists approved by medical institutions.

New technology can expand access to care. One of the most basic ways AI could help people is by letting them learn about and familiarize themselves with their condition, helping them identify triggers and develop coping strategies.

King also suggests AI could aid future medical education and training. However, it's far from being ready for clinical settings because of the data pipeline of AI tools. 

"With ChatGPT, for example, 16% of the pre-trained text often comes from books and news articles, and 84% of the text comes from webpages. The webpage data includes high-quality text but also low-quality text like spam mail and social media content," King told CNET over email. 

"Knowing the source of information provides insight into not only the accuracy of the information but also what biases may exist. Bias can originate from data sets and become magnified through the machine learning development pipeline, leading to bias-related harms," King said. 

Documentation

An article published in JAMA Health Forum suggests another use case for AI in mental health: documentation, a known source of burnout for physicians and nurses in the field. Using AI for documentation could improve clinicians' efficiency.

"Eventually, having AI ethically and professionally help write documentation is a great use of it. It could also then help support staff and perhaps reduce healthcare costs by reducing the administrative burden," Torous said. 

There is potential for AI to be applied to office work like appointment scheduling and billing. But we're not there yet. The American Psychiatric Association recently put out an advisory informing physicians not to use ChatGPT for patient information, as it doesn't have the proper privacy features. 

Too long; didn't read?

As with any new technology, it's essential to ensure generative AI is regulated and used responsibly. In its current state, AI isn't ready to take on the responsibility of interacting with people at their most vulnerable points, nor does it have the privacy features to handle patient data. Including a disclaimer before producing harmful content doesn't mitigate the damage.

Torous is optimistic about the future as long as we do it right. "It's exciting that mental health gets new tools. We have an obligation to use them carefully," he said. "Saying we're going to do experiments and test it doesn't mean we are delaying progress; we're stopping harm."

There's potential if we don't let technology advance past necessary ethical safeguards.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a physician or other qualified health provider regarding any questions you may have about a medical condition or health objectives.