AI chatbots “pose significant risks for people susceptible to eating disorders,” researchers warned on Monday. They report that tools from companies like Google and OpenAI provide diet advice, tips on how to hide disorders, and AI-generated “thinspiration.”
Researchers at Stanford and the Center for Democracy and Technology have identified many ways in which publicly available AI chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Mistral’s Le Chat, can harm people prone to eating disorders. Many of these risks, they say, stem from features intentionally introduced to boost engagement.
In the most extreme cases, chatbots can be active participants in helping users conceal or maintain eating disorders. Researchers say Gemini offered makeup tips to hide weight loss and ideas on how to fake eating, while ChatGPT offered advice on how to hide frequent vomiting. Other AI tools are used to create AI-generated “thinspiration,” content that inspires or pressures someone to conform to certain body standards, often through extreme measures. The ability to instantly create hyper-personalized images makes the resulting content “feel more relevant and attainable,” researchers say.
Sycophancy, a flaw that AI companies themselves acknowledge is common in their models, is unsurprisingly a problem for eating disorders as well. It contributes to undermining self-esteem, reinforcing negative emotions, and promoting harmful self-comparisons. Chatbots also carry biases and are likely to perpetuate the misconception that eating disorders “only affect thin, white, cisgender women,” the report says, which can make it harder for people to recognize symptoms and seek treatment.
The researchers warn that existing guardrails in AI tools fail to capture the nuances of eating disorders such as anorexia, bulimia, and binge eating. Guardrails, they write, “tend to miss subtle but clinically important cues that trained professionals rely on, leaving many risks unaddressed.”
The researchers also found that many clinicians and caregivers appear unaware of how generative AI tools affect people susceptible to eating disorders. They urged clinicians to “become familiar with popular AI tools and platforms,” probe their weaknesses, and talk frankly with patients about how they use them.
