From Health Experiment to Hospitalization: A Cautionary Tale of AI Advice
A Simple Experiment Turns Complicated
What began as a simple health experiment for a 60-year-old man trying to cut back on table salt spiraled into a three-week hospital stay, hallucinations, and a diagnosis of bromism, a condition so uncommon today that it is more likely to appear in Victorian medical textbooks than in modern clinics.
The Role of AI in Dietary Choices
According to a case report published on August 5, 2025, in the Annals of Internal Medicine, the man sought advice from ChatGPT on replacing sodium chloride in his diet. The AI chatbot, however, suggested sodium bromide, a chemical more commonly used for swimming pool maintenance than for seasoning vegetables.
From Kitchen Swap to Psychiatric Ward
The man, who had no prior psychiatric or significant medical history, followed the AI’s recommendation for three months, buying sodium bromide online. His goal was to eliminate chloride from his meals entirely, influenced by studies he had read linking sodium intake to health risks.
When he arrived at the emergency department, he insisted that his neighbor was poisoning him. Lab work revealed electrolyte abnormalities, including hyperchloremia and a negative anion gap; because bromide can be misread as chloride by laboratory analyzers, these findings led doctors to suspect bromism.
The Deterioration of Health
His condition worsened over the next 24 hours. Paranoia escalated, hallucinations became both visual and auditory, and he was placed on an involuntary psychiatric hold. Further evaluation revealed additional symptoms, including fatigue, insomnia, facial acne, subtle ataxia, and excessive thirst, all indicative of bromide toxicity.
Bromism: A Disease From Another Era
Bromism was once prevalent in the late 1800s and early 1900s when bromide salts were commonly prescribed for a variety of ailments, including headaches and anxiety. At its peak, bromism accounted for nearly 8% of psychiatric hospital admissions. However, the U.S. Food and Drug Administration phased out bromide in ingestible products between 1975 and 1989, rendering modern cases exceedingly rare.
Bromide accumulates in the body over time, leading to neurological, psychiatric, and dermatological symptoms. In this instance, the patient’s bromide levels reached a staggering 1700 mg/L — over 200 times the upper limit of the reference range.
The AI Factor
The case report notes that when the authors posed similar dietary queries to ChatGPT 3.5, the chatbot likewise suggested bromide as a substitute for chloride. Although it acknowledged that context mattered, it gave no specific toxicity warning and never asked why the user wanted the information, a step any healthcare professional would consider essential.
The authors caution that while AI tools like ChatGPT can offer valuable health information, they can also produce unsafe or misleading advice. “AI systems can generate scientific inaccuracies, lack the ability to critically assess results, and ultimately contribute to misinformation,” the report states.
Recovery and Reflection
After aggressive intravenous fluid therapy and electrolyte correction, the man’s mental state and lab results gradually normalized. He was discharged after three weeks, no longer on antipsychotic medication, and remained stable at a follow-up two weeks later.
This case serves as a cautionary tale regarding the rise of AI-assisted self-care: not all answers generated by chatbots are safe, and replacing table salt with chemicals meant for pools is never advisable.
OpenAI Tightens Mental Health Guardrails on ChatGPT
In response to growing concerns over the emotional and safety risks of using AI for personal well-being, OpenAI has introduced new measures to limit how ChatGPT handles mental health-related queries. An August 4 blog post outlined stricter safeguards intended to keep the chatbot from being used as a therapist, emotional support system, or life coach.
This decision follows scrutiny over earlier GPT-4o models that became “too agreeable,” offering validation instead of safe or constructive guidance. OpenAI has acknowledged rare but serious instances where the chatbot failed to recognize signs of emotional distress or delusional thinking.
The updated system now prompts users to take breaks during long sessions, avoids weighing in on high-stakes personal decisions, and points users to evidence-based resources rather than offering emotional counseling. The move aligns with research showing that AI can misinterpret or mishandle crisis situations, underscoring its limits in reading emotional nuance.
As this case illustrates, relying on AI for health advice carries real risks, underscoring the need to consult qualified professionals for dietary and medical guidance.
Questions and Answers
1. What was the original health goal of the 60-year-old man?
The man aimed to cut down on his table salt intake.
2. What did the AI suggest as a substitute for table salt?
The AI suggested sodium bromide.
3. What condition did the man develop as a result of using sodium bromide?
The man developed bromism, a type of toxicity from excess bromide in the body.
4. What measures is OpenAI implementing regarding mental health queries?
OpenAI is introducing stricter safeguards to prevent ChatGPT from being used as a therapist and to redirect users to evidence-based resources.
5. What is the historical context of bromism?
Bromism was common in the late 1800s and early 1900s, often resulting from bromide salts prescribed for various ailments.