The Incident: AI Diet Advice Gone Wrong
A 60-year-old man in the United States was hospitalized after he spent three months replacing table salt with an industrial chemical on the advice of a diet plan generated by ChatGPT.
According to the New York Post, the man turned to the AI chatbot to create a healthier diet plan. His concern arose after he read about the potential negative health effects of sodium chloride (common table salt). Seeking an alternative, he asked ChatGPT what could be used in its place.
The chatbot reportedly suggested sodium bromide — a compound widely used in industrial cleaning and water treatment processes but not approved for human consumption.
Sodium Bromide: A Dangerous Substitute
Sodium bromide is a bromine-based chemical primarily used in pool cleaning agents, photographic processing, and certain industrial applications. Medical experts stress it is toxic if ingested in significant amounts.
Unlike sodium chloride, which the human body requires in small quantities to maintain electrolyte balance, sodium bromide can disrupt the nervous system and cause severe neurological symptoms, a toxic condition known as bromism.
The man, who reportedly had some background in nutrition, decided to experiment: he removed salt from his diet entirely and began seasoning his meals with sodium bromide purchased online.
Symptoms and Hospitalization
Three months later, the man developed alarming symptoms. He experienced extreme thirst, loss of physical coordination, and mental confusion. His condition deteriorated to the point where he required urgent medical attention.
Hospital staff treated him with fluids, electrolytes, and antipsychotic medication. At one stage, he allegedly attempted to leave the hospital against medical advice, prompting doctors to transfer him to a psychiatric ward.
After three weeks of treatment, his symptoms improved, and he was discharged.
Expert Warnings on AI Medical Advice
The case was documented by physicians in a report published by the American College of Physicians, who issued a warning about the dangers of relying on artificial intelligence for health guidance.
They emphasized that AI systems such as ChatGPT are not medical professionals and can produce factually incorrect or even dangerous recommendations. AI chatbots generate responses from statistical patterns in their training data, not from clinical expertise.
“Medical advice should always be verified by a qualified healthcare provider,” the experts said, urging the public to treat online suggestions with caution, particularly when they involve chemical substances or medication.
AI in Healthcare: Opportunities and Risks
While AI has shown potential in healthcare — from analyzing medical images to assisting with administrative tasks — this case highlights its limitations. AI chatbots lack the ability to fact-check or assess the safety of their recommendations in real time.
The US Food and Drug Administration (FDA) does not regulate general-purpose AI chatbots, meaning their output is not subject to the same safety checks as medical devices or pharmaceuticals.
According to a 2023 survey by the Pew Research Center, 27% of Americans said they had sought health-related information from AI tools. Experts say that without proper oversight, such trends could increase the risk of misinformation-driven health crises.
The Takeaway
This incident serves as a cautionary tale in an era where AI tools are becoming more accessible and influential. While AI can be a valuable starting point for research, self-prescribing based on unverified AI advice can be dangerous or even life-threatening.
Medical professionals recommend that people consult licensed doctors before making significant changes to their diet, medication, or lifestyle. As AI becomes more integrated into daily life, the responsibility for verifying its advice remains firmly with the user.