Man Poisoned Himself After Following ChatGPT’s Medical Advice


A 60-year-old American man suffered a severe psychotic episode after following dietary advice from ChatGPT that led him to replace regular table salt with a chemical commonly used to clean swimming pools.

The man was hospitalized for three weeks, suffering from severe hallucinations, paranoia, and anxiety. Doctors reported the case in a US medical journal, revealing he had developed bromism — a condition caused by excessive bromide intake that has become rare since bromide medications were phased out in the 20th century.

Instead of sodium chloride (table salt), the man used sodium bromide, a toxic compound once found in sedative pills but now primarily used in pool-cleaning products. Symptoms of bromism include psychosis, delusions, skin rashes, and nausea. In the early 20th century, bromism accounted for up to 8% of psychiatric hospital admissions.

The situation worsened when the man arrived at an emergency room claiming his neighbor was poisoning him, despite having no prior history of mental illness.

Concerned by the case, doctors tested ChatGPT themselves and found that the AI still suggested sodium bromide as a salt substitute, without any warning about its health risks.

Published in the Annals of Internal Medicine, the case raises alarms about the potential for AI-generated advice to cause preventable harm, and serves as a stark reminder that machine-generated guidance can lead to dangerous outcomes.

AI chatbots have issued questionable health advice before; last year, for example, Google’s AI search feature recommended eating rocks as a way to stay healthy — advice apparently drawn from satirical sources.

OpenAI, the company behind ChatGPT, recently announced its GPT-5 update improves health-related responses. However, a company spokesperson cautioned users not to rely solely on AI for factual information or professional advice.

Clinical psychologist Dr. Paul Losoff warned DailyMail.com that dependence on AI poses serious risks, especially for individuals with anxiety or depression. He explained that overreliance on AI may worsen mental health symptoms such as distorted thinking, chronic pessimism, or cognitive clouding.

“People may misinterpret AI feedback, which can lead to harm,” Dr. Losoff said, adding that the risk is heightened for people experiencing acute mental illnesses such as schizophrenia, who may be especially prone to misreading AI responses.

He stressed that AI can make mistakes, and reliance on it during critical mental health moments could exacerbate problems rather than provide help.
