4 ER Horror Stories From People Who Asked AI for Medical Advice


Artificial intelligence has officially joined the list of things people shouldn’t use to self-diagnose. Between Reddit, wellness influencers, and now AI chatbots, the internet has become a revolving door of medical misinformation, and some of it’s sending people straight to the ER.

Dr. Darren Lebl of New York’s Hospital for Special Surgery told The New York Post that “a lot of patients come in and challenge their doctors with something they got from AI.” About a quarter of those “recommendations,” he added, are made up. Research published in Nature Digital Medicine this year found that most major chatbots no longer display medical disclaimers when giving health answers. That’s a big problem.

Here are a few real-life cases where AI’s bedside manner went south fast.

1. The hemorrhoid from hell

A Moroccan man asked ChatGPT about a cauliflower-like lesion near his anus. The bot mentioned hemorrhoids and suggested elastic ligation, a procedure that uses a rubber band to cut off blood flow to swollen veins. He attempted it himself with a thread. Doctors removed the growth after he arrived at the hospital in agony; it turned out to be not a hemorrhoid but a 3-centimeter genital wart.

2. The sodium swap

A 60-year-old man wanted to reduce salt in his diet. ChatGPT told him to replace table salt with sodium bromide, a chemical used to clean swimming pools. He did, for three months. He was hospitalized with bromide poisoning, suffering hallucinations and confusion, and his case was documented in the Annals of Internal Medicine: Clinical Cases.

3. The ignored mini-stroke

After heart surgery, a Swiss man developed double vision. When it returned, he asked ChatGPT, which told him such side effects “usually improve on their own.” He waited a day too long and suffered a transient ischemic attack, a mini-stroke that could have been prevented with immediate care. Researchers wrote about his unfortunate case in Wien Klin Wochenschr.

4. The suicide “coach”

In California, parents sued OpenAI after claiming ChatGPT validated their teenage son’s self-harm plans and failed to flag his suicidal ideation. The case has renewed calls for guardrails on mental health responses and crisis escalation.

AI can explain symptoms, summarize studies, and prep you for doctor visits. But it can’t feel urgency, spot panic, or call an ambulance. And that gap, as these stories show, can be lethal.