‘I was going to die’: Woman wrongly diagnosed with heart attack as AI error goes unchecked by doctor


A New York woman lived in fear for a month after an artificial intelligence system incorrectly flagged her as having suffered a heart attack during a routine medical check-up. The case has sparked fresh concerns about the growing use of AI in healthcare and the risks of doctors relying too heavily on automated diagnoses.

Meg Bitchell went to her primary care doctor for a standard check-up before her insurance was set to expire. During the visit, she received an electrocardiogram (EKG) test that was analyzed by an AI system. The AI flagged her results as showing signs of a previous heart attack and indicated she was "in really bad health," leading her doctor to refer her to a cardiologist for further testing.

"I passively, for one month, thought I was going to die," Bitchell said in a TikTok video that has since gone viral, accumulating more than 44,800 views. She lived with the fear and uncertainty for weeks while waiting for her cardiology appointment, wondering how she could have missed the signs of a heart attack.

When Bitchell finally saw the cardiologist, she received surprising news. The specialist told her she was "fine" and explained that the AI had made an error. According to Bitchell, the cardiologist revealed that her primary care doctor had simply signed off on the AI's reading without properly reviewing her chart or the results himself.

AI diagnosis errors on the rise in healthcare

Bitchell's experience reflects a growing trend in medical malpractice cases involving AI tools. Data from 2024 showed a 14 percent increase in malpractice claims involving AI tools compared to 2022, with the majority stemming from diagnostic AI used in radiology, cardiology, and oncology.

"I found out my general check up doctor used AI to read my EKG and sent me to the cardiologist because the AI said I had a heart attack and the cardiologist was like why on earth are you here and so basically I thought I was dying for a month for no reason!!!!!!! Thank you AI!!!!!"
meg "Yooper" bitchell (@MeganBitchell), September 25, 2025

AI systems that read EKGs can produce incorrect results for several reasons, including poor-quality input data, inadequate signal quality, or limitations in the algorithm's training. These systems are designed to assist doctors, not replace their judgment, and they only work effectively when healthcare providers double-check the results rather than accepting them blindly. Much like the notoriously unreliable artificial intelligence seen in some video games, medical AI can fail when it lacks proper oversight.

One significant concern is algorithmic bias. If an AI system is trained primarily on data from one demographic group, such as white men, it may struggle to accurately interpret EKGs from patients who don't fit that profile. Research has shown that AI algorithms can perpetuate healthcare disparities, with some studies finding that these systems perform differently across racial and ethnic groups.

The "black box" nature of many AI systems adds another layer of complexity. Even the doctors using them often cannot understand how the AI reached its conclusion, making it difficult to assess whether a diagnosis makes sense. Legal experts warn that this lack of transparency creates challenges in determining liability when AI-assisted diagnoses go wrong, as it becomes unclear whether the fault lies with the doctor, the AI system, or the software developers. The problem extends beyond healthcare, as artificial intelligence systems across many industries continue to struggle with reliability and transparency.