In an Oxford study, LLMs correctly identified medical conditions 94.9% of the time when given test scenarios directly, vs. 34.5% when prompted by human subjects (Nick Mokey/VentureBeat)

Nick Mokey / VentureBeat: Headlines have been blaring it for years: Large language models (LLMs) can not only pass medical licensing exams but also outperform humans.