Asymmetric sociodemographic disparity in evidence-grounded clinical AI

AI-assisted clinical care may compound, rather than correct, existing health inequities. We applied Omar and colleagues' validated four-domain emergency-medicine benchmark to OpenEvidence (OE), a literature-grounded clinical LLM used by tens of thousands of US physicians daily, across 100 emergency-department cases and 20 sociodemographic labels. OE was consistent on the codified clinical decisions (triage, workup, and treatment) but diverged sharply on mental-health screening, flagging many historically marginalized groups three to ten times more often than demographically unmarked cases. Cases labeled as unhoused received mental-health recommendations in 78 to 87 percent of responses (versus a 9 percent rate for no-identifier controls); cases labeled as transgender in 22 to 24 percent; and cases labeled as Black transgender women in 47 percent. A pre-registered audit of 193 free-text rationales localized the differential to the inner layer of the response: the structure and tone of the rationale rather than the recommendation itself. Literature grounding may redistribute sociodemographic disparity in clinical AI rather than remove it. As clinical LLMs move toward agentic deployment, equity audits should examine how evidence is applied to each patient, not only whether citations are present.