12/13/2025
Both physicians and large language models face relentless pressure to always have an answer, a pressure that rewards confident-sounding responses over admissions of uncertainty. Medical culture prizes authority over accuracy, while LLMs are trained on datasets contaminated with biased research, flawed health records, and “publish or perish” science. The result? AI learns to replicate human misinformation at scale, generating hallucinations that sound authoritative but perpetuate the same BS that trained them. Breaking this cycle demands epistemic humility in medical education and abstention mechanisms in AI: teaching both humans and machines that “I don’t know” is sometimes the most competent answer. Unfortunately, “I don’t know” doesn’t generate revenue or win over venture capitalists.
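
On the AI side, one common form of abstention is selective prediction: return an answer only when the model’s uncertainty estimate clears a threshold, otherwise say “I don’t know.” Here is a minimal sketch of that idea, assuming access to some calibrated confidence score; the wrapper, the scores, and the 0.8 cutoff are all hypothetical, not anything from the linked article.

```python
# Toy selective-prediction wrapper: answer only when the model's own
# confidence clears a threshold, otherwise abstain with "I don't know."
# The confidence values and the 0.8 cutoff are illustrative assumptions;
# real abstention needs well-calibrated uncertainty estimates.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    confidence: float  # assumed to be a calibrated probability in [0, 1]


def answer_or_abstain(candidate: Answer, threshold: float = 0.8) -> str:
    """Return the candidate answer only if its confidence clears the threshold."""
    if candidate.confidence >= threshold:
        return candidate.text
    return "I don't know."  # abstention is a valid, often safer, output


if __name__ == "__main__":
    confident = Answer("Amoxicillin is a penicillin-class antibiotic.", 0.95)
    shaky = Answer("This rash is definitely measles.", 0.41)
    print(answer_or_abstain(confident))  # prints the answer
    print(answer_or_abstain(shaky))      # prints "I don't know."
```

The hard part, of course, is not the thresholding logic but getting confidence scores that actually track correctness, and building incentives where abstaining is acceptable at all.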
https://www.bmj.com/content/391/bmj.r2570