🩻

How People Are Using AI For Clinical Diagnosis

Doctors and patients are now throwing hard cases at frontier models (e.g., GPT-5 Pro) and reporting striking hits, from photo-plus-symptom identifications to prioritized differentials that read like a top subspecialist’s note. The mood in user stories is awed and practical (“it cracked what stumped us for months”), while expert reactions are sharper and more split: some point to validated wins (sepsis triage, retina-based lupus detection), agentic systems claiming big accuracy gains, and med-school curricula catching up; others flag nuance, citing RCTs where the AI alone beat clinicians even though giving clinicians the AI didn’t help, and reminders that diagnosis may soon be commoditized while judgment and management remain the real game. Net: in 2025, AI is moving from clever consult to co-diagnostician, powerful, faster, and often right, but still demanding guardrails, evidence, and human stewardship.

🤖 ai summary based on 17 tweets

Insights from builders, researchers, investors, and domain experts. Opinions are the authors’ own.


How good is AI at making medical diagnoses? Good enough to help—and to hurt. For @NewYorker, I spent months talking to patients, doctors, and researchers about how to make the most of a powerful new technology and how to minimize the side effects: https://t.co/FXxaBy80l7

When AI “commoditizes diagnosis” we have only set the table stakes for the real game: wise and skillful clinical decision-making and management. It’s only the current decrepit state of access to human expert diagnostics that makes this first step at all compelling. HT @venkmurthy

Agentic A.I. vs experienced physicians for diagnosis of > 300 complex diagnostic cases: 4-fold higher accuracy and 20% lower cost https://t.co/DT8jYrKhZK @HarshaNori @chrisck @Dominic1King @mustafasuleyman @erichorvitz https://t.co/5CcTjBPLAY

Just staring at the genome is not enough. As with human experts, having the AI "know" about clustering of clinical signs and symptoms and molecular pathways can significantly increase the accuracy of diagnosis of rare and even first-of-their-kind diseases. https://t.co/oBlhu7KfIR

"But I am blown away by how much we have learned about Alzheimer’s over the last couple of years."—@BillGates https://t.co/FTRMu4IVc8 "This is the moment to spend more money on research, not less." Agree 💯 https://t.co/c7RAVzUsfc https://t.co/mq02Baoj3A https://t.co/HARNtPHLUq

Thanks to @pranavrajpurkar & team for leading this @npjDigitalMed exploration of video-text generative models, including a focus on unique opportunities in GI endoscopy, from workflow phase recognition to integration of multimodal data. Open access: https://t.co/wFNe0QC3Bf https://t.co/Irr7Uh8k1q

I saw a patient this week with a number of complex symptoms. He used ChatGPT to take his history, to plan his investigation, and to formulate a differential diagnosis and a treatment plan for the different possible outcomes. I’ve never seen such a comprehensive and accurate assessment by any human doctor! As I’ve said for years, those professionals who stay silent and keep their heads down are destined for replacement by AI! This is the end of medicine as we know it, and you let the profession down with your cowardice! And now you are being replaced!

Celiac disease is an autoimmune disorder in which ingesting gluten triggers an immune response that damages the small intestine lining, impairing nutrient absorption. An accurate read on a gut biopsy is essential to make the diagnosis. AI seems to have achieved that threshold. @NEJM_AI (link in reply)

Our first RCT on using an LLM for diagnostic reasoning is out! And the results are 🔥🌶️... adding ChatGPT did NOT improve diagnostic accuracy or reasoning, and the AI alone outperformed ALL the humans. What does this mean? A 🧵⬇️ https://t.co/MgkhrrkKmT

One quick check...