Speak your PR description, bug reproduction, or Cursor prompt. Wispr Flow auto-tags file names, preserves variable names, and formats everything for immediate paste into GitHub, Jira, or your editor.
No re-typing. No context gaps. No mangled syntax. Works natively inside Cursor, Warp, and every IDE at the system level.
4x faster than typing. 89% of messages sent with zero edits. Used by engineering teams at OpenAI, Vercel, and Clay.
Hi!
Welcome to AIMedily.
OpenAI just released a version of ChatGPT designed for clinicians.
It’s meant to support real clinical work—like reviewing evidence and helping with documentation—but access isn’t open to everyone.
Right now, it’s limited to verified U.S. clinicians, so you have to register and go through a credential check to use it.
It feels like a meaningful step forward, but it's still early. Let's see what happens in the next few weeks.
Now, let’s dive into today’s issue.
🤖 AIBytes
Researchers tested whether physicians trained in AI literacy are still influenced by incorrect AI recommendations during diagnosis.
🔬 Methods
Randomized clinical trial
44 physicians completed AI literacy training (20 hours)
Diagnosed 6 clinical cases each (264 total cases)
Two groups:
Control: received correct AI suggestions
Treatment: received AI suggestions with errors in 3 cases
Physicians could choose whether to consult AI
📊 Results
Physicians with incorrect AI suggestions performed worse
Diagnostic reasoning accuracy:
84.9% (correct AI) vs 73.3% (incorrect AI)
↓ 11.6 percentage points
Top diagnosis accuracy:
90.5% vs 76.1%
↓ 14.4 percentage points
Effect persisted despite prior AI training
More experienced physicians showed larger drops in performance
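The headline gaps above are simple differences between the two arms. A minimal sketch, using only the percentages reported in this summary (the paper itself may report adjusted estimates):

```python
# Reproduce the accuracy gaps between the correct-AI and
# incorrect-AI arms from the figures quoted in this issue.

correct_ai = {"diagnostic reasoning": 84.9, "top diagnosis": 90.5}
incorrect_ai = {"diagnostic reasoning": 73.3, "top diagnosis": 76.1}

for metric in correct_ai:
    drop = correct_ai[metric] - incorrect_ai[metric]
    print(f"{metric}: down {drop:.1f} percentage points")
```

This prints a drop of 11.6 points for diagnostic reasoning and 14.4 for top-diagnosis accuracy, i.e. sizable absolute losses from a handful of erroneous AI suggestions.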

🔑 Key Takeaways
Physicians rely on AI suggestions—even when they are wrong
AI errors can reduce diagnostic accuracy
Training alone does not prevent this effect
Safe use of AI requires strong safeguards and oversight
🔗 Qazi IA, Ali A, Khawaja AU, et al. Automation Bias in Large Language Model–Assisted Diagnostic Reasoning among Physicians Trained in AI Literacy — A Randomized Clinical Trial. NEJM AI. 2026;3(5).
https://doi.org/10.1056/AIoa2501001
Researchers trained an AI model to predict structural heart disease using routine electrocardiograms, using echocardiograms as the reference standard.
🔬 Methods
Dataset/model study
100,000 electrocardiograms (ECGs) from 36,286 adults
Each ECG paired with an echocardiogram result
Labels: presence of moderate or greater structural heart disease (SHD)
📊 Results
About half of patients had structural heart disease.
The model showed good overall detection (AUROC 0.82).
It identified 70% of true cases (missed ~30%).
It correctly ruled out disease in 78% of healthy patients.
Performance varied across specific conditions.
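In screening terms, "identified 70% of true cases" is the model's sensitivity and "ruled out disease in 78% of healthy patients" is its specificity. A minimal sketch with hypothetical counts chosen to match those reported rates (the actual confusion-matrix counts are not given in this summary):

```python
# Hypothetical counts scaled to match the reported 70% sensitivity
# and 78% specificity of the ECG-based SHD model.

tp, fn = 700, 300   # patients with disease: detected vs missed
tn, fp = 780, 220   # healthy patients: ruled out vs falsely flagged

sensitivity = tp / (tp + fn)   # share of true cases identified
specificity = tn / (tn + fp)   # share of healthy correctly ruled out

print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")
```

At these rates, roughly 3 in 10 true cases would still be missed, which is why the takeaways below frame the model as a screening signal rather than a diagnostic tool.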

🔑 Key Takeaways
The model learns to predict echo findings from ECG data.
ECG may contain more structural information than used today.
Performance is moderate and not diagnostic.
🔗 Hughes JW, Jing L, Finer J, et al. EchoNext-Mini: A Dataset and Baseline AI Model for Detecting Structural Heart Disease from Electrocardiograms. NEJM AI. 2026;3(5). https://doi.org/10.1056/AIdbp2500516
🦾TechTools
AI platform for post-surgical recovery monitoring.
Tracks symptoms and alerts care teams.
Extends care beyond hospital discharge.
Combines AI scribe, clinical assistant, and EHR workflow into one platform.
Generates structured notes and automates follow-up tasks.
Moves from documentation to a clinical operating system.
📈 Productivity AI Tool of the week:
Develops high-quality generative AI models for images with strong realism and control.
Useful for creating visuals, diagrams, and educational content quickly.
Offers flexible control over style, composition, and detail for content creation.
🧬AIMedily Snaps
ChatGPT for Clinicians is now available for free to verified individual clinicians in the U.S. (Link).
OpenAI introduces GPT‑Rosalind for life sciences research (Link).
Google and the J&J Foundation are launching a $10 million initiative to train rural U.S. healthcare workers in AI (Link).
Abridge expands clinical decision support solution with UpToDate partnership, new NEJM, JAMA content tie-ups (Link).
Forbes AI 50 2026 list (Link).
Gemini adds mental health tools to improve safety (Link).
🧪Research Signals
Nature: Show us the evidence for the value of medical AI (Link).
Nature: Is AI actually improving healthcare? (Link).
JMIR: Quality of Clinical Notes Created by Ambient Listening Generative AI: Pragmatic Prospective Pilot Study (Link).
NEJM: Trust, Scrutiny, or Collaboration? A Performance-Based Framework for Human–AI Interaction in Medicine (Link).
The Lancet: Research priorities for data science and artificial intelligence in global health: an international consensus exercise (Link).
JAMA: Changes in Clinician Time Expenditure and Visit Quantity With Adoption of Artificial Intelligence–Powered Scribes (Link).
🧩TriviaRX
In the 1950s, a physician noticed that premature infants were going blind at unusually high rates in hospitals.
What routine medical practice was later identified as the main cause?
A) Excess oxygen therapy
B) Early antibiotic use
C) Incubator light exposure
D) Vitamin deficiency
Now, the answer from last week ✅ D) All of the above
👉 Retinal fundus photos contain subtle vascular signals that allow AI models to estimate systemic risk factors, making the eye a potential noninvasive biomarker for cardiovascular disease.
That’s it for today.
As always, thank you for taking the time to read.
You’re already ahead of the curve in medical AI — don’t keep it to yourself. Forward AIMedily to your colleagues who’d appreciate the insights.
Until next Wednesday.
Itzel Fer, MD PM&R
Forwarded this email? Sign up here
P.S. Enjoying AIMedily? 👉 Write a review here (it takes less than a minute).