Hi!

Welcome to AIMedily.

Going to live events is one of my favorite things to do, and this week was a fun one. I got to see Hamilton with my daughter and Bruno Mars in concert.

There is something special about watching people who are truly excellent at their craft — the timing, the preparation, the emotion, and the way everything comes together in real time.

Medicine is very different, of course. But in its own way, there is still art in it: judgment, timing, intuition, and humanity.

And as AI becomes part of medicine, I keep thinking about how we use these tools well without losing that human side.

Let’s dive into today’s issue.

🤖 AIBytes

Researchers built SurvivEHR, an artificial intelligence model trained on UK primary care records. The goal was to learn patient health patterns over time and predict future diagnoses, medications, tests, and death risk.

🔬 Methods

  • Population: 26.5 million patients from 1,478 general practices.

  • Training data: More than 7.6 billion coded health events, including diagnoses, medications, and tests.

  • Model: SurvivEHR was designed to predict the type and timing of the next health event.

  • Clinical scope: The model included 263 event types: 74 long-term conditions, 81 medication classes, and 108 test types.

📊 Results

  • SurvivEHR predicted the next health event better than simpler methods.

  • After fine-tuning, it also outperformed comparison models for 5-year risk prediction after a type 2 diabetes diagnosis.

  • It performed best when predicting future hypertension, cardiovascular disease, and new long-term conditions.

  • The model needed fine-tuning for longer-term prediction.

  • Pretraining was most helpful when smaller datasets were used, especially below 100,000 patients.

🔑 Key Takeaways

  • SurvivEHR shows how foundation models may help predict health risks from long-term electronic health records.

  • Its main strength is learning from years of patient history, not just one visit or one disease.

  • It may help researchers build better tools for patients with multiple chronic conditions.

  • It is not ready for clinical use without further validation, especially outside UK primary care.

🔗 Gadd C, Gokhale K, Acharya A, et al. SurvivEHR: a competing risks, time-to-event foundation model for multiple long-term conditions from primary care electronic health records. npj Digit Med. 2026. doi:10.1038/s41746-026-02709-z

Stanford researchers tested an AI system that wrote draft hospital course summaries during real inpatient care at Stanford Health Care.

🔬 Methods

  • Participants: 384 discharges from 331 hospitalized patients

  • AI system: MedAgentBrief using Gemini 2.5 Pro

  • The AI wrote nightly draft summaries from clinical notes

  • Hospitalists reviewed, edited, used, or discarded the drafts

  • Main outcome: physician-reported patient safety risk

📊 Results

  • The AI generated 1,274 summaries

  • AI text was used in 57% of discharge notes

  • Physicians rated safety for 100 summaries

  • 88% had no harm potential

  • 1 summary was rated as likely to cause moderate harm

  • No summaries were rated as severe harm or death

  • Main problems were:

    • omissions (25%)

    • inaccuracies (20%)

    • hallucinations (2%)

  • Burnout scores improved (1.75 to 1.20; P=.03)

  • Time savings were small and not statistically significant

🔑 Key Takeaways

  • AI summaries were used in more than half of discharge notes

  • Most reviewed summaries had low safety risk

  • Missing information and inaccuracies were more common than hallucinations

  • AI may help with documentation burden, but physicians still need to review the output

🔗 Grolleau F, Liang AS, Keyes T, et al. Physician-reported safety outcomes of AI-generated hospital course summaries. JAMA Netw Open. 2026;9(5):e2616556. https://doi.org/10.1001/jamanetworkopen.2026.16556

🦾TechTools

  • Toku uses AI to analyze retinal images for signs of disease risk beyond the eye.

  • One of its tools, MyKidneyAI, is being developed to detect elevated chronic kidney disease risk in people with diabetes from routine eye images.

  • It may become a practical window into earlier risk detection for kidney, cardiovascular, and metabolic disease.

  • An AI search assistant for medical questions.

  • You can ask a clinical question, and it looks across papers, guidelines, FDA information, and clinical trials.

  • What I like is that the answers are traceable. In medicine, getting a quick answer is helpful — but being able to check the source is just as important.

📈This week’s productivity tool:

  • A simple task manager at its core, but its newer AI features make it more useful.

  • It can take messy inputs — notes, emails, PDFs, images, or whiteboard photos — and turn them into organized tasks.

  • If you’re juggling clinical work, research, writing, and life, this is the kind of tool that can help you reduce mental clutter.

🧬AIMedily Snaps

This is what’s been happening in AI in medicine this week:

  • Google DeepMind introduces an AI co-clinician model for healthcare (Link).

  • Perplexity adds premium medical sources from NEJM and BMJ. (Link)

  • Doximity introduces a Clinical AI Suite for documentation and clinical workflows (Link).

  • FDA expands its AI tools with Elsa 4.0 and a new data platform (Link).

  • Harvard and Beth Israel Deaconess study tests AI in emergency diagnosis and triage (Link).

  • Mayo Clinic AI detects early signs of pancreatic cancer on routine CT scans (Link).

🧪Research Signals

Papers worth your attention this week:

  • Nature: Clinician engagement shapes the impact of AI-based ECG screening for chronic liver disease in primary care (Link).

  • NEJM AI: Ambient AI in Clinical Practice — The Legal Landscape of Recording Consent Requirements. Consent for ambient scribes is not just a workflow issue; it is also a legal and trust issue (Link).

  • Nature: PASTEC, an open clinical infrastructure for AI in cardiac remote monitoring. Built to collect and validate AI-ready cardiac data inside real workflows (Link).

  • JAMA: Promoting Clinical Expertise in the Age of AI – No Struggle, No Mastery (Link).

  • Nature: What the AI era doctor should know. A scoping review mapped AI competencies for medical education into 7 domains and 37 competencies (Link).

🧩TriviaRX

Before pulse oximetry became routine, how did clinicians often detect low oxygen levels?

A) By waiting for visible cyanosis or checking arterial blood gases
B) By estimating oxygenation from respiratory rate and work of breathing
C) By using intermittent end-tidal carbon dioxide measurements
D) By relying on blood pressure and heart rate changes during hypoxemia

Now, the answer from last week’s TriviaRX: C) Blood transfusion

The first direct blood transfusion to a human was performed in 1667 by Jean-Baptiste Denis. Early results seemed promising, but deaths followed, and the procedure was banned until blood groups were understood much later.

That’s it for today.

As always, thank you for taking the time to read.

If this helped you think more clearly about where AI in medicine is actually going, share AIMedily with a colleague.

Until next week.

Itzel Fer, MD PM&R

Follow me on LinkedIn | Substack | X | Instagram

Forwarded this email? Sign up here

P.S. Want to help me? 👉 Write a review here (it takes less than a minute).

How did you like today's newsletter?
