This newsletter you couldn’t wait to open? It runs on beehiiv — the absolute best platform for email newsletters.
Our editor makes your content look like Picasso in the inbox. Your website? Beautiful and ready to capture subscribers on day one.
And when it’s time to monetize, you don’t need to duct-tape a dozen tools together. Paid subscriptions, referrals, and a (super easy-to-use) global ad network — it’s all built in.
beehiiv isn’t just the best choice. It’s the only choice that makes sense.
Hi!
Welcome to AIMedily.
Are you ready for the holidays? Honestly, I’m not. My family is traveling to New York City, and I still have presents to buy, suitcases to pack, and plans to prepare.
But since it’s one of my favorite times of the year, I’ve been working on a small gift I’ll share with you next week 🎁.
For now, let’s dive into today’s issue.
🤖 AIBytes
Researchers from Stanford, Harvard and Kaiser Permanente studied how physicians enter clinical information into LLM chatbots—and whether different inputs affect clinical reasoning performance.
🔬 Methods
Participants:
22 U.S. physicians interviewed (mostly inpatient and academic)
Chat logs from 67 physicians across two randomized controlled trials
LLMs: GPT-4
Tasks: Clinical vignettes based on real patient cases.
📊 Results
Physicians used four main input approaches:
Copy-paster: pastes the entire case
Selective copy-paster: pastes only selected sections
Summarizer: writes a short case summary in their own words
Searcher: uses brief, search-style queries
Most common approaches: copy-pasting and searching.
No single input approach was associated with higher clinical reasoning scores on diagnostic or management tasks.
Physicians described clear tradeoffs:
Copy-pasting → more complete but often longer outputs
Summarizing → more cognitive engagement, but risk of missing details

🔑 Key Takeaways
Providing more information to an AI chatbot did not improve clinical reasoning performance.
How physicians filter and use the information appears more important for clinical decision making.
Different input styles reflect personal reasoning strategies.
Training clinicians on AI should focus on judgment, interpretation, and task alignment, not prompt length.
Future research should study prompt styles in controlled settings to identify which strategies are most effective.
🔗 Siden R, Kerman H, Gallo RJ, et al. A typology of physician input approaches to using AI chatbots for clinical decision-making. npj Digital Medicine. 2025. doi:10.1038/s41746-025-02184-y.
Researchers tested whether GPT-4 can spot and fix biased language in emergency department notes.
They measured accuracy, looked for patterns linked to biased wording, and checked if revised text was acceptable to clinicians.
🔬 Methods
Design: Retrospective study comparing GPT-4 detections with human labels.
Data:
50,000 Emergency Department notes from Mount Sinai.
500 discharge summaries from an Intensive Care Database.
Bias types:
Discrediting
Stigmatizing/labeling
Judgmental
Stereotyping
Evaluation:
Two human reviewers checked GPT-4’s outputs.
They tested links between bias and clinical factors, such as emergency department visits, substance-use visits, shift timing, and provider role.
Revision check: Physicians scored GPT-4’s replacement language on a 10-point scale.
📊 Results
Model performance
Sensitivity: 97.6%
Specificity: 85.7%
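For anyone who wants to see what these two metrics actually measure, here is a minimal sketch using hypothetical confusion-matrix counts chosen only for illustration (they are not the study’s underlying data):

```python
# Hypothetical counts for illustration only -- NOT the study's data.
tp, fn = 41, 1    # biased notes correctly flagged / missed by the model
tn, fp = 90, 15   # unbiased notes correctly passed / wrongly flagged

sensitivity = tp / (tp + fn)   # share of truly biased notes the model catches
specificity = tn / (tn + fp)   # share of unbiased notes the model leaves alone

print(f"Sensitivity: {sensitivity:.1%}")  # 97.6%
print(f"Specificity: {specificity:.1%}")  # 85.7%
```

In screening terms: high sensitivity means few biased notes slip through, while the lower specificity means some neutral wording still gets flagged for human review.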
Prevalence of bias
6.5% of Mount Sinai notes.
7.4% of Intensive Care Database notes.
Factors linked to biased wording
Frequent emergency department use
Substance-use presentations
Overnight shifts
Provider type (physicians > nurses)
Housing status
Undomiciled patients: 27.0% bias prevalence
Domiciled patients: 5.5%

🔑 Key Takeaways
GPT-4 detects biased phrasing with accuracy close to that of human reviewers.
Bias appears more often in notes for undomiciled patients, frequent ED users, and substance-use visits.
Physicians produce biased wording more often than nurses.
GPT-4 offers clear, acceptable alternative wording, which may help improve note quality.
This method creates a scalable way to monitor and reduce biased documentation.
🔗 Apakama DU, Nguyen KA-N, Hyppolite D, et al. Identifying Bias at Scale in Clinical Notes Using Large Language Models. Mayo Clin Proc Digit Health. 2025;3(4):100296. doi:10.1016/j.mcpdig.2025.100296.
🦾TechTools
A visualization platform that uses augmented reality to overlay patient-specific anatomy during surgical planning and intraoperative navigation.
Particularly relevant for complex procedures where spatial understanding matters.
Best understood as a decision-support and planning aid.
Explores multiple questions at the same time to make sense of large amounts of information.
Pulls together what matters most, helping you see patterns and priorities quickly.
Ideal for when you need high-level clarity on a topic.
Records, transcribes, and organizes conversations.
Creates searchable summaries.
Not designed for clinical documentation — but useful for administrative, research, and coordination work.
🧬AIMedily Snaps
2026 healthcare AI trends: Insights from experts (Link).
New AI Tool Identifies Not Just Genetic Mutations, But the Diseases They May Cause (Link).
Google DeepMind is strengthening its partnership with the UK government to support AI (Link).
Will AI improve or exacerbate equity in Health Care? (Link).
Can AI chatbots provide informed consent information for oncological surgeries? (Link).
OpenEvidence partners to expand access to emergency medicine resources (Link).
🧩TriviaRX
Which of the following is true about one of the earliest AI systems used in medicine?
A) It was legally required to follow physician recommendations.
B) It outperformed specialists and was never used clinically.
C) Physicians trusted the recommendations more than those of residents.
D) It was approved by the FDA for autonomous use.
Now, let’s see if you got the right answer last week 🥁
✅ B) Electromyography
In 1960, EMG signals were the first real physiological signals ever analyzed with machine-learning methods.
We’re done for today.
As always, thank you for taking the time to read.
I hope you have a wonderful holiday!
Until next week.
Itzel Fer, MD PM&R
Forwarded this email? Sign up here
P.S. Enjoying AIMedily? Help me with a review 👉Here (it takes less than a minute).