<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Notes — Jeremy Tabernero, MD</title>
    <link>https://drjeremytabernero.org/notes</link>
    <atom:link href="https://drjeremytabernero.org/notes/rss.xml" rel="self" type="application/rss+xml" />
    <description>Editorial notes on hospital medicine, clinical AI, and teaching the body to patients in a way they can actually understand.</description>
    <language>en-us</language>
    <lastBuildDate>Sat, 09 May 2026 12:00:00 GMT</lastBuildDate>
    <item>
      <title>Building the body in three dimensions: a paradigm for what patients deserve to see</title>
      <link>https://drjeremytabernero.org/notes/the-body-in-three-dimensions</link>
      <guid isPermaLink="true">https://drjeremytabernero.org/notes/the-body-in-three-dimensions</guid>
      <pubDate>Sat, 09 May 2026 12:00:00 GMT</pubDate>
      <category>Patient Education</category>
      <description>If I had unlimited bandwidth at the bedside, I would not give my patients more pamphlets. I would give them a body — labeled, accurate, narrated, and theirs to walk through.</description>
      <content:encoded><![CDATA[<p>Hospital medicine, the way I practice it, is a discipline of plumbing and pumps. Air has to move in and out of alveoli. Blood has to move forward against resistance. Bile has to drain. Urine has to drain. When one of those flows breaks, the patient ends up in a hospital bed talking to me, and the work of the next forty-eight hours is to figure out which pipe is clogged, why, and what we can safely do about it.</p>
<p>The patients I take care of in Duluth are sharp. They want to know what is wrong, in their body, in language they can hold onto when they go home. The current toolkit for explaining that — a printed pamphlet, a 90-second YouTube link, a poorly drawn whiteboard sketch — is not equal to the conversation the disease deserves. Heart failure is not &apos;your heart is weak.&apos; Pneumonia is not &apos;you have a chest infection.&apos; Cholangitis is not &apos;your bile is infected.&apos; These are three-dimensional problems, with anatomy and timing and consequence, and a one-page handout does not honor that.</p>
<p>What I want for my patients is closer to what I have in my head — the kind of mental model a clinician spends a decade building. When I think about acute respiratory distress from pneumonia, I see a tree of bronchi narrowing into bronchioles narrowing into alveoli, and I see the pus and serum filling those alveoli from the bottom up like rain in a bucket, and I see the oxygen molecules that can no longer cross the wall because there is fluid where air should be. I want the patient to see that too, in three dimensions, anatomically correct, with a calm voice telling them what is happening and what each medication is doing about it.</p>
<p>This is the version of clinical AI I am most excited about. Not a microphone in the room transcribing the visit. A trained, validated visualization engine that takes the diagnosis on the chart and produces a precise, anatomically correct, narrated walk-through of the body — at the patient&apos;s reading level, in their language, on a tablet they can take home and replay with their family. Studios like Nucleus Medical Media have spent twenty years building exactly this kind of content for hospitals and pharmaceutical companies. The next generation of these tools will be patient-specific, conversational, and live in the discharge envelope alongside the prescription list.</p>
<p>I am building toward that paradigm on this site, one organ system at a time. The Teaching Lab now includes five new visualizations on the conditions I most often need to explain at the bedside — and that I most often see explained badly:</p>
<p>— Acute respiratory distress from pneumonia. The lungs are an inverted tree, and pneumonia is a flood at the leaves. The visualization shows the alveoli filling with bacterial exudate, the gas-exchange membrane that can no longer do its job, and what we are trying to accomplish with antibiotics, oxygen, and — when it gets bad enough — mechanical ventilation.</p>
<p>— Live intubation. There is a particular fear in the family&apos;s eyes when I say the word &apos;breathing tube.&apos; Most of that fear is the unknown. The visualization shows the sagittal anatomy of the airway, the laryngoscope blade lifting the tongue and epiglottis, the endotracheal tube passing through the vocal cords, and the cuff inflating below the cords to seal the trachea. It is a precise, brief, repeatable procedure. Seeing it removes some of the fear.</p>
<p>— Cardiogenic shock and the seminal DanGer Shock trial. When the heart muscle fails after a massive heart attack, mortality has historically been north of 50%. The DanGer Shock trial (NEJM 2024) was the first randomized trial in more than two decades to show a survival benefit in cardiogenic shock — in this case from a microaxial flow pump (Impella) in selected STEMI patients. The visualization walks through the failing left ventricle, the catheter snaking from the femoral artery up the aorta into the heart, and the way the pump unloads the ventricle while the muscle has time to recover.</p>
<p>— Acute kidney injury from an obstructing stone. The kidneys depend on flow. When a calcium oxalate stone wedges itself in the ureter, urine backs up into the renal pelvis, the pressure climbs, filtration falls within hours, and sustained obstruction begins to destroy nephrons. The visualization shows the obstructed ureter, the dilated collecting system (hydronephrosis), and the back-pressure injury on the nephron itself. It also shows what we do about it — IV fluids, pain control, and a urology procedure to relieve the obstruction.</p>
<p>— Cholangitis and acute liver failure from biliary obstruction. The bile ducts are a drainage system. When a stone blocks the common bile duct, bile backs up into the liver, the pressure rises, bacteria translocate into the bloodstream, and the patient becomes septic within hours. The visualization shows the biliary tree, the obstructing stone at the ampulla, the inflamed liver upstream, and the urgent ERCP that decompresses the system.</p>
<p>There is a sixth visualization I keep coming back to as the paradigm itself: the volume-overloaded heart. When the body retains too much fluid — from heart failure, from kidney failure, from too much IV salt — every chamber of the heart has to push harder against more resistance. The walls stretch. The muscle thins. The pump weakens. Showing the patient a heart inflating like an over-filled balloon, with the stretched walls visibly thinning, communicates the why of every diuretic and every salt restriction in a way that no number on a lab report ever will.</p>
<p>My posture on clinical AI has not changed. I still want the tools that touch my patient to be validated against the patients I actually see. I still keep the microphone off when I am taking a history. But the version of AI that takes a diagnosis and gives the patient back a clearer picture of their own body — that is the version I am going to keep building toward, on this site and in the room.</p>
<p>The body is not a metaphor. It is plumbing and pumps and signaling and repair. It deserves to be taught that way.</p>]]></content:encoded>
    </item>
    <item>
      <title>Why I only adopt clinically validated tools</title>
      <link>https://drjeremytabernero.org/notes/validated-tools-only</link>
      <guid isPermaLink="true">https://drjeremytabernero.org/notes/validated-tools-only</guid>
      <pubDate>Fri, 08 May 2026 12:00:00 GMT</pubDate>
      <category>Clinical AI</category>
      <description>There is nothing courageous about waiting for evidence. It is the baseline of how medicine has always advanced.</description>
      <content:encoded><![CDATA[<p>There is a particular kind of pitch I get more of every month. A vendor at a conference, a LinkedIn DM, a system-wide email about a transformative new platform. The product is always elegant. The dashboard is always beautiful. The pilot data is always promising. And almost none of it has been validated against the population of patients I actually take care of in Duluth.</p>
<p>I don&apos;t say this as a Luddite. I say it as someone who has watched well-meaning hospital systems deploy clinical-decision tools that quietly worsened care for the patients they were supposed to help. The Epic Sepsis Model, deployed at hundreds of hospitals before a 2021 external validation put its sensitivity at roughly 33%. Risk calculators built on cohorts that don&apos;t look like rural Minnesota. Sleep-apnea screening tools applied to demographic groups they were never trained on.</p>
<p>The pattern is consistent: a product looks impressive in its training environment, gets purchased on the strength of a slide deck, and meets a real population whose data it was not designed for. The patient at the bedside does not know the model was trained at an academic medical center in California. They only know that something happened — or didn&apos;t — because of a number on a screen.</p>
<p>So my rule is simple. Before I rely on any clinical tool — AI or otherwise — I ask three questions:</p>
<p>1. Was it validated on a population that resembles mine?</p>
<p>2. Has the validation been published, peer-reviewed, and replicated?</p>
<p>3. Who owns the workflow when the model is wrong?</p>
<p>If a vendor cannot answer all three plainly, the answer is not yet. Not no — not necessarily — but not yet, and not in the room with my patient.</p>
<p>There is nothing courageous about waiting for evidence. It is the baseline of how medicine has always advanced. We do not adopt a new chemotherapy because the molecule is interesting. We adopt it because a randomized trial showed it works, and then we watch the post-market data closely. The bar for software in the clinical encounter should be no lower.</p>
<p>The argument I sometimes hear is that AI moves too fast for traditional validation — that by the time a study is published, the model has been retrained twice. I have sympathy for the engineering reality there. But the answer is not to lower the bar; it is to design new validation methods that match the cadence of the technology. Continuous evaluation, transparent versioning, real-world performance dashboards, model cards in the chart. None of this is exotic. It just requires that the people building the tools take the responsibility seriously.</p>
<p>What I refuse to do is accept the trade where my patient becomes part of someone&apos;s silent A/B test.</p>
<p>There is a quieter cost to early adoption that vendors do not mention: trust. Patients are not stupid. They notice when the screen knows more than the doctor. They notice when the doctor stops looking at them. They notice when the medication recommendation has a logo on it. Every time a clinical tool fails them — even invisibly — it costs us a little bit of the relationship that the entire practice rests on.</p>
<p>Validated tools, used carefully, can return that trust with interest. An imaging algorithm that catches a small bleed I might have missed is a good thing. A risk score that sharpens an already-considered decision is a good thing. A documentation assistant that lets me look up at the patient instead of down at the keyboard could be a very good thing — when it works, when it&apos;s been validated, when its limits are documented.</p>
<p>That last one, ambient AI scribes, is where my line is currently drawn. I&apos;ve written about that at length in a separate note.</p>
    </item>
    <item>
      <title>The case against ambient AI scribes at the bedside</title>
      <link>https://drjeremytabernero.org/notes/against-ambient-scribes</link>
      <guid isPermaLink="true">https://drjeremytabernero.org/notes/against-ambient-scribes</guid>
      <pubDate>Tue, 05 May 2026 12:00:00 GMT</pubDate>
      <category>Clinical AI</category>
      <description>Ambient AI scribes are coming for the patient encounter, and the conversation about them has been almost entirely one-sided.</description>
      <content:encoded><![CDATA[<p>Ambient AI scribes are coming for the patient encounter, and the conversation about them has been almost entirely one-sided.</p>
<p>The pitch is seductive. A microphone on the wall captures the visit. A model produces a draft H&amp;P, problem list, billing codes, and patient summary before the doctor leaves the room. Documentation time drops 30 to 60 percent in published pilots. Burnout improves. Throughput goes up. Major systems — The Permanente Medical Group, Mass General Brigham, HCA Healthcare, UNC Health — have already rolled out scribes to thousands of clinicians. It is not hypothetical anymore.</p>
<p>I am not going to use one. Not at the bedside. Not yet, and probably not in the form they are currently sold. Here is why.</p>
<p>The patient did not consent to a transcript. When I take a history, the patient is telling me things they have not told their spouse. They are telling me about a substance use history they will deny on a form. They are telling me they think their daughter is being abused. They are telling me they want to stop chemotherapy without telling their family yet. The professional contract is that what they say to me, in that room, becomes a clinical note — not a verbatim audio file processed by a third-party language model and stored on a server I have never audited. The consent form they signed at registration does not, in any meaningful sense, capture what is happening when an LLM is in the room.</p>
<p>The note is not the artifact. The encounter is the artifact. A good progress note is a synthesis. It reflects what mattered out of forty minutes of conversation, weighted by judgment that took a decade to build. An ambient scribe produces a transcript-flavored document — comprehensive, plausible, frequently wrong about which detail mattered. I have read enough AI-drafted notes now to recognize the pattern: the chief complaint is technically accurate, the assessment is hedged into uselessness, and the plan reads like the bullet-point version of every plan that has ever existed. Editing it into a real note takes nearly as long as writing one from scratch — and now I have also surrendered the cognitive act of writing, which is half of how I think through the case.</p>
<p>The accountability vanishes. When I write a note and sign it, I am the author. When I sign an AI-drafted note, the legal accountability is mine but the cognitive authorship is not. This is a category we have not had in medicine before, and it is being normalized at scale before any of us have figured out how it should be governed. If a model hallucinated a diagnosis I did not consider, and I signed it, and the patient was harmed — what is my defense? &quot;The system drafted it&quot; is not a defense. It is an admission that I outsourced the most important sentence in the chart.</p>
<p>Patients can tell. The data on this is preliminary, but the qualitative reports are consistent: patients notice when their doctor is no longer mentally writing the note in their head while listening. They notice when the doctor relaxes into the conversation in a way that feels like talking to a friend instead of a physician. Some of them like it. Many of them, including the older patients I see most days, find it slightly disorienting — and a small number find it actively alarming once they understand what is happening.</p>
<p>To be clear: I think the underlying technology is real, and I think a future version of this — one with patient-controlled consent, on-device processing, transparent retention, validated accuracy on the populations who actually use it, and audit trails I can inspect — could be a meaningful improvement. The right answer is not to refuse it forever. The right answer is to refuse it now, on these terms, and to be specific about what would change my mind.</p>
<p>It is also worth being precise about what &quot;ambient AI&quot; actually is, because the branding is doing a lot of work. The speech-to-text layer underneath most of these scribe products is OpenAI&apos;s Whisper — an automatic speech recognition model released as open source in September 2022, years before ambient scribes became a category vendors could charge per seat for. Whisper is good. It has been good for a while. The novel work in an ambient scribe is not the microphone; it is the large language model that turns a transcript into a SOAP note, the consent flow that wraps the recording, the storage policy on the audio, and the workflow that decides who reviews the draft. When the marketing asks &quot;is AI ready for medicine?&quot; it is collapsing four very different questions into one slogan. Whisper-grade transcription has been ready since 2022. The summarization model trained on someone else&apos;s notes, the audit trail, and the per-patient consent — those are the questions that have not been answered, and those are the ones I am refusing on.</p>
<p>What would change my mind:</p>
<p>— Per-encounter, opt-in patient consent that is more than a checkbox at registration.</p>
<p>— Local or on-prem processing with no third-party retention by default.</p>
<p>— Published validation of note accuracy on the real distribution of patients I see, including elderly, multilingual, and chronically ill populations who are systematically underrepresented in pilot data.</p>
<p>— A clear governance policy at my institution about who reviews drafted notes, who is responsible for errors, and how disagreements between the model and the clinician are tracked.</p>
<p>Until those exist, the microphone stays off when I&apos;m in the room.</p>]]></content:encoded>
    </item>
    <item>
      <title>AI as teaching, not transcription</title>
      <link>https://drjeremytabernero.org/notes/ai-as-teaching</link>
      <guid isPermaLink="true">https://drjeremytabernero.org/notes/ai-as-teaching</guid>
      <pubDate>Fri, 01 May 2026 12:00:00 GMT</pubDate>
      <category>Patient Education</category>
      <description>If ambient scribes are the version of clinical AI I am most cautious about, patient-facing visualization is the version I am most excited about.</description>
      <content:encoded><![CDATA[<p>If ambient scribes are the version of clinical AI I am most cautious about, patient-facing visualization is the version I am most excited about.</p>
<p>Here is the asymmetry. A scribe processes the patient. A teaching tool processes the disease, and gives it back to the patient in a form they can actually understand.</p>
<p>I take care of patients with congestive heart failure. Properly staged, CHF carries a prognosis worse than many common cancers; five-year survival hovers around 50 percent. And yet the explanation a typical patient gets when they leave the hospital is some version of &quot;your heart is weak, take these pills and watch your salt.&quot; This is not because doctors are lazy. It is because there is no time, and because the available tools — a printed handout, a 90-second YouTube link, a poorly drawn whiteboard sketch — are not good enough to do the work the conversation actually requires.</p>
<p>What I want for my patients is closer to what I have in my head. When I explain heart failure, I do not see a list of medications. I see a pump that is being asked to push too much volume against too much resistance, which causes the chamber to stretch, which causes the muscle to thin and weaken, which causes the kidneys to retain more salt because they think the body is dehydrated, which adds more volume, which stretches the chamber further. It is a feedback loop that lives in three dimensions, and it is what I am trying to interrupt with every dose of furosemide and every conversation about salt.</p>
<p>I cannot draw that on a napkin in the time I have. But a good 3D animation — anatomically correct, accurately scaled, narrated in the patient&apos;s language — can. Studios like Nucleus Medical Media have spent years building exactly this kind of content, validated and used by hospitals across the country for patient education. The animations are not gimmicks; they are pedagogy that respects how human beings actually learn.</p>
<p>The teaching lab on this site is a curated collection of those visualizations, organized around the conditions and decisions I most often need to explain at the bedside: how GLP-1 medications change the body, what end-of-life care after a devastating stroke actually looks like, why muscle disappears in hospitalized patients and how it is rebuilt, how the immune system remembers a pathogen for a lifetime, what vaping is doing to a generation of lungs, the trajectory of alcohol-associated liver disease, and end-of-life care for chronic disease. Each one is paired with a short note from me — what I want the patient to take away, what the visualization gets right, and where the analogy breaks down.</p>
<p>This is what I think AI&apos;s near-term role in medicine should look like. Not standing between the doctor and the patient, but standing behind both of them, holding up a clearer picture of the body so the conversation can be a real one.</p>]]></content:encoded>
    </item>
  </channel>
</rss>
