Grok-board: the 2025 update (instantly empathize and understand the EHR)

As tech accelerates, I wonder: what has AI done for me lately? Sure, it writes my notes, but does it really help me “grok” this patient? Here is my updated Grok-board design take for EHRs.

I am worried. Just because our whiz-bang technologies can do lots of stuff, I ask: should they? I previously wrote about my Grok-board idea in 2024. In one short year, we have seen ambient notes, chart summarizers, and agents take off.

Here, I still aspire to use an EHR that helps me “grok” the patient. Grok comes originally from Robert Heinlein’s 1961 sci-fi novel “Stranger in a Strange Land” (wow, more than 60 years ago). He coined the term “grok” to mean “instantly empathize and understand.”

In my 2025 version, the left column of the EHR remains intact, with links to many useful parts of an existing EHR patient chart.

What does it mean?

  • Blue Column 1: The Patient story board. Links to patient details: demographics, allergies, primary care doc, care gaps, selected chart items.
  • Green Column 2: This Human: the patient telling us what is important to them, their joys and pressures.
  • Orange Column 3: Homunculus visual problem list and an AI summary of the current status of diseases with any available metrics.
  • Pink Column 4: Insights. Active and suggested AI agents (artificial intelligence entities that can obtain data, then act on predetermined criteria to achieve a general goal), which assess the patient’s top risks and suggest next steps.
  • Blue Column 5: Today: pre-visit questionnaires, solicited patient questions, the last progress note you wrote, and an AI Greek Chorus of somewhat adversarial advisors with suggestions for me.

An ideal Grok-board

should be humane and emphasize the patient’s identity and goals, so a physician can connect human-to-human and communicate more effectively. It should also increase the signal-to-noise ratio of the information presented. It should make it quick to grok and then act. It should prioritize the most important next actions, then make doing the right thing easy.

Thanks to Gregory Makoul for his groundbreaking work on connecting patient values to physician thinking. Let’s continue to debate the psychology of which display gives the quickest, most useful view of the patient. Let’s arm-wrestle, not over sorting of the problem list, but over which display leads to better care of the patient with less cognitive burden and more joy for physicians, the clinical team, and the patient.

Taking it further

Let’s design a video gamer’s chair that helps you use all your senses. What if your left forearm vibrates to warn you of a medication allergy when you’re writing a prescription? What if your right calf feels warm if there is kidney impairment, or your left calf feels warm if there is liver impairment, affecting your next decision? What if your lower back vibrates if you’re about to close a chart with unaddressed care gaps? What if we had smell-o-vision to detect the fruity breath of a patient in ketoacidosis?

CMIO’s take?  We are underusing our senses and our pattern-matching skills. Let’s build a way for our physicians and teams to grok the patient and then make it easy to do the right thing. We must intentionally build our humanity into this future.

AI plus Human: strategies in Radiology (RSNA)

Radiologists, the vanguard of medical AI integrators, are working out strategies for optimal workflow between humans and AI. See what they propose.

https://pubs.rsna.org/doi/10.1148/radiol.250477

This is a great article about human/AI partnerships in radiology. One of the authors is Eric Topol.

We know several things so far about AI/human work in medicine

  • Humans can develop automation bias (radiologist performance gets worse when given suggestions by a poor-performing AI) because they are influenced by the AI’s reading
  • Humans can improve performance if paired with high-performing AI
  • AI outperforms humans when it has very high confidence of “normal” or “abnormal”
  • We can reduce the burden of human work if we can put AI in places where it does best.

So, the authors suggest an AI/human role-separation framework:

  • AI-first model (have the AI comb the chart for relevant data before the radiologist reads the study)
  • Human-first model (the human reads the study; the AI then takes the human’s read and writes the Impression, or takes the human’s report and writes a patient-friendly version)
  • Case Allocation model 1: Rule out Normal (no human read if AI highly confident that study is normal)
  • Case Allocation model 2: Risk-Based Allocation (low- to intermediate-risk cases get a single AI reader only; humans also read the higher-risk cases)
  • Case Allocation model 3: Dynamic Complexity-Based Allocation (if AI is highly confident of normal or highly confident of abnormal, use that to categorize work; doing so reduced human work by 66% and false positives by 25% while keeping case detection the same)
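The allocation models above are, at their core, triage logic. Here is a minimal sketch of what a Dynamic Complexity-Based Allocation rule might look like; the function name, routing labels, and the 0.95 confidence cutoff are all illustrative assumptions, not the thresholds or interfaces used in the actual study.

```python
def triage(ai_confidence_normal: float, ai_confidence_abnormal: float) -> str:
    """Route a study based on AI confidence (thresholds are hypothetical)."""
    HIGH = 0.95  # illustrative confidence cutoff, not from the paper

    if ai_confidence_normal >= HIGH:
        # AI highly confident the study is normal: no human read
        return "auto-finalize as normal"
    if ai_confidence_abnormal >= HIGH:
        # AI highly confident of abnormality: prioritize a human read
        return "fast-track to radiologist"
    # Everything equivocal stays on the usual human worklist
    return "standard radiologist worklist"

# Example: an equivocal study still goes to a human.
print(triage(0.60, 0.30))  # standard radiologist worklist
```

The design point is that humans keep every case the AI is unsure about, which is where the complacency risk is lowest and human judgment matters most.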

Really promising developments in thinking about the Human/AI partnership. My ongoing worries about automation complacency and bias are still there, but I like that smart people are thinking about possible solutions.

Talk and Ukulele: Redesigning Daily Workflow with Digital Tools to Improve Clinician Wellbeing by CT Lin

How’s your summer going? Here’s a talk I gave recently to the Indiana Hospital Association on our current work with generative AI. Yes, it is tough to keep up with the tech acceleration. Optimists and pessimists, there is room for both.

How are you redesigning workflow to improve clinician well-being?

Well, here’s a song about it: EHR Wonderland.

Oh, did you want to hear the actual talk too?

Here’s a talk I gave to the Indiana Hospital Association, describing our Desktop Medicine initiative, our work with Ambient Notes, Inbasket ART, and my worries about Automation Complacency.

Epic UGM 2025 FOMO generator #16. ‘COSMOS AI does not speak English, it speaks Events’ –Karen Wong MD

This is the quote of the meeting. From a chance meeting with Dr. Wong.

The COSMOS AI preprint is available. My mind is blown and I want to know what this means. And I think I won’t know for a while, until the tech bubbles up to something I can get my head around.

https://www.alphaxiv.org/overview/2508.12104v1

I was chatting with Karen Wong, an Epic physician, in the physician lounge in Voyager, bemoaning my struggle to understand what COSMOS AI could do. Then she let this epigram loose:

COSMOS AI doesn’t speak English. It speaks Events.

This is perhaps the quote of the conference. Thank you, Karen. And now, off to read the arXiv article and pretend to understand it.

Dr. ChatGPT will see you now

How do generative AI tools do in giving unofficial medical advice?

https://www.wired.com/story/dr-chatgpt-will-see-you-now-artificial-intelligence-llms-openai-health-diagnoses/

There is a rising tide of patients using LLMs, hallucinations and all, to get quick medical advice. Is it advisable? What do docs think?

I am ambivalent. These tools will continue to improve. How savvy are patients about using them with discretion?

If in the service of an upcoming visit with a doc: sure. If going it on your own, buyer beware! You get what you pay for…

XGM FOMO generator #13. Ambient notes early experience. MemorialCare, CA

MemorialCare in California is deploying Abridge and finding up to 90 minutes of time savings on notes.

Thanks to MemorialCare for sharing your experiences. Sounds like the phased rollout is going well, 30-100 providers at a time.

It even works with a Vietnamese conversation between doctor and patient, translated seamlessly into a finished progress note in English.

Then they worked out how best to design note templates (mandated central control vs. customizable notes), how to consent patients, how to verbalize exam findings, and how to capture stories and quotes from busy, grateful docs. Thanks for a great talk.

XGM FOMO generator #7 (Multicare’s Ambient notes: “Care I should be doing”)

Multicare shares that providers who save time with ambient notes put that time right back into looking at other parts of the EHR and taking care of other patient care tasks. Huh. Signal audit pajama-time metrics don’t move much, but physician satisfaction soars. “I’m doing the work I should be doing.”

XGM FOMO generator #6 (ambient notes have an ROI?!)

Thanks to Legacy’s CMIO Kelley Aurand, friend of the blog, for teaching us how to think about ROI for ambient notes. XGM’s PAC16.

Dr. Aurand’s team is about a year ahead of us on the ambient notes journey: scrabbling for funding, then seeking volunteers (and finding some volun-told), then working out how to measure ROI. They are paving the path we will all have to travel.

Survey data are very positive in favor of using ambient for notes. Sure, but our CFOs want an ROI.

Whoa.

And this is how the sausage was made. Thanks for showing us the way. 

PAC16 is a must-watch. Lots more detail in the XGM slides and talk. Thanks to our smart, thoughtful friends at Legacy Health Care.

AI assistance in grading proposals (HBR)

Here we start to delve into the nuances of how AI can help novices and experts when evaluating the merits of innovative proposals. What comes out of these partnerships?

https://www.library.hbs.edu/working-knowledge/dangers-of-deferring-to-ai

At MIT Solve, Harvard business professors wanted to know if a carefully prompted AI could help sort through 2000 proposals for innovation using subjective criteria.

  • Is an expert human plus AI better than expert human alone?
  • Is a novice human plus AI better than a novice alone?
  • Does a black-box AI answer (a bare yes/no) influence humans less than an AI answer with a narrative explanation?

These are tricky questions and the article gives tricky answers. It seems that humans, both experts and novices, are 12% more likely to defer to an AI. Lots more detail in the article.

These are tricky times. Are we paying attention?

Sepsis, AI and the Centaur. Also a discussion of Automation Complacency at iPractise (CTL talk)

I-PrACTISE – Improving Primary Care Through Industrial and Systems Engineering

Thanks to Dr. Beasley for the invitation to speak in the University of Wisconsin-Madison’s I-PrACTISE lecture series (see above).

I enjoyed speaking with this thoughtful group of clinicians and engineers. See the website above; my talk was on 3/14/25, and you can use the password on that same website to launch the video recording.

In brief:

  1. Predictive analytics and AI with challenging signal-to-noise require us to reconfigure human teams to achieve our goals.
  2. Furthermore, automation effectiveness will always lead to human complacency.

Of course, we discussed a lot more than that. Let’s keep the conversation going!