Predicting Sepsis and Virtual Health Center at UCHealth: News. Colorado Sun

Saving lives at UCHealth: a combination of predictive analytics (AI) and a dedicated team: the Virtual Health Center nurses. Come see how the sausage is made (kinda cool)

 

Can AI improve health care? Doctors at UCHealth are trying to find out.

Thanks to John Ingold and the Colorado Sun for highlighting our ongoing work to defeat sepsis at UCHealth using predictive algorithms and the Virtual Health Center (VHC). I appreciate my colleague Amy Hassell for the outstanding team she leads in this work.

Together we have reduced mortality by the equivalent of 800+ lives saved per year from sepsis and other in-hospital deteriorations.

We have evolved our internal process over time. We began in 2018 by showing all the predictive alerts directly to the bedside team. No change in outcomes.

Then we positioned the Virtual Health Team as a back-up service to the primary team. Slight improvement (200 more lives saved per year over baseline).

Now, we have the Virtual Health Team as primary service, both detecting deterioration and taking direct action, with the patient’s primary bedside team in the background. This dramatically improves speed and consistency of response to a complicated disease requiring a coordinated approach: now 800+ more lives saved per year from in-hospital deterioration.

We are happy with our internal improvements and are always hungry for more opportunities. Thanks to Amy and the amazing VHC.

TikTok Education Strikes Again: Be a Haiku Hero!

I have always wanted to wear a cowboy hat, pink with flashing lights. And now, my dream has come true. Here is our latest 60-second education video on how to configure your Epic Haiku iPhone secure chat settings for success.

I am getting into ultra-short form education. One minute to jam-pack a bunch of ideas into a quick (and hopefully entertaining) video. Here’s how I built this:

  1. Set yourself an unrealistic expectation to teach sophisticated Secure Chat settings to physicians / APP’s in 60 seconds.
  2. Broach the TikTok video idea to disbelieving and pessimistic informatics colleagues
  3. Turn off the computer, clear your desk, take out a yellow paper pad and sketch out a storyboard for how this will go, six frames to a page. Sit with head-in-hands, thinking “Make it shorter! Make it funnier! But How?!”

 

  4. From that, write up 5-second video scripts and clothing and prop requirements for each video.
  5. Text your disbelieving colleagues at the last minute, the morning of your next Large PIG (physician/APP informatics group) in-person meeting, to ask for a patient gown, scrubs, a white coat, and other stuff like a cowboy hat, a jack-in-the-box, whatever they can steal from their kids that morning.
  6. Run the (exhausting for an introvert) 2-hour Large PIG meeting in the boardroom, then recruit a handful of colleagues to stay late and shoot 5-second videos until the shot-list is done.
  7. Block out the calendar for an entire day of struggle, fire up Final Cut Pro on the laptop, open YouTube to learn how to use Final Cut, and stumble your way to an amateur production of BE A HAIKU HERO.
  8. Show your colleagues the result. Ignore the fake vomiting noises and keep going anyway.
  9. Ask Epic Wisconsin nicely for permission to use a screenshot, and post it on YouTube and the blogosphere.
  10. Apologize to your marketing department for the completely amateurish nature of the resulting video.

See? Totally simple.

Here you go.

Will Your Next Doctor Be … A Bot? (SunFest) with bonus uke song

What happens when you put a news reporter, an AI researcher, a bioethicist, and a CMIO together to discuss AI, chatbots, bias, and emerging trends? You get this highly interactive and entertaining panel. And maybe a song.

Thanks to the Colorado Sun, and XCEL Energy for sponsoring our panel on AI in Healthcare at SunFest, held in Denver on the Auraria Campus of the University of Colorado.

I very much enjoyed this conversation with my colleagues at the University of Colorado, including Dr. Casey Greene, Director of the Center for Health AI, and Dr. Matthew DeCamp, bioethicist at the Center for Bioethics and practicing general internist.

Among other topics, we covered:

  • AI, Large Language Models and Chatbots, defined
  • Predictive analytics and how they’re different from Chatbot AI
  • The potential dark side of AI in healthcare
  • Using ChatGPT-like tools to summarize electronic health records, to help doctors write progress notes, and to help physicians, physician assistants, nurse practitioners and nurses reply to patients via online messages.
  • Risks of automation, including Automation Complacency
  • The risk of hidden bias in AI, and how that compares with existing bias in healthcare today
  • Future plans for AI in healthcare

Listen to the end for an updated version of “Hospital of the Rising Sun – Pandemic Edition” with me and my trusty ukulele.

SunFest 2023: Watch every session with Colorado politicians, expert panels and more

Advances in PGx (Pharmacogenomic or Drug-gene interaction) at UCHealth (guest bloggers Dr. Christina Aquilante and Dr. David Kao)

Pharmacogenomics is advancing quickly: we can now warn prescribers in the EHR when patients have genomic variants that reduce a medication's effectiveness. We are moving from screening populations (18,000 participants so far) to anticipatory screening for high-risk patients (for example, cancer center patients about to choose a chemotherapy). Cool.

Previously, at the Colorado Center for Personalized Medicine…  

In December 2021, our heroes (CCPM in partnership with UCHealth) began releasing clinical pharmacogenetic test results for CYP2C19 and SLCO1B1 to the Epic electronic health records for CCPM biobank participants.

Eighteen months later, our program has flown to new heights.  We have returned results to over 18,000 biobank participants, which have impacted the care of over 2,600 patients.  We have expanded our program to include an additional 5 PGx genes (DPYD, TPMT, NUDT15, CYP2C9, ABCG2), 4 of which went into production the last week of April.  Altogether, these genes impact the effects of 30 different medications ranging from antidepressants to anti-inflammatories to chemotherapies!   
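To make the idea concrete, here is a minimal sketch of how a drug-gene interaction check like the ones described above might work. The rule table, phenotype labels, and function names are invented for illustration; they are loosely modeled on well-known CYP2C19 and SLCO1B1 associations, not on CCPM's or UCHealth's actual alert logic.

```python
# Hypothetical drug-gene interaction check. PGX_RULES maps a
# (gene, phenotype) pair to medications whose effect it alters.
PGX_RULES = {
    ("CYP2C19", "poor metabolizer"): {"clopidogrel"},
    ("SLCO1B1", "decreased function"): {"simvastatin"},
}

def pgx_alerts(patient_phenotypes, new_rx):
    """Return alert strings if a new prescription hits a known drug-gene rule."""
    alerts = []
    for gene, phenotype in patient_phenotypes.items():
        affected = PGX_RULES.get((gene, phenotype), set())
        if new_rx.lower() in affected:
            alerts.append(f"{gene} {phenotype}: consider alternative to {new_rx}")
    return alerts

# Example: a biobank participant with a CYP2C19 poor-metabolizer result
patient = {"CYP2C19": "poor metabolizer"}
print(pgx_alerts(patient, "clopidogrel"))
```

The real work, of course, is not the lookup but curating the rules, returning lab-grade results to the chart, and deciding when the prescriber should actually be interrupted.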

  

Meanwhile, back at CCPM headquarters…  

Our heroes continue to return high impact genetic variants with potentially life-changing and life-saving impacts for biobank participants and just as importantly, their families.  Our biobank lab and genetic counselor team have returned results for around 30 of these genes to over 250 patients.  As a result of this effort, many patients have been referred to specialists for evaluation and monitoring to identify and treat any concerning conditions as early as possible. In many cases, participants’ siblings and even children are also being tested, often when they otherwise wouldn’t have, giving them the power to battle the villains of genetic disease.     

 Join us next time…  

When we begin performing clinical-first tests for chemotherapies used to treat certain kinds of cancer and medications to reduce the side effects of chemo.  This will be our biggest challenge yet, adding an additional 2 genes, including CYP2D6, which has the potential to affect over 20 medications that treat a host of different conditions. We will start returning non-PGx results to the EHR electronically as well and use invisible data science superpowers within the EHR to identify UCHealth patients most likely to benefit from pre-emptive pharmacogenetic testing. 

CCPM and UCHealth were leading the charge toward use of genetics for clinical care 18 months ago, and our program has grown exponentially since then.  Thank you for joining us for the next phase of our adventure!  

(Photo by Patrick Campbell/University of Colorado)

Christina Aquilante, PharmD
Professor, Department of Pharmaceutical Sciences
University of Colorado Skaggs School of Pharmacy and Pharmaceutical Sciences
Director of Pharmacogenomics, Colorado Center for Personalized Medicine

David Kao, MD
Associate Professor of Medicine
Divisions of Cardiology and Biomedical Informatics/Personalized Medicine
University of Colorado School of Medicine
Medical Director, Colorado Center for Personalized Medicine
Medical Director, CARE Innovations Center, UCHealth

Podcast: Designing for Health: Do patients want to see their test results immediately? (hint: 96% say yes)

Come join us! Dr. Bryan Steiz (first author), Liz Salmi (Chief Patient Informaticist), and I discuss our recent publication on patients accessing their test results online BEFORE their doctor can inform them. This poses a host of gnarly questions that had no data and no answers … UNTIL NOW.

 

https://t.co/Tlk1a9hj0c

 

AI and reinventing learning in health systems (Beckers podcast, with ME!)

In which Bobby Zarr and I discuss the future of learning in health systems, with AI embedded in tools like our Learning Assistant (our internal brand for the education we build with uPerform).

https://www.beckershospitalreview.com/podcasts/podcasts-beckers-hospital-review/ai-and-the-future-of-ehr-training—with-uperform-118045831.html

 

Blowing up the training classroom; also putting Roadsigns on the Highway in the EHR

Aren’t you frustrated with EHR usability? Don’t you wish you could see just-in-time guidance in the EHR? If the US Highway system can put signs on the highway, why can’t we?

From Dall-E image creation via Bing.AI browser

AMIA Panel: Signs on the roadway with Dr. CT Lin and Dr. Ryan Walsh

I enjoyed our panel discussion, encompassing two related topics:

  1. Replacing the old model of onboarding classroom training for new physicians/APP's/nurses/MA's/staff (8 to 24 hours of it) with self-paced learning modules that follow simulation training and adult learning principles
  2. Hacking the EHR to insert tips and tricks just-in-time, right where we anticipate our EHR users (physicians, APP's, etc.) will get stuck on more challenging tasks. Or as we call it, Putting Signs on the Roadway.

From Dall-E via Bing.AI

CMIO’s take? We have found success with our technology innovation partners, uPerform and Amplifire. Click the link to learn!

Automation Complacency, The Stepladder of AI in EHR’s, “Writing a note is how I think”. WHAT NOW?

A navel-gazing reflection on GPT, human cognitive effort, and the stepladder to the future. Where do YOU stand?

The image above generated by DALL-E embedded in the new BING, with the prompt “Doctors using a computer to treat patients, optimistic futuristic impressionistic image”. Wow. Not sure what the VR doctor coming out of the screen is doing.

Thanks to Dr. Brian Montague for prompting this post with his quote during a recent Large PIG meeting:

I find that I do a lot of my thinking when I write my progress note. If/when ChatGPT starts to write my note, when will I do that thinking?  — Brian Montague MD

That stopped me in my tracks.

We are so hell-bent on simplifying our work and reducing our EHR burden that we sometimes forget this work is MORE than just pointing, clicking and typing.

It is also about THINKING. It is about assembling the data, carefully coaxing information and patterns out of our patients through skillful interview, parsimonious lab testing, and careful physical examination. It is how we, as physicians and APP’s, use our bodies and minds to craft an image of the syndrome, the disease: our hidden opponent.

Just like inserting a PC into the exam room changed dynamics, inserting GPT assistants into the EHR causes us to rethink … everything.

Pause to reflect

First, I think we should recall the technology adoption curve.

I fully acknowledge that I am currently dancing on the VERY PEAK of the peak of over-inflated expectations. Yes. That’s me right at the top.

Of concern, viewing the announcements this week from Google, Microsoft, and many others gives me chills (sometimes good, sometimes not) of what is coming: automated, deep-fake videos? Deep-fake images? Patients able to use GPT to write “more convincing” requests for … benzodiazepines? opiates? other controlled meds?

AND YET, think of the great things coming: GPT writing a first draft of the unending Patient Advice Requests coming to doctors. GPT writing a discharge summary based on events in a hospital stay. GPT gathering data relating to a particular disease process out of the terabytes of available data.

And where do we think physician/APP thinking might be impacted by excessive automation?

Automation Complacency

I refer you back to my review of the book “The Glass Cage” by Nicholas Carr. As I said before, although it was written to critique the aircraft industry, I took it very personally, as an attack on my whole career. I encourage you to read it.

In particular, I found the term “automation complacency” a fascinating and terrifying concept: a user who benefits from automation will start to attribute MORE SKILL to the automation tool than it actually possesses, a COMPLACENCY of “don’t worry, I’m sure the automation will catch me if I make a mistake.”

We have already seen this among our clinicians, one of whom complained: “Why didn’t you warn me about the interaction between birth control pills and muscle relaxants? I expected the system to warn me of all relevant interactions. My patient had an adverse reaction because you did not warn me.”

Now we have this problem: for years we have been turning off and reducing the number of interaction alerts we show to prescribers, precisely because of alert fatigue. And now we hear complaints of “I want what I want when I want it. And you don’t have it right.” Seems like an impossible task. It IS an impossible task.
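The tension is easy to see in a toy model. Below is a hedged sketch, with an invented severity scale and invented interactions, of how a severity threshold trades alert volume against coverage: tune it for alert fatigue and the minor birth-control/muscle-relaxant interaction is exactly the one that gets suppressed.

```python
# Hypothetical severity tiers for interaction alerts (not a real
# drug-interaction knowledge base).
SEVERITY = {"contraindicated": 3, "major": 2, "moderate": 1, "minor": 0}

def fire_alerts(interactions, min_severity="major"):
    """Show only interactions at or above the chosen severity tier."""
    threshold = SEVERITY[min_severity]
    return [i for i in interactions if SEVERITY[i["severity"]] >= threshold]

found = [
    {"pair": ("warfarin", "fluconazole"), "severity": "major"},
    {"pair": ("oral contraceptive", "cyclobenzaprine"), "severity": "minor"},
]

# Tuned to fight alert fatigue: only the major interaction fires...
print(len(fire_alerts(found, "major")))   # 1
# ...tuned to catch everything: both fire, and fatigue returns.
print(len(fire_alerts(found, "minor")))   # 2
```

Wherever the threshold is set, someone is unhappy: too low and prescribers drown, too high and somebody's patient has the one interaction we silenced.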

Thank you to all my fellow informaticists out there trying to make it right.

GPT and automation: helping or making worse?

Inserting a Large Language Model like GPT, which understands NOTHING but seems really fluent and sounds like an expert, could be helpful, but could also lull us into worse “automation complacency.” Even though we are supposed to (for now) read everything the GPT engine drafts, and we take full ownership of the output, how long will that last? Even today, I admit, as do most docs, that I use Dragon speech recognition and don’t read the output as carefully as I might.

Debating the steps in clinician thinking

So, here is where Dr. Montague and I had a discussion. We both believe it is true that a thoughtful, effective physician/APP will, after interviewing the patient and examining them, sit with the (formerly paper) chart, inhale all the relevant data, assemble it in their head. In the old days, we would suffer paper cuts and inky fingertips in this process of flipping pages. Now we just get carpal tunnel and dry eyes from the clicking, scrolling, scanning and typing.

Then when we’ve hunted and gathered the data, we slowly, carefully write an H/P or SOAP note (ok, an APSO-formatted SOAP note). It will include the Subjective (including a timeline of events), Objective (including relevant exam, lab findings), Assessment (assembly of symptoms into syndromes or diseases) and Plan (next steps to take).

During this laborious note-writing, we often come up with new ideas, new linkages, new insights. It is THIS PIECE we worry most about. If GPT can automate many of these pieces, WHERE WILL THE THINKING GO!?! I do not trust that GPT is truly thinking. I worry that the physician will instead STOP THINKING.

Then THERE IS NO THINKING.

Is this a race-to-the-bottom, or a competition to see who can speed us up so much that we are no longer healers, just fast documenters, since we are so burned out?

Who will we be?

Radio vs TV vs Internet

My optimistic thought is this. Instead of GPT coming to take our jobs, I’m hopeful GPT becomes a useful assistant, sifting through the chaff, sorting and highlighting the useful information in a data-rich, information-poor chart.

Just like the radio industry feared that TV would put them out of business (they didn’t), and TV feared that the Internet would put them out of business (they didn’t), the same, I think, goes for physicians, established healthcare teams, and GPT-automation tools.

Lines will be drawn (with luck, WE will draw them), and our jobs will change substantially. Just like emergent (unpredictable) properties like “GPT hallucinations” have arisen, we must re-invent our work as unexpected curves arise while deploying our new assistants.

Another Bing-Dall-E image of physicians at a computer. In the future, a doctor will apparently have more legs than before.

A possible step-ladder

I think physician thinking really occurs at the assembly of the Assessment and Plan. And that the early days of GPT assistance will begin in the Subjective and Objective sections of the note. GPT could for example:

SIMPLE
  • Subjective: Assemble a patient’s full chart on demand for a new physician/APP meeting a patient in clinic, or on admission to hospital, focusing on previous events it can find in the local EHR or across a health information exchange network, into an easily digestible timeline. Include a progression of symptoms, past history, past medications.
  • Objective: Filter a patient’s chart data to assemble a disease-specific timeline and summary: “show me all medications, test results, symptoms related to chest infection in the past year”
  • Then leave the assessment and planning to physician/APP assembly and un-assisted writing. This would leave clinician thinking largely untouched.
MODERATE
  • Subjective and Objective: GPT could take the entire chart, propose major diseases and syndromes it detects by pattern matching, and assemble a brief summary page with supporting evidence and timeline, with citations.
  • Assessment and Plan: Suggest a prioritized list of Problems, severity, current state of treatment, suggested next treatments, based on a patient’s previous treatments and experience, as well as national best practices and guidelines. Leave the details, treatment adjustments and counseling to physicians/APPs interacting with the patient. Like Google Bard, GPT may suggest ‘top 3 suggestions with citations from literature or citations from EHR aggregate data’ and have the physician choose.
DREAMY/SCARY
  • Subjective and Objective: GPT could take the Moderate tools, add detection and surveillance for emerging diseases not yet described (the next Covid? the next Ebola? new-drug-associated-myocarditis? tryptophan eosinophilia-myalgia syndrome, not seen since 1989?) for public health monitoring. Step into the scanner for full body photography, CT, MRI, PET, with a comprehensive assessment in 1 simple step.
  • Assessment and Plan: GPT diagnoses common and also rare diseases by memorizing 1000s of clinical pathways and best-practice algorithms. GPT initiates treatment plans, needing just a physician/APP cosignature.
  • A/P: Empowered by Eliza-like tools for empathy, GPT takes on counseling the patient, discovering which conversational techniques engender the most patient behavior change. Recent studies already indicate that GPT responses can be rated more empathetic than doctors’ responses to online medical queries.
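The SIMPLE tier above ("show me all medications, test results, symptoms related to chest infection in the past year") can be sketched without any AI at all. Here is a minimal, hypothetical illustration; the event records, keyword list, and function names are invented, and a real system would use coded concepts and an LLM rather than keyword matching.

```python
from datetime import date

# Invented keyword list standing in for a disease concept
CHEST_INFECTION_TERMS = {"pneumonia", "cough", "azithromycin", "chest x-ray"}

def disease_timeline(events, terms, since):
    """Return chart events after `since` whose description mentions a term,
    sorted into a chronological, disease-specific timeline."""
    hits = [e for e in events
            if e["date"] >= since
            and any(t in e["description"].lower() for t in terms)]
    return sorted(hits, key=lambda e: e["date"])

chart = [
    {"date": date(2023, 1, 10), "description": "Chest X-ray: right lower lobe pneumonia"},
    {"date": date(2023, 1, 11), "description": "Azithromycin 500 mg started"},
    {"date": date(2021, 6, 2), "description": "Cough, resolved"},   # too old
    {"date": date(2023, 3, 5), "description": "Ankle sprain"},      # unrelated
]

timeline = disease_timeline(chart, CHEST_INFECTION_TERMS, date(2022, 6, 1))
print([e["date"].isoformat() for e in timeline])
```

The point of the stepladder is which rungs we hand over: filtering and assembling like this leaves the clinician's Assessment-and-Plan thinking untouched; the higher rungs do not.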

CMIO’s take? First things first. While we can wring our hands about “training our replacements”, there is lots yet to do and discover about our newest assistants. Shall we go on, eyes open?

Virtual Reality: reliving the past for seniors? (nytimes)

Interesting that one of our innovation partners, Rendever, has developed a way for family members to record and annotate video to be viewed by seniors, so that they can see their hometown, where they grew up, where they worked, to reawaken pleasant memories of times past. An interesting, unanticipated way of using virtual reality.

The Xenobot Future Is Coming—Start Planning Now (wired.com)

“…the ability to recode cells, de-extinct species, and create new life forms will come with ethical, philosophical, and political challenges”

https://www.wired.com/story/synthetic-biology-plan/

With CRISPR, the molecular-scissors technology, we are gaining not only read but WRITE access to our genetic data. Writing code will no longer be limited to computers (and electronic health records) but will extend to living organisms. Are we ready? The technology is racing ahead of our ability to think about it and deploy it for the good of all.
