With CRISPR, the "molecular scissors" technology, we are gaining not only read but WRITE access to our genetic data. Writing code will no longer be limited to computers (and electronic health records) but will extend into living organisms. Are we ready? The technology is racing ahead of our ability to think about and deploy it for the good of all.
For 4000 patients, we now have data and reminder tools to notify clinicians of important drug-gene interactions at the time of prescribing.
by GUEST BLOGGERS: Christina Aquilante, PharmD, and David Kao, MD
The Go Live
The morning of Wednesday, December 1, 2021, members from the Colorado Center for Personalized Medicine (CCPM), UCHealth IT, and BC Platforms teams surrounded their home computers, fixated on a Microsoft Teams channel. It had all the feels of a space shuttle launch. The teams had been working for five months to upgrade the CCPM Biobank pharmacogenetic (PGx) return of results pipeline. Today was the big day – CYP2C19 and SLCO1B1 PGx results were about to be returned to the UCHealth Epic electronic health record (EHR) for Biobank participants.
8:22 am. “Good morning! Happy go-live! Kristy Crooks, Biobank Laboratory Director, will be signing off the first plate at 8:30 am.” typed UCHealth Project Leader, Emily Hearst.
8:30 am. “Please post in the Teams chat when you sign off on the first plate. We know there will be a delay as the plate is being processed,” typed Emily Hearst.
8:32 am. “Plate signed off. Not seeing a result in Epic yet,” typed Kristy Crooks.
8:36 am. “PGX molecular was resulted!” typed Kristy Crooks. A flurry of emojis followed.
8:37 am. “Yesssss!!! Strong work all!” typed CCPM Medical Director, Dave Kao.
The teams worked for the next few hours troubleshooting minor technical glitches and testing more plates.
12:21 pm. “We have success!” typed UCHealth Systems Architect, Katie Hess.
The Biobank That Returns Clinical Results
The success of December 1st’s go-live was a culmination of years of hard work from many different teams. In 2015, CCPM partnered with UCHealth to establish the Biobank Research Study. As part of the study, UCHealth patients are asked to provide a blood or saliva sample for genetic research. There is also the potential to have clinically actionable results (e.g., PGx) returned to them and their EHR. Prior to 2021, PGx results had been returned for some Biobank participants, but the return process was put on hold to upgrade some of the IT infrastructure. After an incredible team effort, the revised IT pipeline launched on December 1, 2021, and
almost 4000 Biobank participants have now had CYP2C19 and SLCO1B1 results returned to their UCHealth EHR and patient portal.
Christina Aquilante, PharmD
CYP2C19 is an enzyme that metabolizes medications such as citalopram, escitalopram, clopidogrel, proton pump inhibitors, and voriconazole. Due to genetics, approximately 60% of patients are not CYP2C19 normal metabolizers, which can influence medication efficacy and safety. SLCO1B1 is a protein that transports statins into the liver. Approximately 28% of patients have decreased or poor SLCO1B1 transporter function. This can lead to an increased risk of statin-associated musculoskeletal symptoms.
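For the curious, CYP2C19 results are typically reported as star-allele diplotypes that translate into a metabolizer phenotype. Here is a minimal Python sketch of that translation, using a small illustrative subset of the published CPIC phenotype assignments; it is not the Biobank pipeline’s actual logic, and diplotypes outside this subset simply fall through to “Indeterminate.”

```python
# Illustrative subset of CPIC CYP2C19 diplotype-to-phenotype assignments.
# Real pipelines cover many more star alleles and edge cases.
CYP2C19_PHENOTYPES = {
    frozenset(["*1"]): "Normal metabolizer",            # *1/*1
    frozenset(["*1", "*2"]): "Intermediate metabolizer",
    frozenset(["*2"]): "Poor metabolizer",              # *2/*2
    frozenset(["*1", "*17"]): "Rapid metabolizer",
    frozenset(["*17"]): "Ultrarapid metabolizer",       # *17/*17
}

def cyp2c19_phenotype(allele1: str, allele2: str) -> str:
    """Translate a CYP2C19 diplotype into a metabolizer phenotype."""
    # frozenset makes *1/*2 and *2/*1 equivalent lookups.
    return CYP2C19_PHENOTYPES.get(frozenset([allele1, allele2]),
                                  "Indeterminate")

print(cyp2c19_phenotype("*1", "*2"))    # Intermediate metabolizer
print(cyp2c19_phenotype("*17", "*17"))  # Ultrarapid metabolizer
```

The phenotype string, not the raw genotype, is what downstream decision support keys on.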
Given that > 30 million Americans take statins annually, this seemingly small risk [genetic variant] can ultimately affect a lot of people.
Christina Aquilante, PharmD
The “Last Mile” Problem
The questions clinicians ask most often are: How will I know if my patient is a Biobank participant? How will I know if they have CYP2C19 or SLCO1B1 results? What do I do with this information clinically? How often are these alerts going to interrupt what I’m doing?
The good news is that the CCPM and UCHealth teams have built clinical decision support tools to notify clinicians of important drug-gene interactions for Biobank participants at the time of prescribing. In other words – clinicians don’t need to look for it – the tools will tell them when it is important. Currently, PGx CDS tools are live across the UCHealth system for 17 medications affected by either CYP2C19 or SLCO1B1. These tools contain guidance for how to modify drug therapy based on the patient’s PGx results.
In the cable TV industry, this used to be called the “Last Mile” problem: a cable company could build a terrific network of cable channels, underground cables, and signal transmitters, and yet that “last mile” to the customer’s home determines whether the customer gets any benefit.
Importantly, the teams took great care when designing the CDS tools, and most of the tools are highly visible and yet non-interruptive in nature, i.e., they will not stop a clinician’s workflow. As of February 14, 2022,
301 drug-gene interaction alerts have fired in clinical practice for 268 Biobank participants.
David Kao MD
The most common alerts are for proton-pump inhibitors (PPIs), followed by es/citalopram, and then statins. The work to date is just the tip of the iceberg for the CCPM Biobank PGx return of results initiative at UCHealth. The team is in the process of preparing for another gene launch in early summer – this one for DPYD, which affects the chemotherapeutic agents 5-fluorouracil and capecitabine. Simultaneously, the teams are planning for the deployment of a Genomics Module in Epic and testing out new genotyping platforms with more extensive PGx variant coverage. When these pieces are in place, the sky’s the limit for PGx at UCHealth.
Christina Aquilante, PharmD, Professor and Director of Pharmacogenomics, Colorado Center for Personalized Medicine
David Kao, MD, Associate Professor and Medical Director, Colorado Center for Personalized Medicine
Find out: What is a centaur and what does it have to do with healthcare? What are the criteria for a good machine learning project? What is the role of a virtual health center with predictive models? And most importantly: What ukulele song goes with machine learning?
Here are the slides for my talk given at SMILE (Symposium for Machine learning, ImpLementation and Evaluation). The slides are mostly self-explanatory. You can also watch my talk on YouTube. Here is a PDF of the entire deck.
Novel idea: ensure docs KNOW how to operate AI (!) (image: ETHAN MILLER/GETTY IMAGES, via Statnews)
Here is a different take on AI in healthcare: train clinicians to understand the limitations of AI, and only allow those who do to use it. Make savvy clinicians better. Don’t give it to all clinicians.
This is a throwback to our experience with Dragon Speech recognition over the past decade: DON’T give Dragon speech to a clinician struggling with computer use; instead, give Dragon to a clinician who is computer-savvy and understands the limitations of Dragon.
But (in the early years), give the non-computer-savvy clinician an “opt out” to dictate their notes by dictaphone or telephone, and gradually bring them along.
Having given several non-computer-savvy docs access to Dragon in those early years, our hair stood on end when we ended up reading their notes later: they were clearly NOT proofreading their work and were assuming the Dragon engine transcribed perfectly.
Back to the future.
CMIO’s take? Be careful out there, everyone, both on the road with Tesla, and in healthcare with AI.
This data dilettante (see previous posts: dilettante #1, dilettante #2) has enjoyed armchair theorizing with all of you, my best (online) friends. Today we explore how our super-smart team scrambled our way to improving sepsis care with a predictive algorithm we built.
The old saying goes: the success of any major project in a large organization follows the 80:20 rule. 20% of the work is getting the technology right, and 80% is the socio-political skill of the people doing the work.
We all underappreciate this fact.
It turns out that we spent months building a sepsis alert predictive tool, based on various deterioration metrics, and a deep analysis of years of our EHR data across multiple hospitals. We designed it to alert providers and nurses up to 12 hours BEFORE clinicians would spot deterioration.
We patted ourselves on the back, deployed the predictive score in a flowsheet row, and in the patient lists and monitoring boards, with color coding and filters, and stepped back to revel in our glory.
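To make the idea concrete, here is a toy Python sketch of a threshold-based deterioration score computed from a vitals snapshot. Our actual model was far more sophisticated (built from years of multi-hospital EHR data); every metric and cutoff below is invented for illustration and is not clinical guidance.

```python
# Toy deterioration score: invented point values, NOT a validated model.
def deterioration_score(vitals: dict) -> int:
    """Crude early-warning points from a single snapshot of vital signs."""
    score = 0
    if vitals["heart_rate"] > 110:
        score += 2
    if vitals["resp_rate"] > 22:
        score += 2
    if vitals["sbp"] < 90:          # systolic blood pressure, mmHg
        score += 3
    if vitals["temp_c"] > 38.3 or vitals["temp_c"] < 36.0:
        score += 1
    return score

ALERT_THRESHOLD = 4  # illustrative cutoff for firing a predictive alert

vitals = {"heart_rate": 118, "resp_rate": 24, "sbp": 96, "temp_c": 38.6}
if deterioration_score(vitals) >= ALERT_THRESHOLD:
    print("Sepsis risk alert: escalate for review")  # fires for this patient
```

The point of the real model was that patterns like this, tracked continuously, can flag deterioration hours before it is clinically obvious.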
Turns out that our doctors and nurses were ALREADY FULLY BUSY (even before the pandemic) taking care of critically ill patients. Adding YET ANOTHER alert, even with fancy colors, did NOT result in a major behavior shift toward ordering IV fluids, blood cultures, or life-saving antibiotics any quicker.
See the fancy patient-wearable tech on the left (Visi from Sotera, in this case), and one of our hardworking nurses, with ALL of our current technology hanging off her jacket and stethoscope. She should be the visual encyclopedia entry for “alert fatigue.” 😦
Back to the drawing board
As a result of our failure, we huddled to think about transforming the way we provided care. It was time to disrupt ourselves. We decided to implement a Virtual Health Center (VHC), mimicking what we had seen in a couple of places around the country: we deployed 2 critical care physicians and about a half-dozen critical care nurses on rotation, off-site at an innovative, award-winning Virtual Health Center.
This second time around, we created a cockpit of EHR data and predictive alerts for the VHC clinicians, who were dedicated to watching for deterioration across ALL our hospitals and responding quickly. This does several things:
Takes the load off busy front line clinicians
Creates a calm environment for focused, rapid response
Dramatically improves the signal-to-noise ratio coming from predictive alerts
This way, the VHC nurses view all the alerts, investigate the chart, and, when suspicion for sepsis is high, contact the bedside nurse and start the sepsis bundle immediately.
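The routing change can be sketched as a simple queue: every predictive alert goes to the VHC first, and only alerts a VHC nurse confirms on chart review generate a bedside contact. The queue structure and review function below are illustrative assumptions, not the actual workflow software.

```python
from collections import deque

# All predictive alerts land in the VHC queue, not at the bedside.
vhc_queue: deque = deque()

def route_alert(alert: dict) -> None:
    """Send every predictive alert to the VHC instead of the bedside nurse."""
    vhc_queue.append(alert)

def vhc_triage(chart_review_confirms_sepsis) -> list:
    """VHC nurse works the queue; only confirmed cases reach the bedside."""
    bedside_contacts = []
    while vhc_queue:
        alert = vhc_queue.popleft()
        if chart_review_confirms_sepsis(alert):  # human-in-the-loop review
            bedside_contacts.append(alert)       # contact nurse, start bundle
    return bedside_contacts

route_alert({"patient": "A", "score": 7})
route_alert({"patient": "B", "score": 4})
# Stand-in for the nurse's chart review: confirm only high-scoring alerts.
confirmed = vhc_triage(lambda a: a["score"] >= 6)
print(confirmed)  # only patient A generates a bedside contact
```

This is where the signal-to-noise improvement comes from: the bedside nurse sees one confirmed call, not every raw alert.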
Soon, by tweaking the ways our teams worked together, we were able to reduce the burden on bedside nurses and physicians and simplify handoffs.
See chart above: Before the VHC, bedside nurses were responsible for detecting sepsis (infrequent, subtle signals during a busy shift with lots of loud alarms for other things), with many ‘grey box’ tasks, as well as ‘magenta box’ delays.
After implementing the VHC, the VHC nurses took over the majority of ‘green box’ tasks, reducing the bedside ‘grey box’ work and completely eliminating ‘magenta box’ delays.
As a result, we have dropped our “time to fluids” by over an hour, and “time to antibiotics” by 20 minutes, which we estimate has saved 77 more lives from sepsis each year.
CMIO’s take? Predictive analytics, data science, machine learning, call it what you like. This is a paradigm shift in thinking that requires disrupting “business as usual” and is hard, but rewarding work. I can’t wait to see what we all can achieve with these new tools.
The TL;DR? 15 seconds should be the length of your educational videos. Want to know why, and how? Read on.
I was a Late adopter of Facebook
I’ve been thinking about the evolution of social media. In the early days, I was a late adopter of Facebook, not getting why it was any better than email. Now, I get it: saying something once allows your whole network to see it, from close friends to casual acquaintances. Medical residents explained to me that photos and memories were easier to share more broadly. AND, an existing large network made participation more valuable (hey! look at all the people I already know on here!).
Just like in the old days, getting a telephone was INCREASINGLY useful if there were MORE people and stores you could call. The network effect.
That led me over the years to LinkedIn (mostly for work contacts and posting my CV and work products publicly) and Twitter (still figuring it out, but a good way to keep up with news if you curate your network carefully, and also a way to post blog content). Also, Twitter allows you to curate for yourself an international community with similar interests, like #medtwitter.
And, my brilliant younger sister taught me that Twitter could also be good for lecture commentary and discussion (she will give a talk on 2 screens: one with her slides and another with a live pre-filtered Twitter feed: how brave! and give out a custom hashtag, like #postitpearls_lecture, and ask the audience to submit questions this way: wow).
And, some of you know that I’ve dabbled in amateur song-parodies with EHR songs on my youtube channel.
Finally, I’ve figured out how to blog regularly and then use IFTTT to cross-post my content auto-magically to my other platforms (Facebook page, Twitter, LinkedIn) so that I can seem more connected and omni-present than I really am (Thanks for another great tip, Sis).
BUT! TikTok is another thing altogether. My colleague and her daughter suggested that I take my latest Hamilton parody song (that I had gamely posted to YouTube and here I am shamelessly showing it to you again)
#notthrowinawaymyshot and now post it on TikTok, a post-millennial social media platform restricted to 60-second videos. Leaving aside the recent kerfuffle about Chinese ownership and control, this is qualitatively a different animal: getting your thoughts across in 15 seconds (the preferred duration; the time restriction stems from the music industry’s maximum replay length for a copyrighted song). It has since been extended to a 60-second maximum if your video has an original soundtrack.
So, I dove in. Unlike the “dozens” of views on my YouTube channel (with which I was satisfied; my broadcast domain is, admittedly, a relatively small physician informatics audience), my TikToks quickly blossomed to nearly 1000 views in 2 days.
Wow! I thought. I am AMAZING on TikTok.
What I did not appreciate is that the 15-to-60-second format is much more attuned to the rapid “swipe” of post-millennials, and EVERYONE racks up lots of views. And, ultra-short videos are so easy to consume one after the other. AND, TikTok doesn’t need you to establish your network before your video gets out there; it shows your video to a random selection of viewers, and those who LIKE it or SUBSCRIBE to you trigger the algorithm to show it to more viewers. So, an easy way to game the system is to use trending (but highly inaccurate) hashtags, like #superbowl, etc. Sadly, this user does not seem to have understood, or be willing to follow, some of these informal rules.
Furthermore, the online chatter about TikTok views says it all: “500 views total is pretty sad; what you want is 500 views per hour.” For example, Nathan Evans, of Sea Shanty fame? He went viral at about 250,000 views, and now he’s at 12.9 million. Oh, well. Here’s my paltry Covid Sea Shanty, currently at 62 views (not 62,000) and SIX LIKES.
In contrast, our Informatics team at UCHealth just retired/deleted a 17-minute video I made 10 years ago as a full “walkthrough” of how to use the Electronic Health Record for our ambulatory clinic physicians. Whew, how out of touch was THIS guy? Here’s a one-minute snippet of the kind of video I posted back then, when we were on Allscripts Touchworks. So young, so naive.
Our more recent training videos are more like 1-2 minutes and focused on ONE technique or tool. Now, I’m thinking, maybe we need to shoot for 15-30 seconds. The cool thing about TikToks is that you can trim seconds and speed things up: viewers who “get it” can be done watching in 15 seconds, while the video can be paused (and automatically replays) so others can catch subtle details. Hmm, is this a paradigm shift? Should we embed TikTok-length education videos into our EHR?
Put Road Signs On the Roadway
As we say internally, shouldn’t we put the Road Signs and Driving Directions (our tips and tricks) on the Roadway (where our users are actually using the EHR) and not in the Garage (our online reference library and training webinars)? Aren’t our users more likely to click on tips WHEN they’re doing work, rather than when “oh, I have some time, let me see what I can go learn.” (which is never)
Austin Chang is my hero
There clearly is an entire evolution of thinking needed to succeed in this TikTok medium. And I don’t have the savvy (yet), the luck, or the persistence to grind out the many tries needed to break through. However, there are medical professionals who have. For example, Austin Chang.
Austin is … well, just go watch him. In 15 seconds, with hilarious music over-dubs, he uses captions and terrible dancing while in scrubs (ok not so terrible), to get his medical facts out there.
I both bemoan the general public’s deteriorating attention span (15 seconds now? Really?) and admire his ability to fit his tiny education bites (bytes?) into this format. It works. Some of his TikToks are over 2 million views. On MEDICAL TOPICS. Nice. Here’s the NYTimes writing about him.
This reminds me of reading The Shallows, a book about what the Internet is doing to our brains. Are we losing the ability to read a book? I don’t know. I, for one, did not finish reading the book. Ironic.
CMIO’s take: Beat ’em or Join ’em? What are YOU doing about TikTok in your field?
I love stories like this. Jimmy Choi has a TikTok page where he documents his athleticism. He also has Parkinson’s Disease, with uncontrollable shaking in his arms. At one point, he complained about how difficult it is for people with Parkinson’s to take their medications; the shaking often completely spills the pills from the bottle.
As a result, a community of TikTokkers began brainstorming and then modeling and then 3-D printing an innovative pill bottle design that ensures only ONE pill is dispensed at a time.
CMIO’s take? Having access to the brain power and creative energy of the world, via communication technologies like TikTok and other Social media tools, is, I think, a wonderful antidote to our recent experiences, and the best expression of humanism. How can we design to augment this, the better angels of our nature?