Dr. Jeffrey Sippel, pulmonologist at the University of Colorado School of Medicine, Anschutz Medical Campus
https://humanfactors.jmir.org/2023/1/e42739
At long last! Dr. Jeffrey Sippel, Dr. Tim Podhajsky and I have been thinking about this for a few years. Speech recognition engines have been improving steadily for years now, and I (CT) have been using it for almost 2 decades.
Leaving aside the improvements with DAX Express (look it up; it will be interesting very soon: ambient listening to a 2-person medical interview with AI-generated progress notes), what could Speech Recognition in the exam room do for the physician-patient relationship?
Funding for studies
Here is both encouragement for my colleagues and admiration for those who have figured out how to get external funding for research or projects.
We do many studies like this one WITH NO FUNDING. Just doing our usual job, having a good idea, and then applying a generous amount of elbow grease: weekend and evening hours spent thinking, designing, writing, and submitting an “exempt IRB application” for a Quality Improvement Project. Then, once granted, figuring out how to beg, borrow, or steal the tools, experts, and resources needed to do the work, set up the right patient or physician/APP population, do a paper survey, collate, meet regularly to think and write up the results, then submit for publication.
The secret to success is: choosing simple questions to ask, projects that don’t require new expensive equipment, and then REGULAR WEEKLY OR MONTHLY MEETINGS to hold each other accountable to make slow steady progress on your collaboration. This is the roadmap for our success in this project (and many others).
I want to thank Holly Hockemeier for her generosity in donating several dozen Dragon Speech Microphones to our project to get it off the ground (Hi, Holly! Thanks to you and the Nuance team!).
What did we find in the study?
Short answer: in the report above, with 65 patients across 3 medical offices, patients strongly preferred having a physician use speech recognition (a microphone in the exam room) to generate “assessment and plan” recommendations that they could take home with them.
This is the first report of using Speech Recognition in this way in the exam room. Do you use Speech Recognition in your practice? Then, WITHOUT the FANCY DAX (Dragon Ambient eXperience) “tools of the future”, you can do what we did TODAY.
Trouble with the History of Present Illness
Over the years, I tried quite a few techniques with speech recognition. First, I tried using speech recognition throughout an entire patient visit. This is how the intro conversation went:
Me: “Hi Sarah, good to see you again. Since you’re a technology nerd like me, while we are talking today I’m going to try using my speech microphone during the visit. If this works, I can print you a copy of my note that I write about you today. Would that be okay?”
Sarah: “Sure! Sounds interesting”
Sarah would tell me her medical symptoms over the past few months since our last visit. This time, I would then summarize her symptoms while looking at the computer screen. I ran into trouble right away:
- The speech engine was slow (2011), taking 2-3 seconds to return text to the screen. So while I was dictating, it typed about half a sentence behind. I found I could NOT watch the words appear on the screen and compose the rest of the sentence at the same time.
- Summarizing the patient’s history right after she spoke it increased the time needed to record a history. Normally I would make cryptic notes on paper (or type into the computer) while listening. Now, I had to tell the patient, “Stop talking. It is my turn to tell the computer something.”
- Worse, if I was typing on the computer (yes I do touch-type), I was looking at the screen instead of the patient. Significant disconnect for eye-contact during the medical interview.
- Even worse, for less tech-savvy patients, I would say something like “Mrs. Jones states her sister has diabetes” and she would interject “No, not my sister, my cousin.” I would then have to sternly remind the patient: “When I am talking to the computer YOU HAVE TO BE QUIET NOW.”
And then Dragon would type on the screen “Mrs. Jones states her sister has diabetes no not my sister my cousin.”
Conclusion: not great for HPI.
Physical Exam and Speech
What would happen as I tried to do the physical exam with speech recognition in the exam room? I would move the patient to the exam table and do my exam. Not having a fancy Bluetooth microphone, I could not speak while examining. Then I’d have to return to the computer to document my findings. Not terrible, but not great, and it took more time. Also, speech tools were generally slower than using a macro to select mostly-normal findings in the EHR.
Conclusion: not great for physical exam.
What about the Assessment and Plan?
Well, then, is Speech Recognition good for ANYTHING in the exam room? It turns out, YES. By the time we get to A/P, I’m usually doing a monologue for the majority of this time, for example “Here’s what I heard, here’s what I’m thinking, here’s what I propose we do, what do you think?”
Assessment and Plan in the exam room IS A SLAM DUNK
With some revision over the years, here’s what I say now (with ALL CAPS indicating the Speech Recognition COMMANDS):
Me: “Mrs. Jones, I’m going to talk to the computer now, while I speak to you about my thoughts. It is going to sound a little funny, but follow along with me, and I’m going to ask if it makes sense to you. If you agree, I’ll print a copy for you to take home.”
Mrs. Jones: “Okay.”
Me:
“Assessment and Recommendations – COLON – NEW LINE
NUMBER ONE PERIOD High blood pressure PERIOD Your blood pressure looks great today PERIOD It is 122/70 today and the losartan is not causing any side effects PERIOD Please continue taking one every day and check your blood pressure reading once a week and write it down for us to review in 3 months PERIOD”
“MICROPHONE OFF” (to patient) Does that make sense? Any questions?
Mrs. Jones: “And losartan doesn’t cause cough, like the lisinopril before, so that’s good, right?”
Me:
“MICROPHONE ON I’m glad that the losartan is not causing cough PERIOD This is why we switched to this medicine, which seems to be working well PERIOD”
Wow factor
This way:
- I give the advice ONCE.
- The computer hears and transcribes immediately.
- I maintain eye contact with the patient while speaking.
- The patient can correct me as we go.
- The patient hears reflective listening and hears the plan out loud.
- I can print the summary and hand it to the patient immediately.
- The patient is astounded to watch the on-screen typing appear live.
Often, patients will bring their last summary and indicate what they have accomplished since last visit. Win-win.
Advanced skill: Mini SOAP notes
In fact, now I have adjusted to include a brief HPI and exam right into the A/P, making tiny SOAP notes for each separate problem like this:
“MICROPHONE ON Assessment and Recommendations COLON NEW LINE NUMBER ONE High blood pressure PERIOD Patient states she started on losartan 1 month ago and has had no side effects PERIOD Her potassium and creatinine tests were normal yesterday and her blood pressure today is 122/70 PERIOD NEW LINE ALL CAPS Plan COLON NEW LINE Please continue losartan PERIOD I am glad it has not caused cough and that the blood pressure looks great PERIOD Keep checking your BP at home once a week and let’s see you again in 3 months PERIOD Let me know via the patient portal if you have any questions or concerns PERIOD MICROPHONE OFF”
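To make the ALL CAPS command convention concrete, here is a toy Python sketch (my illustration only, NOT Dragon's actual engine) of how spoken formatting commands like PERIOD, COLON, and NEW LINE might be rendered into the printed text the patient takes home:

```python
# Toy sketch (illustration only, NOT Dragon's actual engine): render spoken
# formatting commands like PERIOD, COLON, and NEW LINE into printed text.

COMMANDS = {
    "PERIOD": ".",
    "COMMA": ",",
    "COLON": ":",
}

def render_dictation(spoken: str) -> str:
    """Turn a dictated command stream into formatted text."""
    # Collapse the two-word command into a single token before splitting.
    tokens = spoken.replace("NEW LINE", "NEWLINE").split()
    out = []
    for tok in tokens:
        if tok == "NEWLINE":
            out.append("\n")
        elif tok in COMMANDS:
            # Punctuation attaches to the preceding word, no space before it.
            if out:
                out[-1] += COMMANDS[tok]
            else:
                out.append(COMMANDS[tok])
        else:
            out.append(tok)
    # Join words with spaces, but strip the spaces around line breaks.
    return " ".join(out).replace("\n ", "\n").replace(" \n", "\n")

print(render_dictation(
    "Assessment and Recommendations COLON NEW LINE "
    "High blood pressure PERIOD Your blood pressure looks great today PERIOD"
))
```

A real engine handles far more (NUMBER ONE, ALL CAPS, MICROPHONE ON/OFF, numerals, capitalization), but this is the basic idea: the commands are part of the audio stream and are interpreted, not transcribed.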
Grace notes
- Having a mini SOAP note for each diagnosis works great, like a tiny capsule summary for each separate condition.
- If you are using speech recognition, DO NOT LOOK AT THE SCREEN. Cognitive interference will ruin your sentence composition.
- If you don’t look at the screen, LOOK AT THE PATIENT. Better eye contact!
- If you are looking at the patient, MIGHT AS WELL DICTATE IN THE SECOND PERSON. Not “The patient will …” but “You and I have agreed to …” In the beginning, colleagues objected: “It seems unprofessional to talk about ‘you’ and not ‘the patient.’” My reply: “Who is the most important customer for your progress note?”
- Yes, proofread your speech, but EVEN IF YOU DON’T, the patient heard you in person and can still understand what was said. In 10 years of speech recognition in the exam room, I have had only ONE PATIENT bring back a printout and ask for a correction.
CMIO’s take? Come join us! Try Speech Recognition in the exam room. There is NO DOWNSIDE: only time savings for physicians/APPs, better reflective listening for patients, and more timely documentation.
SIDE NOTE ABOUT GOOGLE BARD —
“summarize this article in less than 280 characters”
GOOGLE BARD’S RESPONSE:
Patients have a very positive perception of speech recognition use in the exam room. A survey of 65 patients found that 86% agreed or strongly agreed that their provider addressed their concerns better when using speech recognition, and 73% agreed or strongly agreed that they understood their provider's advice better. Patients were also more likely to recommend a provider who used speech recognition. This study suggests that speech recognition can be a valuable tool for improving patient care. It can help clinicians to take better notes, which can lead to more accurate diagnoses and treatment plans. Additionally, it can help patients to feel more involved in their care, as they can see the notes being taken and ask questions about them.