How might a Greek Chorus rescue humans from Automation Complacency?
PROBLEM: The muscle relaxant prescription
I have already seen automation complacency among our physicians, one of whom complained: “I prescribed a muscle relaxant and your system did not warn me about the drug interaction between relaxants and birth control pills. My patient had an adverse reaction because of you.”
Physicians using an EHR will occasionally receive warnings, such as a Penicillin Allergy or a Duplicate Blood Thinner prescription, to avoid mishaps. Generally, these are well received. Most of the time, however, an EHR doesn’t pop up an alert at all, because we are trying to keep our alerts to a minimum and reduce “alert fatigue.” Too many alerts and doctors start ignoring ALL of them.
Automation Complacency sets in when physicians come to expect the EHR to catch every unexpected drug interaction from their prescriptions, while suppressing every interaction the physician already knows about.
Ideally, the EHR should “tell me what I don’t know, but don’t bother me with stuff I already know.” However, every physician has a different level of experience and knowledge. And yes, Automation Complacency will lead physician users to expect the “right” level of assistance all the time [see the NYTimes coverage of the Arizona safety driver when the Uber self-driving car killed a pedestrian]. Thank you to all my fellow informaticists out there trying to build the perfect Goldilocks algorithm that is “just right” for the majority of our physician colleagues. Seems like an impossible task. It IS an impossible task.
With one AI assistant, we run the risk of confirmation bias. If the AI is fast and correct most of the time, we may come to rely on it. Then, we lose our vigilance. If it is correct 99% of the time, why bother reading the generated notes or the generated advice as carefully?
On the other hand, if we have a team with different opinions, we have to think hard to make a decision. Why not take advantage of the cacophony? Different opinions encourage us to look for the best path.
PROPOSAL: The AI Greek Chorus.
In ancient Greek plays, the Greek chorus is a group of performers who would comment on the action in the play or highlight underlying themes for the audience.
I propose building an AI version of this Greek Chorus. The Chorus would be composed of AI assistants with different, and sometimes adversarial, viewpoints. I have already nicknamed the six characters in mine:
- Main Hero: trained on my data in the EHR, it behaves and sounds like me at my best.
- Patient Advocate: taking the patient’s viewpoint, it insists on maximizing satisfaction of the patient.
- Quality Maven: it gathers all the relevant quality of care metrics and ensures I meet all of them.
- F1 Driver: superefficient, its focus is on the fastest way for me to be done.
- Wise Sloth: deliberately slow, it reminds me that almost every patient gets better with time. Just wait. They’ll be fine.
- Evil Genius: a brilliant adversary, she points out flaws in my thinking and makes me consider alternatives.
My Greek Chorus would always be available, whispering in my ear, or hovering at the edge of my vision, suggesting ideas and actions. I would be the tie-breaker. The chorus would keep me on my toes and make me a better doctor.
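For my fellow informaticists, the chorus reduces to a simple fan-out: send the same case to every persona and hand all the replies back to the human. Here is a minimal Python sketch of that shape. It is an illustration, not a product: `ChorusMember`, `consult_chorus`, and the `complete(persona, case)` hook are hypothetical names I am inventing here, and the persona strings are my rough paraphrases of the six characters above, to be replaced with real prompts wrapping whatever language model you use.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class ChorusMember:
    """One voice in the chorus: a name plus the persona prompt that shapes its replies."""
    name: str
    persona: str


# The six voices from the post; persona text is an illustrative paraphrase.
CHORUS: List[ChorusMember] = [
    ChorusMember("Main Hero", "Trained on my EHR data; answer as me at my best."),
    ChorusMember("Patient Advocate", "Take the patient's viewpoint; maximize their satisfaction."),
    ChorusMember("Quality Maven", "Check every relevant quality-of-care metric."),
    ChorusMember("F1 Driver", "Find the fastest acceptable way for me to be done."),
    ChorusMember("Wise Sloth", "Favor watchful waiting; most patients improve with time."),
    ChorusMember("Evil Genius", "Adversarially probe flaws in my thinking; propose alternatives."),
]


def consult_chorus(case: str, complete: Callable[[str, str], str]) -> Dict[str, str]:
    """Fan the same case out to every persona and return each one's comment.

    Nothing here picks a winner: the human physician reads all six replies
    and remains the tie-breaker.
    """
    return {member.name: complete(member.persona, case) for member in CHORUS}
```

Plug in a `complete` function that calls your model of choice with the persona as the system prompt, and six divergent opinions on the same order come back at once, which is exactly the productive cacophony described above.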
For the muscle relaxant example:
- Main Hero: I set up the muscle relaxant prescription. Please review. Is this what you intend?
- Patient Advocate: double checking drug interactions. I found one in particular that worries me.
- Quality Maven: polypharmacy risk: more than 5 active medications puts the patient at risk of side effects.
- F1 Driver: just sign it. It’s fine. Move on.
- Wise Sloth: most muscle spasms get better on their own. Physical therapy could help. Don’t prescribe.
- Evil Genius: muscle relaxants have NO objective evidence of benefit. Just say No.
For a second example, seeing a clinic patient with atypical chest pain:
- Main Hero: Heart disease highly unlikely. Reassure the patient. Have them call in a week if not better.
- Patient Advocate: how about a prescription to help the patient feel better?
- Quality Maven: no harm in prescribing a daily aspirin until we figure this out. Might still be heart disease.
- F1 Driver: treat ‘em and street ‘em. It’s not heart disease.
- Wise Sloth: In this case, I agree with F1. No action needed here.
- Evil Genius: Dummy. This could be acid reflux, pleurisy or pericarditis. At least try some indomethacin.
Yuval Noah Harari, in his book Homo Deus, describes two competing philosophies of the future: Techno-Humanism, where algorithms evolve to serve human will and purpose, and Data-ism, where algorithms supersede humans by being faster and better. Data-ism holds that data and the algorithm are king. Perhaps humans built the first algorithms, but soon AI will be faster and better at it. Soon, humans won’t be able to keep up.
As a human, I find the first option much more palatable and the second option terrifying. To ensure we choose a human-centric path, we have work to do. As we adjust to life with effort-reducing assistants, where can we direct our efforts to make us better doctors?
Could we be better team leaders? By building an AI Greek Chorus, we are confronted with varying opinions and must choose a best path. The best leaders I know are open to divergent opinions from their team. The active debate helps them make better decisions. Let’s do the same with an AI Greek Chorus. Let AI assistants and humans collaborate to promote human values, skill and joy.
The lesson learned
Build AI tools that adapt to their human users.
Seek efficiency, but don’t squash productive cognitive friction or diffuse mode thinking.
We are no longer lone wolf physicians whose creed is “I alone can fix this.”
Instead, a human-led AI Greek Chorus should be our next AI advance.
Try this yourself:
I hope you are as unsettled as I am. The coming years will be exciting and terrifying. At least as terrifying as the somewhat incorrect AI Greek Chorus images that Bing CoPilot generates with my amateur prompts.



