One of our book club books, for the ‘clinical decision support’ team for the electronic health record at our institution. We have now read it in our Large PIG book club meeting (the Physician Informatics Group: we try hard not to take ourselves too seriously). Some of us were depressed after reading. The initial optimism of the ‘glass cockpit’, the fancy new computerized design of the complex Airbus aircraft, is instead proving to be a ‘glass cage’, which isolates us and anesthetizes us from the real world. The author provides riveting examples of glass cages: the Inuit who lose their cultural skills of navigating brutally inhospitable landscapes because of GPS and snowmobiles, and the pilots who make errors because of automation, leading to automation bias and automation complacency: thinking the computer must be right, and the computer will know, so I don’t have to. Further, our attention wanders as we cede responsibility for moment-to-moment control of the task. How do we fight such a trend and temptation, as designers?
Yet the author speaks about ‘adaptive automation’, where a computer could detect the cognitive load or stress in a human partner and share the cognitive work appropriately. He speaks of Charles Lindbergh describing his plane as an extension of himself, as a ‘we.’ Can we aspire to improve the design of our current electronic systems toward such a partnership, one that avoids the anesthetic effect and instead becomes more than the sum of the partners? Chess is now played best by human-computer partners; could health care and other industries be the same? And what could that look like? The Glass Cage gives us an evidence-based view into that future (and hopeful) world.
UPDATE: We had a great discussion during our recent book club. As an indicator, several of my colleagues told me: “I don’t like this book.” Perfect! It made for a juicy, spirited conversation about the benefits and risks of automation, and about how the stories in the book did or did not apply to healthcare and what we were building. Maybe we can consider “adaptive automation,” so that the computer scales its assistance up and down as the clinician comes under stress: the human can focus on problem solving while the computer increasingly assists with routine tasks. And then we need to take care that “automation complacency” does not increase. We have already heard clinicians say things like: “Well, the EHR did not pop up an alert for a drug interaction, so that means it must be safe to prescribe this new med for this patient.” Whoa, are we giving away the primacy of our own training and experience to an algorithm already?
CMIO’s take: keep reading, keep learning. It is only through extensive reading that we can learn from others in healthcare, and from others in industries divergent from our own. There are more smart people who DON’T work for you than who do.