https://www.wired.com/story/ubers-fatal-self-driving-car-crash-saga-over-operator-avoids-prison/
This made news a few years ago: a self-driving Uber, under testing with a safety operator in the driver's seat, hit and killed a pedestrian (who was not at a crosswalk).
The operator pleaded guilty to reckless endangerment and will avoid prison time.
This reminds us of the constant and growing influence of AI and automation on our daily lives. We all become less vigilant when an assistant gets really good. Maybe 99% effective, maybe 99.9%, maybe 99.999%, as with self-driving vehicles. What happens in the remaining 0.001%?
Recently I was criticized by a medical colleague because “I wrote a prescription for a muscle relaxer, and it caused a drug interaction with the patient’s birth control medication. Epic did NOT stop me, and it should have.” The implication was that it was Epic’s fault, and thus the fault of those who configure Epic (CT Lin and his henchmen).
CMIO’s take? Classic automation complacency. We give the automation power over our daily lives and we stop watching carefully. Have you seen this in your work? Let me know.
Did you counsel your colleague that muscle relaxers don’t work, and that having them available as an option at all is the real failing, rather than the EHR’s failure to flag the interaction (if one could even have been known)? The best reminder is the one that is never needed in the first place.
Indeed. But that is another blog post. Perhaps by a guest blogger …