https://pubs.rsna.org/doi/10.1148/radiol.250477
This is a great article about human/AI partnerships in radiology. One of the authors is Eric Topol.
We know several things so far about AI/human collaboration in medicine:
- Humans can develop automation bias: radiologist performance gets worse when given suggestions by a poor-performing AI, because they are influenced by the AI's read
- Humans can improve their performance when paired with a high-performing AI
- AI outperforms humans when it has very high confidence that a study is "normal" or "abnormal"
- We can reduce the human workload by deploying AI where it performs best
So, the authors suggest an AI/human role-separation framework:
- AI-first model (have the AI comb the chart for relevant data before the radiologist reads the study)
- Human-first model (the human reads the study, then the AI writes the Impression from the human's read, or the AI turns the human's report into a patient-friendly version)
- Case Allocation model 1: Rule out Normal (no human read if the AI is highly confident the study is normal)
- Case Allocation model 2: Risk-Based Allocation (low- and intermediate-risk cases get only a single AI read; humans also read the higher-risk cases)
- Case Allocation model 3: Dynamic Complexity-Based Allocation (if the AI is highly confident a study is normal, or highly confident it is abnormal, use that confidence to route the work; see the sketch after this list). In the study, this reduced human work by 66% and reduced false positives by 25% while keeping case detection the same.
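To make the dynamic allocation idea concrete, here's a minimal sketch of confidence-based triage. Everything in it is an assumption for illustration: the `p_abnormal` score, the `triage` function, and the cutoff values are hypothetical, not the article's actual implementation or thresholds.

```python
# A toy sketch of dynamic complexity-based triage. Assumes a hypothetical
# AI model that outputs p_abnormal, the estimated probability that a study
# is abnormal. Cutoffs are illustrative, not values from the paper.

from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AI_ONLY_NORMAL = "auto-report normal, no human read"
    AI_ONLY_ABNORMAL = "auto-flag abnormal, expedite"
    HUMAN_READ = "route to radiologist worklist"


@dataclass
class Study:
    study_id: str
    p_abnormal: float  # AI-estimated probability the study is abnormal


# Hypothetical confidence cutoffs; in practice these would be tuned on a
# validation set to hold case detection constant while cutting workload.
NORMAL_CUTOFF = 0.02    # at or below this, AI is "highly confident normal"
ABNORMAL_CUTOFF = 0.98  # at or above this, AI is "highly confident abnormal"


def triage(study: Study) -> Route:
    """Send only the uncertain middle band of cases to a human reader."""
    if study.p_abnormal <= NORMAL_CUTOFF:
        return Route.AI_ONLY_NORMAL
    if study.p_abnormal >= ABNORMAL_CUTOFF:
        return Route.AI_ONLY_ABNORMAL
    return Route.HUMAN_READ


if __name__ == "__main__":
    for s in [Study("A", 0.01), Study("B", 0.55), Study("C", 0.99)]:
        print(s.study_id, triage(s).value)
```

The workload reduction comes from the middle band being the only queue humans see; how much of the volume lands there depends entirely on where the two cutoffs sit.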
Really promising developments in thinking about the Human/AI partnership. My ongoing worries about automation complacency and bias are still there, but I like that smart people are thinking about possible solutions.