Can Learning Machines Unlearn? (wired.com)

https://www.wired.com/story/machines-can-learn-can-they-unlearn/

How much data?

I’ve been thinking about this a lot. In our recent work designing predictive algorithms using linear regression, neural networks, and similar approaches, we’ve discussed the use of EHR (electronic health record) data, and we’ve had some success using such algorithms to reduce deaths from sepsis (blog post from 10/6/2021).

One of many problems is “how much data?” It has been interesting to work with our data science colleagues on creating a model and then carefully slimming it down, so that our models can run on smaller data sets: more efficiently, more quickly, and with less computing power.
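To make that concrete, here is a minimal sketch of one slimming approach: train a full model, keep only the features that carry the most weight, retrain, and check that discrimination (AUC) holds up. This is an illustration in Python/scikit-learn with synthetic data standing in for real EHR features, not our production pipeline.

```python
# Minimal sketch: train a full model, prune to the most informative
# features, and confirm performance survives the slimming.
# Synthetic data stands in for real EHR features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=200,
                           n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

full = LogisticRegression(max_iter=1000).fit(X_train, y_train)
full_auc = roc_auc_score(y_test, full.predict_proba(X_test)[:, 1])

# Keep the 20 features with the largest absolute coefficients
# (a reasonable importance proxy when features are on similar scales).
top = np.argsort(np.abs(full.coef_[0]))[-20:]
slim = LogisticRegression(max_iter=1000).fit(X_train[:, top], y_train)
slim_auc = roc_auc_score(y_test, slim.predict_proba(X_test[:, top])[:, 1])

print(f"full model (200 features): AUC {full_auc:.3f}")
print(f"slim model (20 features):  AUC {slim_auc:.3f}")
```

The interesting part is the judgment call: how much performance are we willing to trade for a model that runs faster on less data?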

Forgetting?

A related problem is “when do we need to forget?” EHR data ages. The way clinicians record findings can change. Our understanding of diseases changes. The diseases themselves change. (Delta variant, anyone?)

Will our models perform worse if we use data that is too old? Will they perform better because we gave them more history? Do our models have an “expiration date?”
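One way to put numbers on that “expiration date” question is a look-back experiment: hold out the most recent period, train on windows of increasing history, and see where more history stops helping. Here is a sketch, with simulated drift standing in for real clinical data:

```python
# Sketch: does older training data help or hurt? Simulate gradual
# concept drift, train on progressively longer look-back windows,
# and score each model on the most recent (held-out) period.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_per_year, years = 2000, 6
X, y = [], []
for t in range(years):
    w = np.array([2.0 - 0.6 * t, 1.0 + 0.5 * t])  # the "truth" drifts yearly
    Xt = rng.normal(size=(n_per_year, 2))
    p = 1 / (1 + np.exp(-Xt @ w))
    X.append(Xt)
    y.append(rng.binomial(1, p))

X_test, y_test = X[-1], y[-1]          # most recent year, held out
for lookback in (1, 2, 5):             # years of history to train on
    Xtr = np.vstack(X[years - 1 - lookback:years - 1])
    ytr = np.concatenate(y[years - 1 - lookback:years - 1])
    m = LogisticRegression().fit(Xtr, ytr)
    auc = roc_auc_score(y_test, m.predict_proba(X_test)[:, 1])
    print(f"train on last {lookback} year(s): AUC {auc:.3f}")
```

Here, more history isn’t automatically better: once the underlying relationships drift, the old years actively mislead the model. That is the “expiration date” showing up empirically.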

The Wired.com article above discusses data that MUST be removed from a database that powers an algorithm, perhaps because it was illegally acquired, or perhaps because a lawsuit demands it.
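For most models, the only guaranteed way to remove a record’s influence is “exact unlearning”: delete the rows and retrain from scratch; much of the research the article describes is about making that cheaper. A minimal sketch, with hypothetical record IDs:

```python
# Sketch of "exact unlearning": delete the contested records, then
# retrain from scratch. The record IDs to forget are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
record_ids = np.arange(len(y))

def retrain_without(X, y, record_ids, ids_to_forget):
    """Drop the records that must be forgotten, retrain from scratch."""
    keep = ~np.isin(record_ids, list(ids_to_forget))
    return LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

model = LogisticRegression(max_iter=1000).fit(X, y)        # original model
model = retrain_without(X, y, record_ids, {17, 42, 256})   # after a takedown
```

Trivial for a logistic regression on a thousand rows; prohibitively expensive for a large neural network trained over weeks, which is why “machine unlearning” is a research field at all.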

Humans need to forget. What about algorithms?

Isn’t human memory about selective attention, selective use of memory? Wouldn’t a human’s perfect memory be the enemy of efficient and effective thinking? I’ve read that recalling a memory slightly changes the memory. Why do we work this way? Is that better for us?

Is there a lesson here for what we are building in silico?

CMIO’s take? As we build predictive analytics, working toward a “thinking machine”, consider: what DON’T we know about memory and forgetting? Are we missing something fundamental in how our minds work as we build silicon images of ourselves? What are you doing in this area? Let me know.

Sepsis, Machine Learning and the Centaur (my SMILE conference talk)

Find out: What is a centaur and what does it have to do with healthcare? What are the criteria for a good machine learning project? What is the role of a virtual health center with predictive models? And most importantly: What ukulele song goes with machine learning?

Here are the slides for my talk given at SMILE (Symposium for Machine learning, ImpLementation and Evaluation). The slides are mostly self-explanatory. You can also watch my talk on YouTube. Here is a PDF of the entire deck.

Making clinicians worthy of medical AI: Lessons from Tesla… (statnews)

Novel idea: ensure docs KNOW how to operate AI (!) (image: Ethan Miller/Getty Images, via STAT News)

Here is a different take on AI in healthcare: train clinicians in the limitations of AI, and only allow those who understand those limitations to use it. Make savvy clinicians better. Don’t give it to all clinicians.

This is a throwback to our experience with Dragon speech recognition over the past decade: DON’T give Dragon to a clinician struggling with computer use; instead, give Dragon to a clinician who is computer-savvy and understands its limitations.

But (in the early years), give the non-computer-savvy clinician an “opt out” to dictate their notes by dictaphone or telephone, and gradually bring them along.

We gave several non-computer-savvy docs access to Dragon in those early years, and our hair stood on end when we read their notes later: they clearly were NOT proofreading their work and were assuming the Dragon engine transcribed perfectly.

Back to the future.

CMIO’s take? Be careful out there, everyone, both on the road with Tesla, and in healthcare with AI.

An AI-human bill of rights?

https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/

Read the Wired.com article. In brief, it outlines the emerging risks of relying on AI (artificial intelligence) tools that can unintentionally introduce bias and other harmful consequences.

This is a nascent class of technology that, at its root, is often a black box: what is inside is opaque to us as users, and often even to us as designers.

I feel this critique personally. Having participated in the design of several AI tools in healthcare, I worry that, although we do our best, we don’t know what we don’t know.

CMIO’s take? I have no “best practice” lessons to impart here, on bias and the unknown. Do you? Please share. This is a big mountain we are about to climb, and we need to help each other.

Predictive Analytics or Predictive Shrub?

What can informatics learn from a plant?

https://www.wired.com/story/the-humble-shrub-thats-predicting-a-terrible-fire-season/

The chamise plant in California is a harbinger of a high-risk fire season this summer. A fascinating analysis of ecology-based prediction.

CMIO’s take? What can informatics learn from a plant? Sometimes simple methods can be effective.
