Infinite AI Interns for Everybody (Wired.com)

https://www.wired.com/story/artificial-intelligence-labor/

Here is our next AI challenge as our jobs undergo gradual transformation: how will we, as knowledge workers in informatics, accommodate the growing sophistication of narrow AI assistants? They schedule appointments, help with spelling and grammar, and now write fluid manuscripts based on the library we point them to.

Teachers are starting to change “take-home” assignments to in-class writing with no internet connection, to ensure students don’t outsource their thinking and writing to an AI chatbot.

What will we do? Can an AI replace us in writing the “one-pager” that summarizes thinking and succinctly and convincingly makes the case for change? Is Machiavelli vulnerable to being toppled? Do we no longer have need for governance and leadership if we can outsource thinking and fluency to an AI? Will my AI go up against your AI in the battle for mindshare?

Yes.

CMIO’s take? Yet again, time to re-imagine our jobs with the tools we have at hand.

Meta introduces AI-generated video

Another take on ‘AI is coming for you and your job’

https://www.technologyreview.com/2022/09/29/1060472/meta-text-to-video-ai/

Well, you thought AI-generated static images were both cool and concerning. Now Meta (formerly Facebook) introduces AI-generated VIDEO based on simple text prompts. If distinguishing fake from real was hard before, where is this going?

DALL-E Mini Is the Internet’s Favorite AI Meme Machine (Wired.com)

https://www.wired.com/story/dalle-ai-meme-machine/

How can this be real? Read the story above at Wired.com.

I typed “Elephants breakdancing at midnight” into the prompt, and seriously, about a minute later I got this on my screen.

Let’s not go into why that sentence came out of my head through my fingers, and instead focus on the technology. There is an AI, with the internet as an infinite visual resource, that can now take a brief text prompt and render it for your viewing pleasure.

This is mind-blowing. Here’s “Frolicking Flying Cars”

Here’s “A family of dolphins using iPhones in the style of Picasso”

Here’s “Speed skating in the style of a Chinese landscape painting”

 

What is real? What is imaginary? Who drew this? Try it yourself at http://www.craiyon.com!

A.I. Is Mastering Language. Should We Trust What It Says? (nytimes)

GPT-3, a large language model, can write text that astounds. Things are happening, people. Are we paying attention? #hcldr #hitsm #whyinformatics

I’m always on the lookout for developments in computing outside of healthcare. This is a longer read, but so thought-provoking:

  • What is a Large Language Model and why is it only recently important?
  • What is GPT-3 and what are all these magical things it supposedly does?
  • Can GPT-3 digest 1,000 progress notes from a patient chart, say, and write a cogent one-page summary for a clinician to digest rapidly? I’d pay for THAT. (A hypothetical sketch of that workflow follows this list.)
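
Purely as a thought experiment, here is what that wished-for workflow might have looked like with the OpenAI completion API as it existed circa 2022. The model name, the notes, and the one-shot prompt are placeholders I invented; in reality, 1,000 notes would blow past the model’s context window and force chunked, summarize-the-summaries processing, and nothing here is validated for clinical use.

```python
# Hypothetical sketch only: summarizing chart notes with the circa-2022
# OpenAI completion API. Notes, model name, and prompt are placeholders.
import openai

openai.api_key = "sk-..."  # your API key

notes = ["Progress note 1 ...", "Progress note 2 ..."]  # imagine 1,000 of these

prompt = (
    "Summarize the following progress notes into a cogent one-page brief "
    "a clinician can digest rapidly:\n\n" + "\n\n".join(notes)
)

# A real chart would exceed the ~4,000-token context limit of this model,
# so you would summarize in chunks and then summarize the summaries.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=500,
    temperature=0.2,
)
print(response.choices[0].text.strip())
```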

“The underlying idea of GPT-3 is a way of linking an intuitive notion of understanding to something that can be measured and understood mechanistically,” he finally said, “and that is the task of predicting the next word in text.”

Prompt the algorithm with a sentence like “The writer has omitted the very last word of the first . . . ” and the guesses will be a kind of stream of nonsense: “satellite,” “puppy,” “Seattle,” “therefore.” But somewhere down the list — perhaps thousands of words down the list — the correct missing word appears: “paragraph.” The software then strengthens whatever random neural connections generated that particular suggestion and weakens all the connections that generated incorrect guesses. And then it moves on to the next prompt. Over time, with enough iterations, the software learns.

(Ilya Sutskever, quoted in the article)
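
To make that training loop concrete, here is a minimal sketch of next-word prediction in PyTorch. The toy corpus, tiny model, and learning rate are all illustrative; this shows the shape of the objective, not how GPT-3 is actually built or sized.

```python
import torch
import torch.nn as nn

# Toy corpus; each word's training target is simply the word that follows it.
corpus = "the writer has omitted the very last word of the first paragraph".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}

xs = torch.tensor([stoi[w] for w in corpus[:-1]])  # current words
ys = torch.tensor([stoi[w] for w in corpus[1:]])   # next words (the labels)

# A deliberately tiny "language model": embedding -> linear scores over vocab.
model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(xs)          # the model's guesses for every next word
    loss = loss_fn(logits, ys)  # low probability on the true next word = high loss
    opt.zero_grad()
    loss.backward()             # strengthen connections behind good guesses,
    opt.step()                  # weaken those behind bad ones

# After training, the top guess following "first" should be "paragraph".
probs = model(torch.tensor([stoi["first"]])).softmax(dim=-1)
print(vocab[probs.argmax().item()])
```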

There is all this discussion of “is this a sophisticated parrot” or “is this truly an artificial intelligence capable of generating new ideas.” Well, in our Electronic Health Record world, even the first would be transformative, if we can get an AI to digest a hyperobject-scale set of data into an executive brief. Just that.

CMIO’s take? This is an important article by Steven Johnson in the New York Times Magazine. Watch this space: the development of GPT-3 heralds a qualitative improvement in AI language models, so much so that we feel compelled to teach it values and culture lest it start spewing hatred it learns on the internet. A worthwhile long read.

Can Learning Machines Unlearn? (wired.com)

https://www.wired.com/story/machines-can-learn-can-they-unlearn/

How much data?

I’ve been thinking about this a lot. In our recent work designing predictive algorithms using linear regression, neural networks, and similar approaches, we’ve discussed the use of EHR (electronic health record) data, and we have had some success using such algorithms to reduce deaths from sepsis (blog post from 10/6/2021).

One of many problems is “how much data?” It has been interesting to work with our data science colleagues on creating a model and then carefully slimming it down so that it can run on smaller data sets: more efficiently, more quickly, with less computing power.
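
For flavor, here is a minimal sketch of that slimming-down, with synthetic data and scikit-learn standing in for our actual pipeline: fit a full model, keep only the strongest features, and retrain a smaller one.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # 1,000 encounters, 50 candidate EHR features
# Synthetic outcome driven by only two of the fifty features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)

full = LogisticRegression(max_iter=1000).fit(X, y)

# Keep only the ten features with the largest coefficients, then retrain.
selector = SelectFromModel(full, prefit=True, max_features=10, threshold=-np.inf)
X_slim = selector.transform(X)
slim = LogisticRegression(max_iter=1000).fit(X_slim, y)

print(f"features: {X.shape[1]} -> {X_slim.shape[1]}")  # 50 -> 10
```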

Forgetting?

A related problem is “when do we need to forget?” EHR data ages; the way clinicians record findings changes; our understanding of diseases changes; the diseases themselves change. (Delta variant, anyone?)

Will our models perform worse if we use data that is too old? Will they perform better because we gave them more history? Do our models have an “expiration date?”
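One way to probe for an expiration date, sketched below with synthetic data: freeze a model trained on an older window, score successively newer windows, and watch the metric decay. (The drift here is simulated; in practice you would slice real encounters by date.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_window(drift):
    # Simulate gradual drift: the signal slowly migrates between features.
    X = rng.normal(size=(500, 5))
    signal = (1 - drift) * X[:, 0] + drift * X[:, 1]
    y = (signal + rng.normal(scale=0.5, size=500) > 0).astype(int)
    return X, y

X_train, y_train = make_window(drift=0.0)  # the "old" training window
model = LogisticRegression().fit(X_train, y_train)

for year, drift in [(2019, 0.2), (2020, 0.4), (2021, 0.6)]:
    X_test, y_test = make_window(drift)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(year, round(auc, 2))  # AUC slides downward as the data drifts
```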

The Wired.com article above talks about data that was perhaps illegally acquired, or that, after a lawsuit, MUST be removed from the database that powers an algorithm.
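
Today the blunt-instrument answer is to drop the offending rows and retrain from scratch, which is exactly what the research the article describes tries to avoid (for example, by sharding training so only one shard needs retraining). A minimal sketch of that baseline, with synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A takedown request arrives for certain records (indices are illustrative).
to_forget = np.array([5, 17, 42])
keep = np.setdiff1d(np.arange(len(X)), to_forget)

# Retraining on the remainder is the only sure way to guarantee the model
# carries no trace of the removed rows -- and it is expensive at scale.
model_after = LogisticRegression().fit(X[keep], y[keep])
```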

Humans need to forget. What about algorithms?

Isn’t human memory about selective attention, selective use of memory? Wouldn’t a human’s perfect memory be the enemy of efficient and effective thinking? I’ve read that recalling a memory slightly changes the memory. Why do we work this way? Is that better for us?

Is there a lesson here for what we are building in silico?

CMIO’s take? As we build predictive analytics, working toward a “thinking machine”, consider: what DON’T we know about memory and forgetting? Are we missing something fundamental in how our minds work as we build silicon images of ourselves? What are you doing in this area? Let me know.

Sepsis, Machine Learning and the Centaur (my SMILE conference talk)

Find out: What is a centaur and what does it have to do with healthcare? What are the criteria for a good machine learning project? What is the role of a virtual health center with predictive models? And most importantly: What ukulele song goes with machine learning?

Here are the slides from my talk given at SMILE (Symposium for Machine learning, ImpLementation and Evaluation). The slides are mostly self-explanatory. You can also watch the talk on YouTube. Here is a PDF of the entire deck.

Making clinicians worthy of medical AI: Lessons from Tesla… (statnews)

Novel idea: ensure docs KNOW how to operate AI (!) (image: ETHAN MILLER/GETTY IMAGES, via Statnews)

Here is a different take on AI in healthcare: train clinicians, and allow only those who understand AI’s limitations to use it. Make savvy clinicians better. Don’t give it to all clinicians.

This is a throwback to our experience with Dragon speech recognition over the past decade: DON’T give Dragon to a clinician struggling with computer use; instead, give it to a computer-savvy clinician who understands Dragon’s limitations.

But (in the early years) give the non-computer-savvy clinician an “opt out” to dictate their notes by dictaphone or telephone, and gradually bring them along.

We gave several non-computer-savvy docs access to Dragon in those early years, and our hair stood on end when we read their notes later: they were clearly NOT proofreading their work, and were assuming the Dragon engine transcribed perfectly.

Back to the future.

CMIO’s take? Be careful out there, everyone, both on the road with Tesla, and in healthcare with AI.

An AI-human bill of rights?

https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/

Read the Wired.com article. In brief, it outlines the emerging risks of relying on AI (artificial intelligence) tools that can unintentionally encode bias and produce other unintended consequences.

This is a nascent class of technology that, at its root, is often a black box: what is inside is opaque to us as users, and often even to us as designers.

I feel this critique personally. Having participated in the design of several AI tools in healthcare, I worry that, although we do our best, we don’t know what we don’t know.

CMIO’s take? I have no “best practice” lessons to impart here, on bias and the unknown. Do you? Please share. This is a big mountain we are about to climb, and we need to help each other.
