I’m always on the lookout for developments in computing outside of healthcare. This is a longer read, but so thought-provoking:
- What is a Large Language Model and why is it only recently important?
- What is GPT-3 and what are all these magical things it supposedly does?
- Can GPT-3 digest 1000 progress notes of a patient chart, say, and write a cogent 1-page summary for a clinician to digest rapidly? I’d pay for THAT.
“The underlying idea of GPT-3 is a way of linking an intuitive notion of understanding to something that can be measured and understood mechanistically,” Ilya Sutskever finally said, “and that is the task of predicting the next word in text.”
Prompt the algorithm with a sentence like “The writer has omitted the very last word of the first . . . ” and the guesses will be a kind of stream of nonsense: “satellite,” “puppy,” “Seattle,” “therefore.” But somewhere down the list — perhaps thousands of words down the list — the correct missing word appears: “paragraph.” The software then strengthens whatever random neural connections generated that particular suggestion and weakens all the connections that generated incorrect guesses. And then it moves on to the next prompt. Over time, with enough iterations, the software learns.
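The loop the article describes (guess, find the correct word down the list, strengthen the connections that produced it, weaken the rest) can be sketched in miniature. This is a toy Python sketch of my own, not how GPT-3 actually works; a real model adjusts billions of parameters by gradient descent rather than keeping a table of scores, but the shape of the idea is the same:

```python
from collections import defaultdict

# Toy sketch (all names are hypothetical): each (context, candidate-word)
# pair has a score; the model "guesses" by ranking the vocabulary, then
# nudges scores toward the correct word and away from the wrong ones.

VOCAB = ["satellite", "puppy", "Seattle", "therefore", "paragraph"]
LEARNING_RATE = 0.1

weights = defaultdict(float)  # (context, candidate) -> score

def rank_guesses(context):
    """Return the vocabulary sorted from highest- to lowest-scoring guess."""
    return sorted(VOCAB, key=lambda w: weights[(context, w)], reverse=True)

def train_step(context, correct_word):
    """Strengthen the connection for the right word, weaken the others."""
    for w in VOCAB:
        if w == correct_word:
            weights[(context, w)] += LEARNING_RATE
        else:
            weights[(context, w)] -= LEARNING_RATE / (len(VOCAB) - 1)

context = "omitted the very last word of the first"
for _ in range(20):  # "over time, with enough iterations, the software learns"
    train_step(context, "paragraph")

print(rank_guesses(context)[0])  # after training, "paragraph" ranks first
```

Before training, every candidate is equally plausible nonsense; after repeated nudges, the correct word climbs to the top of the list. That reranking, scaled up to a vocabulary of tens of thousands of words and an internet's worth of prompts, is the whole trick.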
There is all this discussion of “is this a sophisticated parrot” versus “truly an artificial intelligence capable of generating new ideas.” Well, in our Electronic Health Record world, even just the first would be transformative, if we can get an AI to digest a hyperobject of a data set into an executive brief. Just that.
CMIO’s take? This is an important article by Steven Johnson in the New York Times Magazine. Watch this space: the development of GPT-3 heralds a qualitative improvement in AI language models, so much so that we feel compelled to teach it values and culture lest it start spewing hatred it learns on the internet. This is a worthwhile long read.