Hyperdimensional Computing Reimagines AI (wired.com)

Either I’m full of myself and over-optimistic about what I can learn, or I’m beginning to understand hyperdimensional computing as explained in this Wired article. If so, it’s super cool what this new way of computing could hold for the “explainability” of AI models in the future…

From wired.com

https://www.wired.com/story/hyperdimensional-computing-reimagines-artificial-intelligence/

In college I got all the way through Calculus I and II and into differential equations and a little into matrices and vectors. I can honestly say I have used NONE of that knowledge, and it has withered completely away in the intervening decades.

THIS article got me interested. Our contemporary problem: Large Language Models, at their root artificial neural networks, compute in a way that is very power-intensive. We are seeing this already in how OpenAI and others worry about scaling LLMs to more users while moving the sophistication upward from GPT 3.0 to 3.5 to 4.0 with more and more layers.

Vector, or hyperdimensional, computing holds the promise of changing the paradigm: tracking findings, storing concepts, and manipulating them more easily, not in a flat table of data but in, let’s say, a 10,000-dimensional vector space.
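The core operations are simpler than the phrase “10,000-dimensional vector space” suggests. Here is a minimal Python sketch (my own illustration, not code from the article) of the standard hyperdimensional-computing recipe: random bipolar hypervectors, binding by elementwise multiplication, bundling by majority vote, and a similarity score to query what was stored.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # dimensionality typical of hyperdimensional computing

def hypervector():
    """Random bipolar (+1/-1) vector; any two are nearly orthogonal at D=10,000."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind two concepts (e.g., a role and its filler) via elementwise multiply."""
    return a * b

def bundle(*vs):
    """Superpose several vectors into one; the sign (majority vote) keeps it bipolar."""
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    """Normalized dot product: near 0 for unrelated vectors, clearly positive for related."""
    return float(a @ b) / D

# Encode a tiny record, "color=red" and "shape=round", into ONE vector.
color, red = hypervector(), hypervector()
shape, round_ = hypervector(), hypervector()
record = bundle(bind(color, red), bind(shape, round_))

# Unbinding with "color" recovers something measurably close to "red" --
# this recoverability is part of what makes the representation inspectable.
probe = bind(record, color)
print(similarity(probe, red))    # clearly positive (related)
print(similarity(probe, round_)) # near zero (unrelated)
```

The punchline for explainability: because every stored concept can be probed back out of the composite vector with simple algebra, you can ask the representation what it contains, something a tangle of neural-network weights does not offer.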

CMIO’s take? Although this sounds like a lot of woo-woo, read the article to get the lowdown. Read slowly. It took me some time to begin to get it. The reward: maybe glimpsing a future where AI models can be made explainable, something not possible at present with LLMs. Could be game-changing.

The Fatal Uber Self Driving Car Saga is Over (NYTimes) and Automation Complacency

The classic case of the Uber self-driving car, its in-person monitor, and the fatal pedestrian accident is now legally concluded. A sober reminder of ALL OUR complacencies. Or, paraphrasing Shakespeare: “But soft you now, the fair Ophelia! Nymph, in thy orisons be all my complacencies remembered.”

https://www.wired.com/story/ubers-fatal-self-driving-car-crash-saga-over-operator-avoids-prison/

This made news a few years ago: a self-driving Uber, under testing with a monitoring person in the driver’s seat, hit and killed a pedestrian (who was not at a crosswalk).

The monitoring driver pleaded guilty to reckless endangerment and will avoid prison time.

This reminds us of the constant and growing influence of AI and automation on our daily lives. We all become less vigilant when an assistant gets really good: maybe 99% effective, maybe 99.9, 99.999, as with self-driving vehicles. What happens to the remaining 0.001%?
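To make that tail concrete (the volume below is a hypothetical round number, not a figure from the article): even 99.999% reliability leaves a steady trickle of failures once the encounter count gets large.

```python
reliability = 0.99999    # 99.999% per-encounter success rate
encounters = 10_000_000  # hypothetical annual encounter volume

# Expected number of failures hiding inside "five nines"
expected_failures = encounters * (1 - reliability)
print(round(expected_failures))  # 100
```

A hundred failures a year is invisible to any one user, which is exactly why the complacency sets in.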

Recently I was criticized by a medical colleague because “I wrote a prescription for a muscle relaxer, and it caused a drug interaction with the patient’s birth control medication. Epic did NOT stop me, and it should have.” The implication was, that it was Epic’s fault, and thus, those who configure Epic (CT Lin and his henchmen).

CMIO’s take? Classic automation complacency. We give the automation power over our daily lives and we stop watching carefully. Have you seen this in your work? Let me know.

ChatGPT in the classroom: professors adapt (wired.com)

If you can’t beat ’em, join ’em.


from wired.com

https://www.wired.com/story/dont-want-students-to-rely-on-chatgpt-have-them-use-it/

I love this. Time for humans to adapt. How might we use this in healthcare? Could students / learners / perhaps even patients take such an assistant (currently not good enough to be the gold standard) and use it as a tool, an assistant, a learning partner, INSTEAD of rebelling against the inevitable tide of the future?

Epic Man – 2023: world premiere at AMIA CIC

The latest ukulele parody. Does it mention note bloat? GPT?

The Clinical Informatics Conference has come to a close in Chicago. I am always gratified by the community of clinical informaticists who come together to share ideas, to make each other better.

I had a chance to participate in 2 panels: Blowing up the Classroom by deconstructing training / Putting the Roadsigns on the Highway. Also: Redesigning the Inbasket, along with colleagues from UCSF, Epic and MedStar.

Here’s my contribution to the fun; an updated version of Epic Man.

The author still believes he can sing.

Automation Complacency, The Stepladder of AI in EHRs, “Writing a note is how I think”. WHAT NOW?

A navel-gazing reflection on GPT, human cognitive effort, and the stepladder to the future. Where do YOU stand?

The image above was generated by DALL-E embedded in the new Bing, with the prompt “Doctors using a computer to treat patients, optimistic futuristic impressionistic image”. Wow. Not sure what the VR doctor coming out of the screen is doing.

Thanks to Dr. Brian Montague for prompting this post with his quote during a recent Large PIG meeting:

I find that I do a lot of my thinking when I write my progress note. If/when ChatGPT starts to write my note, when will I do that thinking?  — Brian Montague MD

That stopped me in my tracks.

We are so hell-bent on simplifying our work and reducing our EHR burden that we sometimes forget this work is MORE than just pointing, clicking, and typing.

It is also about THINKING. It is about assembling the data, carefully coaxing information and patterns out of our patients through skillful interviewing, parsimonious lab testing, and careful physical examination. It is how we, as physicians and APPs, use our bodies and minds to craft an image of the syndrome, the disease: our hidden opponent.

Just like inserting a PC into the exam room changed dynamics, inserting GPT assistants into the EHR causes us to rethink … everything.

Pause to reflect

First, I think we should recall the technology adoption curve.

I fully acknowledge that I am currently dancing on the VERY PEAK of the peak of over-inflated expectations. Yes. That’s me right at the top.

Of concern: viewing the announcements this week from Google, Microsoft, and many others gives me chills (sometimes good, sometimes not) about what is coming. Automated deep-fake videos? Deep-fake images? Patients able to use GPT to write “more convincing” requests for … benzodiazepines? Opiates? Other controlled meds?

AND YET, think of the great things coming: GPT writing a first draft of the unending Patient Advice Requests coming to doctors. GPT writing a discharge summary based on events in a hospital stay. GPT gathering data relating to a particular disease process out of the terabytes of available data.

And where do we think physician/APP thinking might be impacted by excessive automation?

Automation Complacency

I refer you back to my review of the book “The Glass Cage” by Nicholas Carr. As I said before, although it was written to critique automation in the aircraft industry, I took it very personally, as an attack on my whole career. I encourage you to read it.

In particular, I found the term “automation complacency” a fascinating and terrifying concept: a user who benefits from automation will start to attribute MORE SKILL to the automation tool than it actually possesses, a COMPLACENCY of “don’t worry, I’m sure the automation will catch me if I make a mistake.”

We have already seen this among our clinicians, one of whom complained: “Why didn’t you warn me about the interaction between birth control pills and muscle relaxants? I expected the system to warn me of all relevant interactions. My patient had an adverse reaction because you did not warn me.”

Now, we have this problem. We have for years been turning off and reducing the number of interaction alerts we show to prescribers precisely because of alert fatigue. And now, we have complaints that “I want what I want when I want it. And you don’t have it right.” Seems like an impossible task. It IS an impossible task.

Thank you to all my fellow informaticists out there trying to make it right.

GPT and automation: helping or making worse?

Inserting a Large Language Model like GPT, which understands NOTHING but seems really fluent and sounds like an expert, could be helpful, but it could also lull us into worse “automation complacency.” Even though we are supposed to (for now) read everything the GPT engine drafts, and we take full ownership of the output, how long will that last? Even today, I admit, as do most docs, that I use Dragon speech recognition and don’t read the output as carefully as I might.

Debating the steps in clinician thinking

So, here is where Dr. Montague and I had a discussion. We both believe it is true that a thoughtful, effective physician/APP will, after interviewing the patient and examining them, sit with the (formerly paper) chart, inhale all the relevant data, assemble it in their head. In the old days, we would suffer paper cuts and inky fingertips in this process of flipping pages. Now we just get carpal tunnel and dry eyes from the clicking, scrolling, scanning and typing.

Then when we’ve hunted and gathered the data, we slowly, carefully write an H/P or SOAP note (ok, an APSO-formatted SOAP note). It will include the Subjective (including a timeline of events), Objective (including relevant exam, lab findings), Assessment (assembly of symptoms into syndromes or diseases) and Plan (next steps to take).

During this laborious note-writing, we often come up with new ideas, new linkages, new insights. It is THIS PIECE we worry most about. If GPT can automate many of these pieces, WHERE WILL THE THINKING GO!?! I do not trust that GPT is truly thinking. I worry that the physician will instead STOP THINKING.

Then THERE IS NO THINKING.

Is this a race-to-the-bottom, or a competition to see who can speed us up so much that we are no longer healers, just fast documenters, since we are so burned out?

Who will we be?

Radio vs TV vs Internet

My optimistic thought is this. Instead of GPT coming to take our jobs, I’m hopeful GPT becomes a useful assistant, sifting through the chaff and highlighting the useful information in a data-rich, information-poor chart.

Just like the radio industry feared that TV would put them out of business (they didn’t), and TV feared that the Internet would put them out of business (they didn’t), the same, I think, goes for physicians, established healthcare teams, and GPT-automation tools.

Lines will be drawn (with luck, WE will draw them), and our jobs will change substantially. Just like emergent (unpredictable) properties like “GPT hallucinations” have arisen, we must re-invent our work as unexpected curves arise while deploying our new assistants.

Another Bing-Dall-E image of physicians at a computer. In the future, a doctor will apparently have more legs than before.

A possible step-ladder

I think physician thinking really occurs during assembly of the Assessment and Plan, and that the early days of GPT assistance will begin in the Subjective and Objective sections of the note. GPT could, for example:

SIMPLE
  • Subjective: Assemble a patient’s full chart on demand for a new physician/APP meeting a patient in clinic, or on admission to the hospital, focusing on previous events it can find in the local EHR or across a health information exchange network, into an easily digestible timeline. Include a progression of symptoms, past history, and past medications.
  • Objective: Filter a patient’s chart data to assemble a disease-specific timeline and summary: “show me all medications, test results, symptoms related to chest infection in the past year”
  • Then leave the assessment and planning to physician/APP assembly and unassisted writing. This would leave clinician thinking largely untouched.
MODERATE
  • Subjective and Objective: GPT could take the entire chart, propose the major diseases and syndromes it detects by pattern matching, and assemble a brief one-page summary with supporting evidence and timeline, with citations.
  • Assessment and Plan: Suggest a prioritized list of Problems, severity, current state of treatment, suggested next treatments, based on a patient’s previous treatments and experience, as well as national best practices and guidelines. Leave the details, treatment adjustments and counseling to physicians/APPs interacting with the patient. Like Google Bard, GPT may suggest ‘top 3 suggestions with citations from literature or citations from EHR aggregate data’ and have the physician choose.
DREAMY/SCARY
  • Subjective and Objective: GPT could take the Moderate tools and add detection and surveillance for emerging diseases not yet described (the next Covid? the next Ebola? new-drug-associated myocarditis? tryptophan eosinophilia-myalgia syndrome, not seen since 1989?) for public health monitoring. Step into the scanner for full-body photography, CT, MRI, PET, with a comprehensive assessment in 1 simple step.
  • Assessment and Plan: GPT diagnoses common and also rare diseases by memorizing thousands of clinical pathways and best-practice algorithms. GPT initiates treatment plans, needing just a physician/APP cosignature.
  • A/P: Empowered by Eliza-like tools for empathy, GPT takes on counseling the patient, discovering which conversational techniques engender the most patient behavior change. Recent studies already indicate that GPT can be rated more empathetic than doctors responding to online medical queries.

CMIO’s take? First things first. While we can wring our hands about “training our replacements”, there is lots yet to do and discover about our newest assistants. Shall we go on, eyes open?

Chatbot perspective from an insider (Rodney Brooks and Wired.com)

What Will Transformers Transform?

Thanks, Rodney, for a thoughtful discussion of:

  • The Hype Cycle (peak of overinflated expectations)
  • The caution needed as our tools grow in skill exponentially
  • The ongoing risk of hallucination and unexpected errors in chatbots
  • The “grounding” problem with AI and robots

I particularly love the following quote:

Roy Amara, who died on the last day of 2007, was the president of a Palo Alto-based think tank, the Institute for the Future, and is credited with saying what is now known as Amara’s Law:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

I’m feeling an upward acceleration of AI skill with GPT applications, which could open up substantial risks as well as benefits.

For example, in the EHR space, there is discussion that GPT could take a patient and physician’s recorded conversation and automatically write a progress note, qualitatively more accurate and much faster than current commercial tools. Further, it could potentially summarize weeks of progress notes on hospitalized patients and write the discharge summary, a document that, when well-written, can take a human many hours of work. Or even, receive patients’ incoming MyChart messages and clinical questions and “reply to the patient in the voice of their clinician” based on the decade of writing by that clinician in the EHR.

Sure, these seem great. How about the potential deluge of GPT agents writing notes, requests ON BEHALF of patients, or other authors, that could junk up our systems? If 183,000 incoming patient messages is a lot (current monthly patient message volume at UCHealth), what if GPT somehow enabled 10x that number?

How much discussion will be GPT talking to GPT on behalf of employers, patients, or other discussants? I understand science fiction editors are now seeing a 10x increase in story submissions, with a SUBSTANTIAL FRACTION now written by GPT based on prior stories.

Wired.com, too, has written about this:

https://www.wired.com/story/chatgpt-pause-ai-experiments-open-letter/

“I worry that we are very much in a ‘move fast and break things’ phase,” says Holstein, adding that the pace might be too quick for regulators to meaningfully keep up. “I like to think that we, in 2023, collectively, know better than this.”

CMIO’s take? Take a breath, everyone. It’s going to be a bumpy ride.

I Saw the Face of God in a TSMC Factory (Wired)

Nanometer scale production, Taiwan / Ukraine geopolitics, Huawei and spying in 5G, remote surgery, democracy at risk. What more can you ask for from a Wired Longread?

https://www.wired.com/story/i-saw-the-face-of-god-in-a-tsmc-factory/

I see myself in this article. Hints of:

  • My Asian background and the hunger for achievement
  • The constant threat of conflict
  • The importance of being comfortable with ambiguity
  • The unease of global tensions and political disagreement
  • The cultural clash between Asian and American habits
  • Moore’s law playing out in our lifetime
  • Lithography as art and the pinnacle of tech
  • The necessity of trust to drive innovation and growth
  • The knowledge that TSMC tech powers the vast majority of devices and servers without which American healthcare’s Electronic Health Records cannot exist.

Yes, it is a long read. This is what deeply researched, wide-ranging, thoughtful writing is about.

I’m a Taiwanese-born American citizen. Taiwan Semiconductor Manufacturing Company (TSMC) probably isn’t a household name for most, and yet it produces the vast majority of the most advanced chips for the most advanced smartphones, laptops, and computers. TSMC makes me proud to have been born on that tiny island.

CMIO’s take? I enjoyed this very much. This is a brilliant read. You may not agree with all of it, but it is a fascinating journey into the interconnectedness of our personal relationships, our technologies, our trust, our nations and leaders and nothing less than the future of our world.

AI will make human art more valuable (Wired.com)

What is it about human made art that makes us prefer that to AI art?

https://www.wired.com/story/art-artificial-intelligence-history/

Partly reassuring, partly a cognitive puzzle. It turns out that if shown the SAME images labeled differently, “made by human” vs. “made by robot,” we will prefer the former. What does that say about us?

Infinite AI Interns for Everybody (Wired.com)

https://www.wired.com/story/artificial-intelligence-labor/

Here is our next AI challenge, as our jobs undergo gradual transformation. How will we as knowledge workers in informatics accommodate the growing sophistication of narrow AI assistants? Scheduling appointments, helping with spelling and grammar, now writing fluid manuscripts based on the library we point them to?

Teachers are starting to change “take home” assignments to in-class writing with no internet connection, to ensure students don’t outsource their thinking/writing to an AI chatbot.

What will we do? Can an AI replace us in writing the “one-pager” that summarizes thinking and succinctly and convincingly makes the case for change? Is Machiavelli vulnerable to being toppled? Do we no longer have need for governance and leadership if we can outsource thinking and fluency to an AI? Will my AI go up against your AI in the battle for mindshare?

Yes.

CMIO’s take? Yet again, time to re-imagine our jobs with the tools we have at hand.

META introduces AI-generated video

Another take on ‘AI is coming for you and your job’

https://www.technologyreview.com/2022/09/29/1060472/meta-text-to-video-ai/

Well, you thought AI-generated static images were both cool and concerning. Now, META (formerly Facebook) introduces AI-generated VIDEO based on simple user inputs. If distinguishing fake from real was hard before, where is this going?
