“Harvard Learning”: Preserving learning in the age of AI shortcuts (harvard.edu)

This is a sobering reflection. How do we learn when getting information is so easy?

Preserving learning in the age of AI shortcuts

https://news.harvard.edu/gazette/story/2026/02/preserving-learning-in-the-age-of-ai-shortcuts/

Easy AI answers all the time?

How will we balance human learning against the constant temptation of “easy AI answers all the time”? Of 7,000 high school students surveyed, about 40% indicated that they had failed to resist the temptation to overuse AI on their schoolwork.

Self-regulation will be a crucial skill in our coming age of AI. OR at least, designing environments that support human self-regulation will be, because we are temptation-succumbing agents. At least, I am.

This is a great podcast; you can listen or read the transcript.

WiFi Blocking?

I like watching our best human thinkers grapple with the productive cognitive friction needed for human brains to encode hard-earned knowledge. It is not good enough to have a super-search algorithm find answers for you. There is something ineffable about being able to take a difficult problem and reason through it as a human.

At the same time, we know that younglings have access to all the AI models out there. No amount of “adult protection” or “WiFi blocking” or other pretend gatekeeping will keep smart kids from figuring out how to get to the forbidden fruit.

So the question remains: how would informaticists, physician-teachers, all teachers, coaches, and mentors suggest we move forward when EVERYTHING is changing and this new AI entity (or entities) is everywhere?

Idea 1: Invite our learners into the problem.

We recently struggled with whether to grant our medical students access to Abridge, the ambient-note solution offered at the University of Colorado and UCHealth. Shouldn’t we prohibit students from using ambient notes? Don’t all of us teaching professors remember struggling to write comprehensible notes in our learning years, and now we can quickly and incisively think through hard problems by writing notes that get to the root of the patient’s pathophysiology? Who hasn’t worked on writing a note and, in the process, discovered an angle on the patient’s medical problem that was not obvious before starting to write?

Now, if ambient notes are done at the end of the visit, where does that cognitive friction go? Removing this friction is perhaps a slight problem (and a big benefit) for experienced clinicians. However, removing it for students and residents might impair their learning just when it is most needed to form neural pathways, knowledge, and … wisdom and judgement.

This year our graduate medical education leaders decided to give all students access, and then to let them choose: how WILL you use it, knowing that it might impair your learning and that there are as yet NO GUIDELINES? We must write these guidelines together. As a result, most students have chosen NOT to use ambient notes, for exactly that reason: they are in medical school for the training, NOT to simplify their work. This is a gratifying outcome.

Idea 2: Construct problems unsolvable by AI.

In this podcast, college professors describe writing problems specifically so that AI, at present, cannot solve them. This is difficult work, and perhaps unsustainable if AI continues to improve at dramatic rates. It is an interim solution. Better yet…

Idea 3: Learners explore the human/AI interface.

Assume that everyone has access to AI, and then ask questions that could not be asked before. Specifically, have our learners pose such questions, answer them using AI, and critique each other’s work, learning the field in a way not possible before.

In this podcast, college professors describe adjusting their curricula: instead of giving take-home exams that GPT can easily answer, they assign in-class work where students design mathematics problems that the AI cannot answer, and then have to work out an answer the long-hand way. As a result, professors are seeing the levels of learning and sophisticated understanding that come from exploring the space WITH an AI: all the extra reading needed to find the edges of what an AI can do, what the modern questions in mathematics are, and how to approach them. This, also, is a winning approach.

This man appears sad, but is he?

We have a new entity in every conversation

How might we keep human learning, human judgement, human embodied cognition front and center, when our old teaching methods no longer work? It is both terrifying and amazing to think what comes next.

5 questions clinicians should ask themselves when using AI in healthcare (JAMIA.org)

I like that smart colleagues are starting to write about automation bias, interruptions, skill decline. This academic paper poses 5 questions we should all be asking ourselves. So begins our hard work to welcome a new entity into the exam room, with careful forethought.

https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocaf123/8287602?login=true

From the discussion:

Effective AI integration requires human-centered and adaptive design. Five central research questions address: (1) what type and format of information AI should provide; (2) when information should be presented; (3) how explainable AI affects diagnostic decisions; (4) how AI influences automation bias and complacency; and (5) the risks of skill decay due to reliance on AI.

Read the article. Love the thoughtfulness and humanity.

The math on AI agents doesn’t add up (Wired.com)

I wish we had a crystal ball for what is coming. If we thought the acceleration and rapid change of the internet generation was uncomfortable, what do we call this generative-AI-powered generational change? Hyper-change? Articles like this are fascinating glimpses behind the scenes at AI companies as they think ahead to AI agents: what agents might be doing now, and what they will be able to do in a few weeks or months.

https://www.wired.com/story/ai-agents-math-doesnt-add-up/

Is AI Dulling our Minds? (news.harvard.edu)

This is very much on my mind as we race to embrace our newest AI assistants. If we outsource more and more of our tasks and thinking, does that make us duller? Yes and no, depending on …

Is AI dulling our minds?

I think this is a big deal; how will we work with our new AI partners?

We could either ask: “Tell me the answer about XYZ,” in which case we learn how to be great at copy and paste, but we don’t learn anything.

OR, we could ask: “Quiz me on the important principles of XYZ, and when I get it wrong, correct my understanding. Lead me to a deeper understanding of __”

When the AI acts as a helpful assistant, and I remain the primary learner, this helps.
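To make the contrast concrete, here is a minimal sketch of the two framings as chat prompts, in Python, assuming the OpenAI client library; the model name and the placeholder topic “XYZ” are mine, not a recommendation.

    # A minimal sketch: "answer mode" vs. "tutor mode" prompting.
    # Assumes the OpenAI Python client; the model name and the topic
    # "XYZ" are placeholders for illustration.
    from openai import OpenAI

    client = OpenAI()

    # Framing 1: "tell me the answer" -- we get great at copy and paste.
    answer_mode = [
        {"role": "user", "content": "Tell me the answer about XYZ."},
    ]

    # Framing 2: the AI as quizmaster -- the human stays the primary learner.
    tutor_mode = [
        {
            "role": "system",
            "content": (
                "Quiz me on the important principles of XYZ, one question "
                "at a time. When I get something wrong, correct my "
                "understanding and lead me to a deeper understanding."
            ),
        },
        {"role": "user", "content": "I am ready. Ask the first question."},
    ]

    response = client.chat.completions.create(model="gpt-4o", messages=tutor_mode)
    print(response.choices[0].message.content)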

There is a big difference between cognitive ease for repetitive tasks that we don’t care to get faster at, VERSUS productive cognitive friction for topics that we, as humans, want to understand better.

Struggle is important for learning.

I have spoken.

--Kuiil, the Ugnaught, from The Mandalorian

What if AI helped students learn, not just do (harvard.edu)

This is the beginning of the beginning. Teachers are starting to create generative AI that helps students learn, and NOT do the actual assignment. Imagine a chatbot where a student can ask questions outside of the classroom to understand concepts or ask it to critique initial writing. I like this very much. There is something here for medical residents and medical students, and indeed even practicing physicians. Tweaking the relationship between the AI assistant and the human is our hard work to come.

What if AI could help students learn, not just do assignments for them?

Holiday Songs Featuring Generative AI in Healthcare (didn’t see that coming, did you?)

One of the great pleasures in life is to catch people in a moment of joyful surprise. My schtick is ukulele EHR parody songs when people don’t expect them. Here are two: ChatGPT, sung to Sweet Caroline, about AI draft replies to patients, chart summaries, and ambient notes; then EHR Wonderland, about Abridge and the ambient-note experience. OK, strictly speaking, only one is a holiday song parody, but who’s counting? Happy holidays!

ChatGPT – to Sweet Caroline


EHR Wonderland – to Winter Wonderland

Vibe Coding (Wired.com)

The term “vibe coding,” I take to mean: an AI does the actual coding that a human tells it to do. Here’s a WIRED reporter learning to do just that. An insightful read.

https://www.wired.com/story/why-did-a-10-billion-dollar-startup-let-me-vibe-code-for-them-and-why-did-i-love-it


A High Schooler: AI is demolishing my education (Atlantic), and my reaction about AI in healthcare

A high schooler’s examples of how AI is ruining education in the classroom: what can healthcare AI learn from these examples? How do we pivot from no-win to win-win? Here’s my take.

https://www.theatlantic.com/technology/archive/2025/09/high-school-student-ai-education/684088/?gift=PBeYFZIia8gyZzvvApdrZHEndyptCKBp5r-R8daZseM&utm_source=copy-link&utm_medium=social&utm_campaign=share

Read the Atlantic article with my gift link above ^^

Generative AI in the classroom:

  • Cheating on take-home exams (chatbot will answer any exam question)
  • Cheating on in-class discussion (chatbot in real-time presents excellent discussion points on any topic)
  • Cheating in debate competition (chatbot helps teams prepare a rebuttal between tournament rounds)
  • The risk: that class on European History is actually a class on “How to copy and paste answers from AI” and no learning is achieved.

The rare positive story from the education field shows us a glimmer of hope. A professor assigns a homework task that explicitly asks the student to use generative AI to create a first draft, and then to use that draft to write a critique of the AI-written document, demonstrating command of the material and the ability to critique others’ work.

Generative AI (I’ll abbreviate Gen-AI) in healthcare:

  • Gen-AI composes an excellent progress note summarizing a physician and patient conversation, within seconds of the end of the visit, reducing physician cognitive and time burden
  • Gen-AI helps document more diagnoses, perhaps more accurately, because the note is captured and generated within seconds of the visit, not hours or weeks later when physician memory fades
  • Gen-AI replies to patient online questions by drafting a reasonable reply based on prior EHR (electronic health record) data, to reduce nurse and physician typing burden
  • Gen-AI helps summarize hundreds of pages of medical records to speed up nurse and physician work as they meet new patients with years of data

So far so good. These are all win-win scenarios: doctors and nurses work more quickly and easily, patients get better care.

It gets touchy:

  • Gen-AI helps doctors prepare “prior authorization” documents to advocate for patients getting insurers to pay for treatments. This is directly opposed by Gen-AI helping insurers deny these requests. This is a no-win situation.
  • Gen-AI helps doctors generate higher quality, more complete notes that show that complex care was provided to the patient, possibly improving reimbursement. This is directly opposed by Gen-AI helping insurers spot such changes. Another no-win situation.

None of the healthcare examples elicits from me any sense of “cheating,” as the high school and college examples do. But it is clear that this new “Gen-AI” entity is changing the conversation.

Depending on the context, Gen-AI is a powerful ally to improve healthcare. At other times, Gen-AI is a no-win arms race that sucks up expensive electrical power on both sides and the battle lines don’t move.

CMIO’s take?

Where can we turn the generative AI conversation from backward-thinking no-win situations to lateral-thinking win-win conversations? The first category is pure waste. The second is much harder and much more important. This is the struggle that CMIOs, and our analogues in other fields, must take on.

Sleeptime Compute and AI forgetting? (WIRED)

The issue of short and long term memory is an increasingly interesting problem in generative AI. Maybe sleep is an important consolidation function for computers as well as humans …

https://www.wired.com/story/sleeptime-compute-chatbots-memory


Grok-board: the 2025 update (instantly empathize and understand the EHR)

As tech accelerates, I wonder: what has AI done for me lately? Sure, it writes my notes, but does it really help me “grok” this patient? Here is my updated Grok-board design take for EHRs.

I am worried. Just because our whiz-bang technologies can do lots of stuff, I ask: should they? I previously wrote about my Grok-board idea in 2024. In one short year, we have seen ambient notes, chart summarizers, and agents take off.

Here, I still aspire to use an EHR that helps me “grok” the patient. Grok comes originally from Robert Heinlein’s sci-fi novel “Stranger in a Strange Land,” published in 1961: wow, more than 60 years ago. He coined the term “grok” to mean “instantly empathize and understand.”

In my 2025 version, the left column of the EHR remains intact, with links to many useful parts of an existing EHR patient chart.

What does it mean?

  • Blue Column 1: The Patient story board. Links to patient details: demographics, allergies, primary care doc, care gaps, selected chart items.
  • Green Column 2: This Human: the patient telling us what is important to them, what are their joys and pressures.
  • Orange Column 3: Homunculus visual problem list and an AI summary of the current status of diseases with any available metrics.
  • Pink Column 4: Insights. Active and suggested AI agents (artificial intelligence entities that can obtain data and then act on predetermined criteria to achieve a general goal), the patient’s top risks, and suggested next steps.
  • Blue Column 5: Today. Pre-visit questionnaires, solicited patient questions, the last progress note you wrote, and an AI Greek Chorus of somewhat adversarial advisors with suggestions for me. (A code sketch of these five columns follows below.)
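To make the five columns above concrete, here is a hedged sketch of the Grok-board as a plain data structure, in Python. Every name is my own illustration, not any EHR vendor’s API.

    # A hypothetical sketch of the five Grok-board columns as a data
    # structure. All names are illustrative inventions, not an EHR API.
    from dataclasses import dataclass, field

    @dataclass
    class GrokBoard:
        # Column 1 (blue): the patient story board
        story_board: list[str] = field(default_factory=list)    # demographics, allergies, PCP, care gaps
        # Column 2 (green): this human, in their own words
        this_human: str = ""                                     # joys, pressures, what matters to them
        # Column 3 (orange): homunculus problem list with AI status summaries
        problems: dict[str, str] = field(default_factory=dict)   # disease -> current status, with metrics
        # Column 4 (pink): insights from active and suggested AI agents
        insights: list[str] = field(default_factory=list)        # top risks and suggested next steps
        # Column 5 (blue): today's visit
        today: list[str] = field(default_factory=list)           # questionnaires, patient questions, last note, Greek Chorus

The point of writing it down this way is to argue about what belongs in each column, not to specify an implementation.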

An ideal Grok-board

should be humane and emphasize the patient’s identity and goals, so a physician can connect human-to-human and communicate more effectively. It should also increase the signal-to-noise ratio of the information presented. It should make it quick to grok, and then to act. It should prioritize the most important next actions, and then make doing the right thing easy.

Thanks to Gregory Makoul for his groundbreaking work on connecting patient values to physician thinking. Let’s continue to debate the psychology of which displays give the quickest, most useful view of the patient. Let’s arm-wrestle, not over the sorting of the problem list, but over which display leads to better care of the patient with less cognitive burden and more joy for physicians, the clinical team, and the patient.

Taking it further

Let’s design a video gamer’s chair that helps you use all your senses. What if your left forearm vibrates to warn you of a medication allergy when you’re writing a prescription? What if your right calf feels warm when there is kidney impairment, or your left calf when there is liver impairment, affecting your next decision? What if your lower back vibrates when you’re about to close a chart with unaddressed care gaps? What if we had smell-o-vision to detect the fruity breath of a patient in ketoacidosis?
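Purely for fun, that sensory mapping could be written down as a routing table. This is a whimsical Python sketch with invented names; no real device API is implied.

    # A whimsical sketch of the sensory-channel mapping described above.
    # Every name is invented for illustration; no real device API exists here.
    HAPTIC_MAP = {
        "medication_allergy": "left forearm vibrates",
        "kidney_impairment": "right calf warms",
        "liver_impairment": "left calf warms",
        "unaddressed_care_gaps": "lower back vibrates",
        "ketoacidosis_breath": "smell-o-vision alert",
    }

    def route_alert(condition: str) -> str:
        """Return the sensory channel for a clinical alert, or fall back to the screen."""
        return HAPTIC_MAP.get(condition, "on-screen alert only")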

CMIO’s take?  We are underusing our senses and our pattern-matching skills. Let’s build a way for our physicians and teams to grok the patient and then make it easy to do the right thing. We must intentionally build our humanity into this future.