“Harvard Learning” preserving learning in the age of AI shortcuts (harvard.edu)

This is a sobering reflection. How do we learn when grasping information is so easy?

Preserving learning in the age of AI shortcuts

https://news.harvard.edu/gazette/story/2026/02/preserving-learning-in-the-age-of-ai-shortcuts/

Easy AI answers all the time?

How will we balance human learning against the constant temptation of “easy AI answers all the time”? Of 7,000 high school students surveyed, about 40% indicated that they had failed to resist the temptation of overusing AI on their schoolwork.

Self-regulation will be a crucial skill in our coming age of AI. OR at least, so will designing environments that support human self-regulation, because we are temptation-succumbing agents. At least, I am.

This is a great podcast, and you can listen, or read the transcript.

WiFi Blocking?

I like watching our best human thinkers grapple with the productive cognitive friction that human brains need to encode hard-earned knowledge. It is not good enough to have a super-search algorithm find answers for you. There is something ineffable about being able to take a difficult problem and reason through it as a human.

At the same time, we know that younglings have access to all the AI models out there. No amount of “adult protection” or “wifi blocking” or other pretend gatekeeping will keep smart kids from figuring out how to get to forbidden fruit.

So the question remains: how would informaticists, physician-teachers, all teachers, coaches, and mentors suggest we move forward when EVERYTHING is changing, and this new AI entity (or entities) is everywhere?

Idea 1: Invite our learners into the problem.

We recently struggled with whether to grant our medical students access to Abridge, the ambient note solution offered at University of Colorado and UCHealth. Shouldn’t we prohibit students from using ambient notes? Don’t all of us teaching professors remember struggling to write comprehensible notes in our learning years, and now, thanks to that struggle, find we can quickly and incisively think through hard problems by writing notes that get to the root of the patient’s pathophysiology? Who hasn’t worked on writing a note and, in the process, discovered an angle on the patient’s medical problem that was not obvious before starting to write?

Now, if ambient notes are done at the end of the visit, where does that cognitive friction go? Removing this friction is perhaps a slight problem (and a big benefit) for experienced clinicians. However, removing this friction for students and residents might impair their learning just as it is most needed to form neural pathways, knowledge and … wisdom and judgement.

This year our graduate medical education leaders decided to give all students access. And then allow them to choose: how WILL you use it, knowing that it might impair your learning and there are as yet NO GUIDELINES? We must write these guidelines together. As a result, most students have chosen NOT to use ambient notes because of exactly that concern: they are in medical school for the training, NOT to simplify their work. This is a gratifying outcome.

Idea 2: Construct problems unsolvable by AI.

In this podcast, college professors describe writing problems specifically so that AI, at present, cannot solve them. This is difficult work, and perhaps unsustainable if AI continues to improve at dramatic rates. It is an interim solution. Better yet…

Idea 3: Learners explore the human/AI interface.

Assume that everyone has access to AI, and then ask questions that could not be asked before. Specifically, have our learners ask questions that could not be asked before. In answering them using AI, and in critiquing each other, they learn the field in a way not possible before.

In this podcast, college professors describe adjusting their curricula: instead of giving take-home exams that GPT can easily answer, they assign in-class work where students design mathematics problems that the AI cannot answer, and then have to figure out an answer the long-hand way. As a result, professors are seeing levels of learning and sophisticated understanding that come from exploring the space WITH an AI: all the extra reading needed to figure out the edges of what an AI can do, what the modern questions in mathematics are, and how to approach them. This, also, is a winning approach.

This man is sad. But is he?

We have a new entity in every conversation

How might we keep human learning, human judgement, human embodied cognition front and center, when our old teaching methods no longer work? It is both terrifying and amazing to think what comes next.

The Cholera Pump and the Oldest Operating Theater in London

Ever asked ChatGPT for tourism advice for medical professionals? I did. It’s not what you think.

The Oldest Operating Theatre in London.

  • Totally worth a visit. My favorite story: a surgeon asked to be buried in his clothing when he died. The authorities did not respect his wishes, and when he was undressed for the casket, they found he was a woman. She had served an entire career as a surgeon, in an age when women were not allowed to be physicians or surgeons. 
  • The theatre is lit only by a skylight, with steeply raked seating so all could see down into the body of the patient undergoing surgery. Survival rates were typically 30%, with death usually from exsanguination or, more likely, infection, in the days before germ theory. 
  • Too many other stories and artifacts to recount. Go see it if you can. 


The Cholera pump!

  • A reproduction of the original pump in London that turned out to be the source of a major cholera epidemic. This spot is the birthplace of epidemiology, thanks to the key insight of Dr. John Snow. 
  • At the time, cholera was a deadly disease, but no one knew what caused it or how it was transmitted. Many thought it was due to bad smells, called “miasma.”
  • The book “The Ghost Map” if you haven’t read it, is a riveting account. 
  • It all happened here: John Snow plotted the deaths on a map and realized they all centered geographically on this one pump.
  • Perhaps cholera was not airborne, but carried in the water from the pump!
  • He came, removed the pump handle, AND THE EPIDEMIC STOPPED. 


I guess I’m just a fan-boy of medical history. 

5 questions clinicians should ask themselves when using AI in healthcare (JAMIA.org)

I like that smart colleagues are starting to write about automation bias, interruptions, skill decline. This academic paper poses 5 questions we should all be asking ourselves. So begins our hard work to welcome a new entity into the exam room, with careful forethought.

https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocaf123/8287602?login=true

From the discussion:

Effective AI integration requires human-centered and adaptive design. Five central research questions address: (1) what type and format of information AI should provide; (2) when information should be presented; (3) how explainable AI affects diagnostic decisions; (4) how AI influences automation bias and complacency; and (5) the risks of skill decay due to reliance on AI.

Read the article. Love the thoughtfulness and humanity.

The math on AI agents doesn’t add up (Wired.com)

I wish we had a crystal ball for what is coming. If we thought the acceleration and rapid change of the internet generations was uncomfortable, what do we call this generative-AI-powered generational change? Hyper-change? Articles like this are fascinating glimpses behind the scenes at AI companies, thinking ahead to AI agents: what they might be doing now, and what they will be able to do in a few weeks or months.

https://www.wired.com/story/ai-agents-math-doesnt-add-up/

Medicine: the last uncompressed profession? (Liminal MD)

Bryan Vartabedian calls out the idea of “compression,” whereby standard process and measurement have transformed many professions. Yet medical care at its root is about a sick patient who may not fit easily into a measurable box. Healthcare as a profession cannot be compressed. A must read.

Medicine as the last uncompressed profession

The difficult uncompressible work of medicine is unmeasurable. How might we measure “difficult diagnosis”?

What value do we place on the master diagnostician who can sort out many vague complaints into an unexpected diagnosis, when no one else can?

Sure, routine treatment of straightforward diabetes or asthma can be “compressed” into measurable process and goals, but “failure to thrive” or “fragile elderly with polypharmacy” or “internal medicine patient with 17 diagnoses who is falling more often” or “undifferentiated severe abdominal pain” doesn’t fit anywhere, can’t be measured, and its care cannot be made more efficient.

Are we training a generation of doctors who are excellent at compressed care? When our older generation of doctors retire, will patients with unmeasurable suffering no longer have someone to care for them?

In all the noise of “Value based care” and “Private Equity” and “squeezing the inefficiency out of the bloated US healthcare industry” where are the quiet ones, the hidden heroes out there caring for patients?

Yes, Generative AI is coming. Yes, EHR tools and “clinical decision support.” Yes, advances in team-based, multidisciplinary, highly coordinated care.

And yet.

Where are the ones championing the statement attributed to Hippocrates:

Cure sometimes, comfort often, care always.

iPhone notes app is the purest reflection of our humanity (Wired.com) and a medical informatics observation

What’s on your notes app in your phone? WIRED argues that this simple, unfiltered blank page is the easiest place for us to store our unfiltered thoughts. How true. For me: fragments of blog post ideas, books I hear about, movies to watch, hilarious quotes from family members, messy to-do lists. Hotel room numbers. Parking garage locations. Who knows? What’s on yours?

https://www.wired.com/story/iphone-notes-app-purest-reflection-of-our-humanity/

Sometimes the simplest note-taking apps are the most profound.

As medical records technologists discovered as far back as the 1800s, if we over-engineer our tools, doctors and nurses will break the bounds of what is allowable documentation to let the story come out.

From Annals of Internal Medicine (requires login) a brilliant history of medicine article by Eleanor Siegel

https://www.acpjournals.org/doi/10.7326/0003-4819-153-10-201011160-00012

The image:

What is fascinating: in the 1800s, hospitals began keeping paper medical records, one book for each HOSPITAL WARD of about a dozen patients. There were no patient-specific medical records. If you wanted to look back, you would find the ward book for the year, find the day the patient was in the hospital, then look for the patient’s name.

Each patient would be an entry on the page for the day. There was only room for ‘intervention’ and ‘outcome’. No place to write thoughts, observations, theories, learnings.

So, doctors would at times turn the page over and use the blank back of the paper to write (in this case):

This patient came in with what appeared to be an apoplectic stroke. He was interesting in that he had a dextrocardia. He later developed a clinical picture which we could not explain.
Diagnosis: Hemorrhage into cerebrum
Complication: ?Syphilis

Kpop Demon Hunters, yes, I’m a fanatic. Interview with EJAE (wired.com)

I am a huge fan of Kpop, and of Kpop Demon Hunters specifically. Read the interview with EJAE. I very much align with her Asian-American vibe and insights. So cool for a colleague’s success and for the American melting-pot. And the song and movie: excellent as well.

https://www.wired.com/story/how-k-pop-demon-hunters-star-ejae-topped-the-charts/

Do pictures change patient behaviors (James Stein substack)

I don’t often cite other blogs, but this is a worthwhile quick read about changing patient behaviors. It is a wonderful story about a physician’s failure with a patient, and the lessons he draws from using a calcium score to try to get the patient to change. It did not go well. I will change how I think about this as a result.

https://jamesstein18.substack.com/p/do-pictures-change-patient-behaviors

Is AI Dulling our Minds? *news.harvard.edu

This is very much on my mind as we race to embrace our newest AI assistants. If we outsource more and more of our tasks and thinking, does that make us duller? Yes and no, depending on …

Is AI dulling our minds?

I think this is a big deal; how will we work with our new AI partners?

We could ask: “tell me the answer about XYZ.” Then we learn how to be great at copy and paste, but we don’t learn anything.

OR, we could ask: “Quiz me on the important principles of XYZ, and when I get it wrong, correct my understanding. Lead me to a deeper understanding of __”

When the AI becomes a helpful assistant, where I am the primary learner, this helps.

There is a big difference between cognitive ease for repetitive tasks that we don’t care to get faster at, VERSUS productive cognitive friction for topics that we, as humans, want to understand better.
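To make the contrast concrete, here is a minimal sketch of the two prompting styles as Python string templates. The wording and the sample topic are illustrative assumptions, not a tested prompt-engineering recipe, and no real LLM API is called:

```python
# A sketch of the two prompting styles described above.
# These are just prompt templates; plug them into whatever AI chat tool you use.

def answer_mode(topic: str) -> str:
    """The 'easy answer' style: great for copy and paste, poor for learning."""
    return f"Tell me the answer about {topic}."

def tutor_mode(topic: str) -> str:
    """The 'productive friction' style: the human stays the primary learner."""
    return (
        f"Quiz me on the important principles of {topic}. "
        "When I get something wrong, correct my understanding, "
        "and lead me to a deeper understanding."
    )

# Example: same topic, two very different learning outcomes.
print(answer_mode("acid-base physiology"))
print(tutor_mode("acid-base physiology"))
```

The design point is that the second template keeps the struggle (and therefore the encoding) on the human side; the AI supplies questions and corrections rather than finished answers.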

Struggle is important for learning.

I have spoken.

--Kuiil the Ugnaught, from The Mandalorian

Can a Hydroelectric Dam Make the Days Longer? *Wired.com

I love questions like this, that don’t make sense, then slowly start to make sense, and then draw you into the math and science and … whoop! There’s an answer to the question.

https://www.wired.com/story/can-a-hydroelectric-dam-really-make-the-days-longer/

Math and science for the win, on unexpected questions.
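Part of the fun is that a rough answer is within reach of high-school physics. Here is a back-of-envelope sketch in Python using conservation of angular momentum. The reservoir mass, lift height, and latitude are assumptions at roughly Three Gorges scale, and this simplified version counts only the effect of lifting the water, not of relocating it from the oceans (which is a larger effect, and part of what the article explores):

```python
import math

# Back-of-envelope: angular momentum L = I * omega is conserved.
# Lifting reservoir water moves mass slightly farther from Earth's
# rotation axis, increasing the moment of inertia I, so omega drops
# and the day gets (imperceptibly) longer.

# Illustrative assumptions, roughly Three Gorges scale:
m = 4.0e13                 # kg, about 40 km^3 of water
h = 100.0                  # m, average height the water is lifted
lat = math.radians(30.0)   # reservoir latitude

R_earth = 6.371e6          # m, Earth's mean radius
I_earth = 8.0e37           # kg*m^2, Earth's moment of inertia
T_day = 86400.0            # s, length of day

# Distance of the mass from the rotation axis, and how much lifting
# it by h increases that distance.
r = R_earth * math.cos(lat)
dr = h * math.cos(lat)

# Change in moment of inertia: d(m * r^2) ~= 2 * m * r * dr
dI = 2 * m * r * dr

# With L constant, dT / T = dI / I
dT = T_day * dI / I_earth
print(f"Day lengthens by about {dT:.1e} seconds")  # on the order of 1e-11 s
```

The answer is absurdly small, far below anything measurable, but the sign is real: raising water away from the rotation axis slows the spin, so yes, a hydroelectric dam can make the days (very slightly) longer.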