Vibe Coding (Wired.com)

Vibe Coding, as I take the term, means an AI does the actual coding while a human tells it what to build. Here’s a WIRED reporter learning to do just that. Insightful read.

https://www.wired.com/story/why-did-a-10-billion-dollar-startup-let-me-vibe-code-for-them-and-why-did-i-love-it

 

Sharing Science Through Story: Fergus McAuliffe at TEDxDublin

How can dry science be communicated in a way that the public can understand? How can science recover the standing it had years ago, when the Royal Society in London was THE place to be to hear scientists talk about their latest work? In fact, Albemarle Street had to be made ONE WAY, the first one-way street, because these talks were so popular that the traffic was otherwise unmanageable. This is a compelling talk you have to hear.

Fergus McAuliffe, scientist, tells of the key elements of science: precise language, objective findings, volumes of data.

He points out that these are also the barriers that keep science communication from being effective with public audiences: too dry, too much, not engaging.

The solution: STORY.

CMIO’s take? This is 13 minutes of your life that will serve you well. Communicate science through story.

 

A High Schooler: AI is demolishing my education (Atlantic), and my reaction about AI in healthcare

High schooler examples of how AI is ruining education in the classroom: what can healthcare AI learn from these examples? How do we pivot from no-win to win-win? Here’s my take.

https://www.theatlantic.com/technology/archive/2025/09/high-school-student-ai-education/684088/?gift=PBeYFZIia8gyZzvvApdrZHEndyptCKBp5r-R8daZseM&utm_source=copy-link&utm_medium=social&utm_campaign=share

Read the Atlantic article with my gift link above ^^

Generative AI in the classroom:

  • Cheating on take-home exams (chatbot will answer any exam question)
  • Cheating on in-class discussion (chatbot in real-time presents excellent discussion points on any topic)
  • Cheating in debate competition (chatbot helps teams prepare a rebuttal between tournament rounds)
  • The risk: that class on European History is actually a class on “How to copy and paste answers from AI” and no learning is achieved.

The rare positive story from the education field shows us a glimmer of hope. A professor assigns a homework task that explicitly asks the student to use generative AI to create a first draft, and then to write a critique of the AI-written document, demonstrating command of the material and the ability to critique others’ work.

Generative AI (I’ll abbreviate Gen-AI) in healthcare:

  • Gen-AI composes an excellent progress note summarizing a physician and patient conversation, within seconds of the end of the visit, reducing physician cognitive and time burden
  • Gen-AI helps document more diagnoses, perhaps more accurately, because the note is captured and generated within seconds of a visit rather than hours or weeks later, when physician memory fades
  • Gen-AI replies to patient online questions by drafting a reasonable reply based on prior EHR (electronic health record) data, to reduce nurse and physician typing burden
  • Gen-AI helps summarize hundreds of pages of medical records to speed up nurse and physician work as they meet new patients with years of data

So far so good. These are all win-win scenarios: doctors and nurses work more quickly and easily, patients get better care.

It gets touchy:

  • Gen-AI helps doctors prepare “prior authorization” documents to advocate for patients getting insurers to pay for treatments. This is directly opposed by Gen-AI helping insurers deny these requests. This is a no-win situation.
  • Gen-AI helps doctors generate higher quality, more complete notes that show that complex care was provided to the patient, possibly improving reimbursement. This is directly opposed by Gen-AI helping insurers spot such changes. Another no-win situation.

None of the healthcare examples elicits from me any sense of “cheating” as it does for high school or college students. But it is clear that this new “Gen-AI” entity is changing the conversation.

Depending on the context, Gen-AI is a powerful ally to improve healthcare. At other times, Gen-AI is a no-win arms race that sucks up expensive electrical power on both sides and the battle lines don’t move.

CMIO’s take?

Where can we turn the generative AI conversation from backward-thinking no-win situations to lateral-thinking win-win conversations? The first category is pure waste. The second is much harder and much more important. This is the struggle CMIOs and our analogues in other fields must take on.

Surprising way to boost your attention span (NYTimes)

More research on how “nature therapy” improves attention span and working memory, restoring brains depleted by work and school. The study compared walking 2.8 miles in an arboretum with walking in a city. I wonder if this restoration applies to cycling on wooded paths. Asking for a friend.

www.nytimes.com/2025/08/14/well/mind/nature-brain-attention.html

 

Passwords are so last year. Passkeys! (Wired.com)

I have joined the (quiet) passkey revolution. Far superior even to long passwords, and better and faster than two-factor and multifactor authentication. I’m all for faster and easier security for my accounts.

https://www.wired.com/story/what-is-a-passkey-and-how-to-use-them/

Worms to eat plastic?! Wired.com

There is hope! Scientists have found a strain of worms that, along with their gut microbes, can digest polyethylene and turn it into glycol and fat. Please let this insight pan out…

https://www.wired.com/story/could-plastic-eating-moth-larvae-be-a-solution-to-environmental-pollution/

I have a constant battle with the single-use plastic I purchase that, yes, keeps my food safe, but that I dread throwing away, since the recycle icon is an industry fake-out: single-use plastics are almost never truly recyclable. Worms could be one part of the solution…

Sleeptime Compute and AI forgetting? (WIRED)

The issue of short and long term memory is an increasingly interesting problem in generative AI. Maybe sleep is an important consolidation function for computers as well as humans …

https://www.wired.com/story/sleeptime-compute-chatbots-memory

 

UCHealth Biobank breaks new ground in personalized genomic medicine (news)

Our smart colleagues at UCHealth Biobank delivered the 1 millionth pharmacogenomic result into our Epic EHR. In separate news, the Biobank also delivered our 1000th pathogenic variant (like BRCA).

Congratulations to the UCHealth Biobank team including Drs. Christine Aquilante and David Kao. See the linked articles:

Your drugs and your genes may not play nicely together. A UCHealth project aims to find out in advance.

https://www.uchealth.org/newsroom/biobank-at-the-colorado-center-for-personalized-medicine-uncovers-1-million-genetic-insights-to-improve-patient-care/

https://www.sciencedirect.com/science/article/pii/S2949774424009981

Our Biobank, in parallel work streams, has:

  1. Delivered its 1 millionth pharmacogenomic result, based on Biobank testing of patients’ blood or saliva to detect drug-gene interactions, warning prescribers in our system to avoid drugs that may not play well with an individual patient’s genetics. We believe we now have the largest genome bank delivering these results for clinical care.
  2. Delivered over 1000 pathogenic variants (genomic risks like BRCA mutations for breast cancer and the like) so that patients can be aware and take preventive or screening actions.

The investments in this infrastructure began in 2014. Even though it seems like an “overnight success,” this was more than 11 years in the making, yet another reason that long-term, basic-science approaches should be part of our strategic scientific funding. These transformative technologies come out of years of blood, sweat and tears (pun intended) on the part of our researchers and technologists, and with the contribution of tens of thousands of patients.

Congratulations to our hardworking, groundbreaking colleagues.

Grok-board: the 2025 update (instantly empathize and understand the EHR)

As tech accelerates, I wonder: what has AI done for me lately? Sure, it writes my notes, but does it really help me “grok” this patient? Here is my updated Grok-board design take for EHRs.

I am worried. Just because our whiz-bang technologies can do lots of stuff, I ask: should they? I previously wrote about my Grok-board idea in 2024. In one short year, we have seen ambient notes, chart summarizers and agents take off.

Here, I still aspire to use an EHR that helps me “grok” the patient. Grok comes originally from Robert Heinlein’s 1961 sci-fi novel “Stranger in a Strange Land” (wow, more than 60 years ago). He coined the term to mean “instantly empathize and understand.”

In my 2025 version, the left column of the EHR remains intact, with links to many useful parts of an existing EHR patient chart.

What does it mean?

  • Blue Column 1: The Patient story board. Links to patient details: demographics, allergies, primary care doc, care gaps, selected chart items.
  • Green Column 2: This Human: the patient telling us what is important to them, what are their joys and pressures.
  • Orange Column 3: Homunculus visual problem list and an AI summary of the current status of diseases with any available metrics.
  • Pink Column 4: Insights. Active and suggested AI agents (artificial intelligence entities that can obtain data and then act on predetermined criteria to achieve a general goal), the patient’s top risks, and suggested next steps.
  • Blue Column 5: Today: pre-visit questionnaires, solicited patient questions, the last progress note you wrote, and an AI Greek Chorus of somewhat adversarial advisors with suggestions for me.

An ideal Grok-board

should be humane and emphasize the patient’s identity and goals, so a physician can connect human-to-human and communicate more effectively. It should also increase the signal-to-noise ratio of the information presented. It should make it quick to grok and then act. It should prioritize the most important next actions, then make doing the right thing easy.

Thanks to Gregory Makoul for his groundbreaking work on connecting patient values to physician thinking. Let’s continue to debate the psychology of which display gives the quickest, most useful view of the patient. Let’s arm-wrestle, not over sorting of the problem list, but over which display leads to better care of the patient with less cognitive burden and more joy for physicians, the clinical team, and the patient.

Taking it further

Let’s design a video gamer’s chair that helps you use all your senses. What if your left forearm vibrates to warn you of a medication allergy when you’re writing a prescription? What if your right calf feels warm if there is kidney impairment, or your left calf feels warm if there is liver impairment, affecting your next decision? What if your lower back vibrates if you’re about to close a chart with unaddressed care gaps? What if we had smell-o-vision to detect the fruity breath of a patient in ketoacidosis?

CMIO’s take?  We are underusing our senses and our pattern-matching skills. Let’s build a way for our physicians and teams to grok the patient and then make it easy to do the right thing. We must intentionally build our humanity into this future.

AI plus Human: strategies in Radiology (RSNA)

Radiologists, the vanguard of medical AI integrators, are working out strategies for optimal workflow between humans and AI. See what they propose.

https://pubs.rsna.org/doi/10.1148/radiol.250477

This is a great article about human/AI partnerships in radiology. One of the authors is Eric Topol.

We know several things so far about AI/human work in medicine

  • Humans can develop automation bias: radiologist performance gets worse when given suggestions by a poor-performing AI, because they are influenced by the AI’s reading
  • Humans can improve performance if paired with high-performing AI
  • AI outperforms humans when it has very high confidence of “normal” or “abnormal”
  • We can reduce the burden of human work if we can put AI in places where it does best.

So, the authors suggest an AI/human role-separation framework:

  • AI-first model (have the AI comb the chart for relevant data before the radiologist reads the study)
  • Human-first model (the human reads the study; the AI then takes the human’s read and writes the Impression, or takes the human report and writes a patient-friendly version)
  • Case Allocation model 1: Rule out Normal (no human read if AI highly confident that study is normal)
  • Case Allocation model 2: Risk Based Allocation (low to intermediate risk cases only had a single AI reader, otherwise humans also read the higher risk cases)
  • Case Allocation model 3: Dynamic Complexity-Based Allocation (if AI is highly confident of normal or highly confident of abnormal, use that to categorize work; doing so reduces human work by 66% and reduces false positives by 25% while keeping case detection the same)
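To make the dynamic complexity-based allocation idea concrete, here is a minimal sketch of a routing function: studies where the AI is highly confident of normal or abnormal are handled on an AI track, and everything in the uncertain middle goes to a human radiologist. The thresholds, field names, and route labels are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of dynamic complexity-based allocation.
# Thresholds (0.02 / 0.98) and route names are illustrative only.
from dataclasses import dataclass


@dataclass
class AIRead:
    study_id: str
    p_abnormal: float  # model's estimated probability the study is abnormal


def route_study(read: AIRead,
                normal_cutoff: float = 0.02,
                abnormal_cutoff: float = 0.98) -> str:
    """Decide who handles a study based on AI confidence."""
    if read.p_abnormal <= normal_cutoff:
        return "ai_track_normal"    # AI highly confident: normal
    if read.p_abnormal >= abnormal_cutoff:
        return "ai_track_abnormal"  # AI highly confident: abnormal, flag it
    return "human_read"             # uncertain middle: human interpretation


# Example: three studies with varying AI confidence
reads = [AIRead("A", 0.01), AIRead("B", 0.99), AIRead("C", 0.40)]
routes = {r.study_id: route_study(r) for r in reads}
```

In this toy version, studies A and B ride the AI track while C gets a full human read; in practice the cutoffs would be tuned so that case detection stays unchanged, which is the constraint the authors report.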

Really promising developments in thinking about the Human/AI partnership. My ongoing worries about automation complacency and bias are still there, but I like that smart people are thinking about possible solutions.