Thanks to KLAS Research for highlighting my backstory, my inauspicious beginnings as Chief Complainer in the 1990s.
Time for more data surfing! UCHealth’s overall visit volume (including in-person and video visits and scheduled phone visits) has recovered to about 80–90% of pre-pandemic levels.
Today, we’re looking at visit volumes among different age groups of patients. Keep in mind, UCHealth is primarily an adult hospital. Our partner, Children’s Hospital Colorado, sees most of the region’s pediatric population. We do have some pediatric practices, and of course our extensive family medicine primary care practices also see pediatric patients. This explains the low volume of pediatric visits below. On the other end, only 3.9% of UCHealth patients are over age 85.
So, what happened to visit volume with each of these age groups?
Turns out, the curve for EVERY age group is similar! Green is age 40–65, about 1/3 (our largest fraction) of our patient population. Fuchsia is 65–85, our second largest; purple is 18–40, orange is under 18, red is over 85. The curves start at different points but follow the same trajectories. That divot on the right side is Memorial Day: clinics were closed, so that week saw only 4/5 of the usual weekly volume.
Here is the home telehealth Video Visit volume! Some interesting findings here. Notice that fuchsia and purple switched places, meaning that a much higher proportion of 18–40-year-old patients chose Video Visits compared to 65–85-year-old patients. All the other curves stayed in their relative positions. Furthermore, EVERY age group had a proportional bump up in video visits, even those over 85! Finally, the video visit curve is falling back, to about 50% of its peak (so far). It will be interesting to track this in the coming month or two and see where we end up, after in-person visits are fully ramped up again.
CMIO’s take? Who knows? Another example to show that we are going to bed with a cliff-hanger every night. I wonder what happens next. The good news: I’m feeling good about having a better handle, even after a few short months, of what Covid-19 can throw at us. Ain’t data cool?
We are well into our fourth month of this pandemic. Looking at our graph, purple shows influenza B peaking in December and influenza A peaking in February. Leaving aside an artifactual spike in mid-March (when we started co-testing for major respiratory viruses at the same time we started testing for Covid-19 in earnest), all other viruses have dissipated. Then you see this impressive bump in Covid-19 illness, peaking in mid-April, in our organization. Keep in mind, this is just POSITIVE tests for Covid-19 RNA in patients seen at UCHealth. Because we care for 1.9 million patients in Colorado, though, it is a reasonably large population sample. Furthermore, Covid-19 tests were SCARCE prior to mid-March, and numerous patients were likely developing Covid symptoms in February (see below).
So, how has this affected our visits and our telehealth efforts? Purple shows you the dramatic dip in in-person outpatient visits, and the gradual climb back toward baseline. Then there is the green line of home telehealth video visits, going from nearly nothing to about 20,000 weekly in early-to-mid March, falling off gradually over the past 8 weeks; it seems we might stabilize near 10,000 visits weekly. This is still about 100x the volume of video visits prior to the pandemic.
Then there are the other trend lines that are interesting: red is the ongoing volume of patient messages before and during the pandemic. Leaving aside the bump in mid-May (not sure why; perhaps related to a system broadcast), our baseline of 22,000 messages per week increased to 30,000, about a 36% increase in volume, starting to rise on Feb 22. This pre-dated by THREE WEEKS the steep decline of in-person visits and the upswing of telehealth visits on Mar 14, and the Colorado Stay at Home order of Mar 26.
Even more interesting: telephone volume, in blue, saw a tiny bump on Mar 14 but was otherwise unchanged during the entire period. By contrast, scheduled telephone visits, in fuchsia (billable as of mid-March per CMS rules), appeared in early April.
In one graph, you can see: online patient messaging demand scaling up, phone calls being static, scheduled phone calls appearing when billable, on top of the change for in-person and video visits.
Some hidden factors at work here: UCHealth set up a Covid-19 nurse advice line; those calls are not visible on any line in this graph, and those hard-working nurses took tens of thousands of calls from Coloradans (not just UCHealth patients).
So, this data dilettante has to ask, could an increase in online patient messaging (regardless of content of message) be another possible leading indicator for future pandemic surges? We can’t be sure if these messages were about general anxiety, Covid symptoms, or perhaps completely unrelated, but it is suspicious that there is a sustained increase in volume of messages by 30%+ since mid-March. On the other hand, why isn’t online message volume falling, like home telehealth visits are falling, now that clinics are opening up in-person appointments? Stay tuned!
The open question now is: what will CMS (Centers for Medicare/Medicaid Services) do with paying for Video visits and scheduled Telephone visits? Will those payments stop or scale back? This will certainly affect all health systems still heavily relying on Fee for Service, until the rise of Value Based Care (insurance plans paying for Quality instead of Volume) takes over.
CMIO’s take? These are unprecedented times, and patient behavior and health system behavior is fascinating. A tiny RNA virus has changed the way (phone, online, in-person) patients and healthcare providers interact. What comes next?
Nope, did not use the word “pandemic” or “Covid”.
Searching Youtube for “Covid songs” gets you this: https://www.youtube.com/results?search_query=covid+song
Which is an entirely unreasonably long list; there are some great selections there. I’ll leave you to browse.
During pandemic, I’ve been learning clawhammer style, from this guy:
Makes my uke sound more like a banjo. Weird, and cool.
Meantime: Our clinics are getting back to business; our patients are returning to in-person care, our visit volumes are back up, past the 80% mark. I hope you are all staying safe; we’re not out of this yet, but it is starting to feel less like a sprint and more like a marathon. Take care of yourself, get some exercise, bring back a hobby or two.
Thanks to those of you who caught my non-displaying graph images, I’m reposting now converting my original PNG to JPG. Please let me know if you can see these and follow the reasoning below! (edited 6/15, CTL)
Thanks to Brendan Drew, one of our data scientists, who is diving into the analysis of Leading Indicators, for the graphs and reasoning below. If I can twist his arm for more graphs, will pass them along.
If you recall, I discussed this recently: the idea that our future is uncertain. Even though we have survived the first wave of the Covid-19 pandemic, we are concerned about possible future waves. How might we prepare?
If you don’t know this about me already, I find “making the sausage” in informatics and data science fascinating. Here are some intermediate steps we are taking beyond my “data dilettante” days as we search for signal in the noise.
These are all new COVID-19 codes. First, note that the ORANGE line, R68.89, shows up WAY before March. Turns out, this is not only “suspected Covid-19”; it was already in the ICD-10 dictionary as “Other symptoms and signs.” So, that is a terrible signal. Then, the RED line, Z20.828 “Close exposure to COVID-19,” is also “Exposure to influenza.” Hmm. Then, the BLUE line, B34.2 “Coronavirus infection,” is also “Coronavirus, unspecified.” Also hmm. Only the GREEN line, U07.1 “Coronavirus identified,” is highly specific for COVID-19 in the graph.
So, how do we make sense of this?
First, we take ONLY hospital patient codes for CONFIRMED (BLUE) versus SUSPECTED (ORANGE) Covid, and we see that the BLUE CONFIRMED line shows two peaks, whereas the ORANGE line shows no real signal at all. GREEN is adjusted for market share based on 2019 data for that zip code (we are trying to localize prediction to the zip-code level).
Now, we compare zip codes. Blue line is 80011, Aurora near University of Colorado Hospital, a relative hot spot in Denver Metro region, and orange is 80634, the hot spot near Greeley hospital, and we see a temporal difference in the onset and peak of Greeley being earlier than Aurora. Interesting.
Here is where it gets tantalizing, and we have to hold back our excitement: Pair up the outpatient symptom data with the inpatient hospitalization rate for Confirmed Covid. Here it is for Aurora, x-axis lined up by date:
Those of us who cannot contain our excitement will see a visual rise in RED (outpatient symptoms suspicious of COVID, like fever, cough, shortness of breath), in the 80011 zip code increasing about 2 weeks BEFORE the corresponding rise in COVID-19 cases at University of Colorado Hospital in Aurora (also 80011). We WIN! Right?
Also, here’s the corresponding graph for Greeley:
This is a bit messier: what is that symptom peak in February? There is no corresponding COVID hospitalization peak in Feb/Mar. BUT, the symptom peak in mid March DOES correspond to a rise and peak in late March, and all of April.
My theory: mid February was probably Influenza A, and we did NOT track hospitalizations on our graph for that, AND the COVID confirmed codes did not get implemented until mid March, and maybe NOT attached in retrospect to patients who MIGHT have had COVID, but were admitted BEFORE those codes went into effect. This is harder than it looks!
Are you looking for a final answer? SORRY! We are still cranking away at this. Even though we humans have frontal lobes that CANNOT WAIT to see patterns (even where there is no pattern!), we have to resist that urge. AND, how do you teach an algorithm (even if there IS a pattern here) to tell us: YES, you should pay attention to THIS rise in the data, but THAT ONE is just random noise?
For example, imagine the 80011 graph prints out one day at a time, moving to the right. At what point, would you tell the algorithm to alert us: YES it is TIME TO BRING IN MORE DOCS AND STAFF FOR THE NEXT SURGE.
Would it be March 15, when there is an uptick? But there are lots of upticks just like that. March 22, a week later, when the line is DOUBLE the average, rising from 0.0007 to 0.0014?
AND, worse yet, UCHealth is only one of 5 health systems in Metro Denver and across the state of Colorado. Will cases come to US or to other health systems? What will the peak be? Will it be a tiny peak? (Hey, CT, why did you call all of us in here for these dozen patients?) Will it be a HUGE peak (Hey, CT, you didn’t raise enough of an alarm, there still aren’t enough of us).
Finally, signal to noise MIGHT be easier for the summer months when Influenza is done, but what about the fall when Influenza B and many other viruses are back in action? What about seasonal allergies during spring and summer that might kick off cough and shortness of breath?
CMIO’s take? Figuring out Leading Indicators is HARD. If YOU have this figured out, let us know. We’re still working on it. But the math and the figuring-it-out is pretty fascinating in the meantime.
For fun, I’ve set my Zoom background with an actual vintage 1997 photo I took of the medical records room in the basement of University of Colorado Hospital on Ninth Avenue in Denver (back when giants walked the earth). This aisle featured 6 stacked rows of medical record charts AND piles of paper record folders ON TOP since we were out of room (not shown). This was one of 29 aisles of records in the Records Room, holding ONLY the latest 3 years of records: the rest were retained (for 27 years) in a downtown warehouse.
Fun fact: we turned down lots of innovation partnerships and offers of free services because the medical information locked in those paper records was too difficult to pull out:
- We have a Pulmonary Function mobile van parked out front: send us all your patients who currently smoke and we will screen their lung function for free!
- Hey, our insurance company will pay you a bonus payment if you can prove all of the patients who have had a previous heart attack are taking aspirin! (true story, a clinic trying to prove this using paper medical records and clerical staff paid more gathering the data than they received in bonus money)
- Quick: the mobile mammogram bus is coming next week: let’s call all our patients who are due for mammography screening!
- We have a new diabetes educator visiting for a couple weeks! Can we contact all our patients with diabetes to come for a free visit?
- Uh, oh! The medication Bextra is being recalled by the manufacturer; quick: call all our patients taking that medication! (True story: 1/2 of our clinics were able to run a report on our EHR at the time and call affected patients immediately; the other half, still relying on paper records, had to say… “well, when the patient calls for a prescription refill in a few months, THEN we’ll tell them…”)
Fortunately, it is simple in our current EHR to run ad-hoc reports to do all this now. Whew! And, we can do predictive analytics on this data to save lives that would have blown my mind back then.
Here’s another flashback:
THIS is the Medical Records intake room, back when we were ONE hospital, 40 clinics (we’re now 12 hospitals, 800 clinics). On average, 6 vertical feet of paper, received EVERY DAY. Fifty medical records staff, filing, sorting, pulling, sending, receiving, creating new charts. And, still, we were 2 WEEKS behind on filing.
We had over 20 transcription services, all local, receiving tiny tape-recorder dictaphone tapes, transported by COURIER from the doctors dictating. As an aside, some of us remember hearing doctors mumbling their ultra-fast, only partly understandable dictations walking the halls between patients. On average, outpatient transcriptions took about 2 weeks to complete and print out, mail, and file back into the record. Inpatient daily transcriptions were ordered STAT for 3x the cost and typed same day, arrived by urgent courier in the late evening and taped into the paper chart.
For the record, here’s a paper progress note I wrote in 1999 on “non-carbon paper” sending the original copy to Hospital Medical Records, and then keeping the yellow copy in a “shadow chart”: a duplicate set of medical records kept in our “off-site clinic” because … we could not count on Hospital Medical Records to pull the relevant charts for clinic patients scheduled each day.
Don’t even get me started on our appointment scheduling system. “Oh yes, thanks for calling! So you’re looking for Dr. Lin’s next available appointment? Sorry, nothing for the next 3 weeks. Oh, you’d like to see the next available doctor? =sigh= OK I’ll pull down the other twelve 3-ring binders, one for each doctor, and see who might have an open spot.”
Are you keeping track? 50 medical records staff at the hospital to maintain Main Medical Records, and 1-2 additional medical records staff at EVERY clinic (about 40 clinics) to keep a shadow chart. Because we don’t trust each other to keep track and deliver records on time!
Oh, and meet this guy. In 1997, our medical information (see: x-ray films, paper medical records, dictaphone tapes) moved at the speed of rush-hour traffic on Colfax Avenue. Seven miles each way, 12 leased buildings throughout metro Denver. Two round trips every day.
With all this person-power and effort, the result? On a typical clinic day, I would see about 18 internal medicine patients. Main medical records would successfully deliver charts for about 9 patients. Our clinic’s shadow chart system would deliver charts to my exam room for about 6 additional patients, leaving, on average THREE patients with NO CHART. Just a piece of non-carbon paper, with handwritten vital signs and a list of patient-reported allergies that day. Mind you, there was no such thing as a clinical computer system at the time. As a result:
“Hi Doc! It is great to see you! What did my cardiologist tell you about me when he saw me 2 weeks ago and did all those tests? He said that I should come talk to you about his report.”
“Um. I don’t have any of your records today. I see your blood pressure looks good and that you report no allergies to medicines though.”
“What?! I made this appointment to go over his report! That visit was 2 weeks ago!”
“Yes. Um. What condition, exactly, do you have? Why did we send you to my cardiology colleague? What do you remember that he told YOU? Can you help me out here?”
“This is disappointing. You mean you really have nothing on me? Do you at least have the blood test results or the echo result?”
“Um, no. I’m really sorry about this. Okay, tell you what, no charge for today, my apologies for wasting your time and I will call you later this week after I call and yell at my medical records people and maybe get your chart and see what it says.”
“Whatever. You guys should really get your act together. Okay, can you at least go ahead and refill those 3 medicines that you prescribed for me from last year? I’m about out.”
(Excitedly taking out prescription pad) “Sure, I’m happy to! Do you happen to remember the names of the medications and the doses and what they’re for?”
Let’s not even talk about loading up a 2-foot-tall stack of medical records in our arms, walking out to the car, throwing them in the trunk, driving home and dictating late into the night, and hopefully remembering to bring them back into the office the next day.
And, if there was an urgent need for a particular medical record? We would routinely have a couple staff members wandering the clinic, from office to office, desk to desk asking: “Do you have the chart for Peterson, Mary, or Smith, Joseph, or Samuels, Jane?” and thus not answering the phone, or rooming patients…
Of course, by contrast, with our current EHR, tap-tap-tap: instant access to any patient record.
Yesterday, for example, my patient met her oncologist to discuss a new diagnosis of metastatic cancer. Today, I was able to read her consulting note, review the pathology from a recent biopsy, refresh my education about peritoneal carcinomatosis in an EHR-linked online textbook, secure-chat and then phone call with the oncologist about prognosis and treatment options, set up a video visit with the patient and her family, and have a well-informed, thoughtful conversation about her next steps.
This speed and coordination would not have been possible in the era of paper charts.
Not as cool as Jimmy Fallon’s Thank you Notes
Wait! One more thing! Remember the good old days when we received faxed blood test results and then had to notify patients by writing a STACK of folded post cards? I faced a stack of these EVERY EVENING at the end of clinic. Please don’t ask me how many times a patient brought back a post card saying: “Um, this looks pretty important, but, I think you meant to send this to a different Peter Smith. I haven’t had a blood test in a while.”
Our patient portal, which we call My Health Connection, releases test results to the patient online; we then send comments with our interpretations, arriving in the patient’s inbox instantly. Comment from my patient? “It feels like I have my doctor in my pocket. So cool.”
CMIO’s take? All y’all don’t know how good you have it.
On the other hand, are you old, like me? Do you remember those days?
On the third hand, in another decade, I hope folks will look back to TODAY and marvel how much better the future is.
A rural lab has a 120-year history of fighting mysterious diseases.
— Read on www.nytimes.com/2020/05/07/opinion/coronavirus-rocky-mountain-laboratories.html
I did NOT know that these beaked masks were full of theriac, a mixture of 55 herbs, intended to cleanse the air before the plague doctor breathed in.
AND that plague doctors carried long rods to maintain distance from others. I wonder where I can order MY “social distancing rod.” Can’t find one on Amazon, although this might do.
The article is fascinating. I’ll take a break from EHR pontification today.
If you’re here to understand some of the challenges of antibody testing with Covid-19, read on. Be warned: math ahead. What’s the TL;DR?
- We don’t know whether having antibodies indicates that a person is IMMUNE to future re-infection, now or later, with Covid-19.
- We don’t know whether having antibodies means that a person is NO LONGER infectious to others with Covid-19.
- We’re going to discuss Sensitivity, Specificity, Positive Predictive Value, and Negative Predictive Value, and note that MOST antibody tests out there may show Sensitivity and Specificity in the 80–90% range (seems good!)
- BUT because the Prevalence of the disease is unknown and likely low (single digits or teens maybe), the Positive Predictive Value is likely to be TERRIBLE, meaning a positive result might just be … meaningless or WRONG MOST OF THE TIME (OMG).
Some of you know that my father is a statistician, so he is likely to read this uncomfortably and have lots of concerns about the accuracy of my statements. You may also know the quote popularized by Mark Twain:
There are three kinds of lies: Lies, Damn Lies, and … Statistics
But, here goes anyway; the point is important, it is bothering me, and I want people to know at least what little I understand.
Let’s say that an antibody test is 95% sensitive (meaning, for patients who really had COVID-19, it shows Positive for 95% of them), and 92% specific (meaning, for patients with NO Covid-19 prior infection, it shows Negative for 92% of them). Seems like a good test, if you look at it from an omniscient being’s point of view: you already KNOW who has and doesn’t have the disease, and you’re just waiting to snicker at how well the tests turn out.
The trouble is when you turn things around the other way, from a patient’s point of view. You SHOULD NOT CARE what sensitivity and specificity are. You SHOULD CARE what Positive Predictive Value and Negative Predictive Value are.
Okay, now some of you are having hot flashes, or shaking chills, or whatever your reaction was to taking Statistics in high school or college or medical school (or all 3). Imagine also, that your father also knows most of the people teaching your classes because of his professional network, and you’re worried that your grade on this test will reflect poorly not only on you, the son of a statistician, but on your father, your family, your entire lineage. Good, now you’re getting me.
Negative Predictive Value (NPV) is the likelihood that if your test is Negative, it is correct, and you don’t have antibodies.
Positive Predictive Value (PPV) is the likelihood that if your test is Positive, you DID have the disease and now have the antibodies.
Okay, here’s the setup, AND I AM NOT CLAIMING THESE ARE REAL STATS, this is just an exercise. Let’s hypothesize:
- We will test 100,000 people
- The prevalence of disease is 3% (3 of each 100 have the disease in our population)
- Sensitivity of our antibody test is 95%
- Specificity of our antibody test is 92%
See the table as we calculate this:
| Test Result | COVID past or present | No Covid | Total |
| --- | --- | --- | --- |
| Positive | 2,850 (true positive) | 7,760 (false positive) | 10,610 |
| Negative | 150 (false negative) | 89,240 (true negative) | 89,390 |
| Total | 3,000 | 97,000 | 100,000 |
The NPV equals “true negative / (true negative + false negative)”, or 89,240/(150+89,240) or 99.8%. In a population with very few Covid-19 infected patients with antibodies, you’re going to be right MOST OF THE TIME, to find “no antibodies” in most patients. So far so good.
The PPV equals “true positive / (true positive + false positive)”, or 2,850/(2,850+7,760) or 26.8%. What?
This means that the PPV, or chance that a POSITIVE antibody result is CORRECT, is about 27%. So, if you take an antibody test in this population and your result is POSITIVE, there is about a 73% chance THAT TEST IS INCORRECT. Can you imagine? “Here is your result, Sir: your antibody test is Positive, but 3/4 of the time that is wrong.”
So the test we’re describing above, with the above assumptions, is helpful when the result is NEGATIVE (right 99.8% of the time) but NO HELP AT ALL (wrong about 73% of the time) if the test is POSITIVE. Got it?
Okay, let’s try a second scenario. Let’s hypothesize:
- We will test 100,000 people (same)
- The prevalence of disease is 3% (3 of each 100 have the disease in our population) (same)
- Sensitivity of our antibody test is 95% (same)
- Specificity of our antibody test is 99.5% (DIFFERENT)
Here is our new table:
| Test Result | COVID past or present | No COVID | Total |
| --- | --- | --- | --- |
| Positive | 2,850 (true positive) | 485 (false positive) | 3,335 |
| Negative | 150 (false negative) | 96,515 (true negative) | 96,665 |
| Total | 3,000 | 97,000 | 100,000 |
NPV is the same: still 99.8% accurate. A negative is pretty good.
PPV is now: 2,850/(2,850+485) = 85.4%.
Therefore! Pushing this antibody test’s performance up to 99.5% specific makes a HUGE reduction in the number of False Positives, and makes it so that a Positive test for a patient is going to be right 85% of the time! Not perfect, but way better than 26%.
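The arithmetic behind both tables can be checked in a few lines of Python. This is just a sketch using the hypothetical numbers above, not real test statistics:

```python
def predictive_values(prevalence, sensitivity, specificity, n=100_000):
    """Build the 2x2 confusion table for a test and return (PPV, NPV)."""
    diseased = prevalence * n
    healthy = n - diseased
    tp = sensitivity * diseased   # true positives: sick, test positive
    fn = diseased - tp            # false negatives: sick, test negative
    tn = specificity * healthy    # true negatives: well, test negative
    fp = healthy - tn             # false positives: well, test positive
    ppv = tp / (tp + fp)          # chance a positive result is correct
    npv = tn / (tn + fn)          # chance a negative result is correct
    return ppv, npv

# Scenario 1: 3% prevalence, 95% sensitive, 92% specific
ppv1, npv1 = predictive_values(0.03, 0.95, 0.92)    # PPV ≈ 0.27, NPV ≈ 0.998
# Scenario 2: same, but 99.5% specific
ppv2, npv2 = predictive_values(0.03, 0.95, 0.995)   # PPV ≈ 0.85
```

Playing with the `prevalence` argument makes the same point from another angle: raise prevalence and PPV climbs even with the mediocre test, which is exactly why “how common is the disease in the tested population?” matters as much as the test itself.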
See what I mean? Moving from Sensitivity and Specificity to NPV and PPV makes a really big difference when it comes to thinking “should I get this test?” and “can I trust the result?” Maybe don’t rush right into getting your test until you chat with your doctor about how well it performs, what it might mean, and how useful these tests truly are.
Right now, for example, at UCHealth, we are only recommending testing patients who wish to donate plasma for our research study to infuse antibody-rich plasma into critically ill Covid-19 patients. Over time, as we learn more, we’ll expand testing to more patients (soon).
Thanks to Ed Ashwood, Medical Director, Clinical Lab, University of Colorado Hospital from whom I “borrowed” much of this example.
CMIO’s take? Whew! Statistics is hard. Who knew that Dad was right about how important Statistics is? Please look on your fellow statistics geek friends with kindness, they’re making our world a safer place. And, be careful what you ask from an antibody test.
Virtual meetings are draining, and I’m on them up to 8 hours a day, even busier now with all the EHR modifications, keeping up with policy changes, what Covid-testing is available, how we admit, treat, discharge, follow, track patients.
At the ends of long hours, long days, long weeks, our nerves are frayed.
I’ve observed that interactions between people have everything to do with the interpersonal skills of the individuals. Sometimes the conversation does NOT go well. Whether it is by email (worst for crucial conversations), by phone (slightly less bad), by online video meeting (slightly less bad) or in person (best, when possible), it is certainly worsened by the pandemic situation.
I’ve been taking a Story Skills Workshop (by Seth Godin and Bernadette Jiwa) that recently concluded. I have to say that I’ve learned quite a lot, and not what I was expecting to learn. I highly, highly recommend it. Seth and Bernadette offer a series of online lessons, released over time. There are about 6 expert coaches, and the instruction is to sign up for an interest group or ‘accountability group’. You’re given a story structure (the 5 C’s: Context, Catalyst, Complication, Change, Consequence) and then specific lessons to write and polish specific elements of your own story in this framework. The cool part is the instruction to ‘first write your own story, and then go comment on at least 5 others.’
- I learned that it is possible, in an online-only course, to develop a sense of community and collegiality in a short 30 days.
- I learned that it is crucial to be gentle in first contact with others online. For example, when giving feedback on others’ stories, DO NOT start right in with ‘why don’t you add more Emotion to that moment in your story?’ You’ll learn (as I did) that the conversation either stops or becomes defensive. Remember that online conversations carry ZERO nonverbal cues: no kind tone of voice, no friendly posture. All you see are the words, and it is automatic to imagine them coming from a frowning critic with crossed arms, shaking his brutish head. [Pause for self-reflection amongst my blog readers, as well as from myself…]
- Instead, try something my theater-trained son taught me:
‘I like… I wish… What if…’ (my highly emotionally intelligent son)
- Framing any response this way allows your recipient to hear something positive, then a neutrally posed concern, followed by a tentative suggestion. Having been on both sides of such a well-formed critique, I can say: it is EASY to write, doesn’t take longer, and on the receiving end FEELS COMPLETELY DIFFERENT. It FEELS like a close friend, reaching a hand over to pull you up to a higher step.
- FOR EXAMPLE: Take one of my story-critiques of a co-participant in the story workshop, not done well on my part: “Why don’t you add more emotion to your story? It reads like a timeline, but nothing about what you felt, or how that impacted you.” I thought I was clever, to point out one of the main points of that week’s lesson. What I received was… no response. Hmm.
- Rephrasing the reply using this framework, when I replied to a different participant’s story, sounded like this: “Hi, Joe! I liked your story, especially the unexpected part about running away from home at 16. I wish I could be there at that moment when you made the decision, everything boiling-over, and then a crucial moment. What if you paused in your story and told us what you were thinking and feeling right then? I would be riveted.” Guess what? We had a great online conversation after that, and he re-wrote his story, and I WAS RIVETED. Win-win.
CMIO’s take? Story telling: cool. Gentle, effective feedback: cooler. Don’t we all need to get better at this?