
If you’re here to understand some of the challenges of antibody testing with Covid-19, read on. Be warned: math ahead. What’s the TL;DR?
- We don’t know whether having antibodies indicates that a person is IMMUNE to future re-infection with Covid-19, now or later.
- We don’t know whether having antibodies means that a person is NO LONGER infectious to others.
- We’re going to discuss Sensitivity, Specificity, Positive Predictive Value, and Negative Predictive Value, and note that MOST antibody tests out there may show Sensitivity and Specificity in the 80–90% range (seems good!)
- BUT because the Prevalence of the disease is unknown and likely low (single digits or teens maybe), the Positive Predictive Value is likely to be TERRIBLE, meaning a positive result might just be … meaningless or WRONG MOST OF THE TIME (OMG).
Some of you know that my father is a statistician, so he is likely to read this uncomfortably and have lots of concerns about the accuracy of my statements. You may also know the quote popularized by Mark Twain:
> There are three kinds of lies: Lies, Damned Lies, and … Statistics
But, here goes anyway; the point is important, it is bothering me, and I want people to know at least what little I understand.
Let’s say that an antibody test is 95% sensitive (meaning, for patients who really had COVID-19, it shows Positive for 95% of them), and 92% specific (meaning, for patients with NO Covid-19 prior infection, it shows Negative for 92% of them). Seems like a good test, if you look at it from an omniscient being’s point of view: you already KNOW who has and doesn’t have the disease, and you’re just waiting to snicker at how well the tests turn out.
The trouble is when you turn things around the other way, from a patient’s point of view. You SHOULD NOT CARE what sensitivity and specificity are. You SHOULD CARE what Positive Predictive Value and Negative Predictive Value are.
Okay, now some of you are having hot flashes, or shaking chills, or whatever your reaction was to taking Statistics in high school or college or medical school (or all 3). Imagine, also, that your father knows most of the people teaching your classes because of his professional network, and you’re worried that your grade on this test will reflect poorly not only on you, the son of a statistician, but on your father, your family, your entire lineage. Good, now you’re getting me.
Negative Predictive Value (NPV) is the likelihood that if your test is Negative, it is correct, and you don’t have antibodies.
Positive Predictive Value (PPV) is the likelihood that if your test is Positive, you DID have the disease and now have the antibodies.
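For the formula-inclined, both of these can be written directly in terms of sensitivity, specificity, and prevalence (this is just Bayes’ rule dressed up; the notation here is mine, not anything official):

$$\mathrm{PPV} = \frac{\mathrm{sensitivity} \times \mathrm{prevalence}}{\mathrm{sensitivity} \times \mathrm{prevalence} + (1 - \mathrm{specificity}) \times (1 - \mathrm{prevalence})}$$

$$\mathrm{NPV} = \frac{\mathrm{specificity} \times (1 - \mathrm{prevalence})}{\mathrm{specificity} \times (1 - \mathrm{prevalence}) + (1 - \mathrm{sensitivity}) \times \mathrm{prevalence}}$$

Notice that prevalence sits right inside the PPV formula. That’s the whole story of this post: a great-looking test can still produce a lousy PPV when the disease is rare.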
Okay, here’s the setup, AND I AM NOT CLAIMING THESE ARE REAL STATS; this is just an exercise. Let’s hypothesize:
- We will test 100,000 people
- The prevalence of disease is 3% (3 of every 100 in our population have or have had the disease)
- Sensitivity of our antibody test is 95%
- Specificity of our antibody test is 92%
See the table as we calculate this:
| Test Result | COVID past or present | No COVID | Total |
| --- | --- | --- | --- |
| Positive | 2,850 | 7,760 | 10,610 |
| Negative | 150 | 89,240 | 89,390 |
| Total | 3,000 | 97,000 | 100,000 |
The NPV equals “true negative / (true negative + false negative)”, or 89,240/(150+89,240), or 99.8%. In a population where very few patients have been infected, a Negative result is going to be right MOST OF THE TIME. So far so good.
The PPV equals “true positive / (true positive + false positive)”, or 2,850/(2,850+7,760), or 26.9%. What?
This means that the PPV, or the chance that a POSITIVE antibody result is CORRECT, is about 27%. So, if you take an antibody test in this population and your result is POSITIVE, there is a 73% chance THAT TEST IS INCORRECT. Can you imagine? “Here is your result, Sir, your antibody test is Positive, but nearly 3/4 of the time that is wrong.”
So the test we’re describing above, with the above assumptions, is helpful when the result is NEGATIVE (right 99.8% of the time) but NO HELP AT ALL (wrong 73% of the time) if the test is POSITIVE. Got it?
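If you’d rather let a computer do the arithmetic, here’s a minimal sketch in Python. The `antibody_test_table` function is just my own illustration of the table above, not from any statistics library:

```python
def antibody_test_table(n, prevalence, sensitivity, specificity):
    """Build the 2x2 table for n people and return (PPV, NPV)."""
    had_covid = n * prevalence            # COVID past or present
    no_covid = n - had_covid              # never infected

    true_pos = had_covid * sensitivity    # infected, test Positive
    false_neg = had_covid - true_pos      # infected, test Negative
    true_neg = no_covid * specificity     # never infected, test Negative
    false_pos = no_covid - true_neg       # never infected, test Positive

    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Scenario 1: 100,000 people, 3% prevalence, 95% sensitive, 92% specific
ppv, npv = antibody_test_table(100_000, 0.03, 0.95, 0.92)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 26.9%, NPV = 99.8%
```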
Okay, let’s try a second scenario. Let’s hypothesize:
- We will test 100,000 people (same)
- The prevalence of disease is 3% (3 of every 100 have or have had the disease) (same)
- Sensitivity of our antibody test is 95% (same)
- Specificity of our antibody test is 99.5% (DIFFERENT)
Here is our new table:
| Test Result | COVID past or present | No COVID | Total |
| --- | --- | --- | --- |
| Positive | 2,850 | 485 | 3,335 |
| Negative | 150 | 96,515 | 96,665 |
| Total | 3,000 | 97,000 | 100,000 |
NPV is essentially the same: still 99.8%. A Negative is pretty good.
PPV is now: 2,850/(2,850+485) = 85.5%.
Therefore! Pushing this antibody test’s specificity up to 99.5% HUGELY reduces the number of False Positives, so a Positive test for a patient is going to be right about 85% of the time! Not perfect, but way better than 27%.
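Plugging the new specificity into the same hypothetical sketch from above:

```python
# Scenario 2: same 100,000 people and 95% sensitivity; specificity now 99.5%
ppv, npv = antibody_test_table(100_000, 0.03, 0.95, 0.995)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 85.5%, NPV = 99.8%
```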
See what I mean? Moving from Sensitivity and Specificity to NPV and PPV makes a really big difference when it comes to thinking “should I get this test?” and “can I trust the result?” Maybe don’t rush right into getting your test until you chat with your doctor about how well it performs, what a result might mean, and how useful these tests truly are.
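One more pass with the same hypothetical sketch: hold the test constant (95% sensitive, 99.5% specific) and vary only prevalence, and you can see why the unknown, likely-low prevalence from the TL;DR matters so much:

```python
# Same test, different prevalences: PPV rises and falls with prevalence.
for prev in (0.01, 0.03, 0.10, 0.30):
    ppv, _ = antibody_test_table(100_000, prev, 0.95, 0.995)
    print(f"prevalence {prev:.0%}: PPV = {ppv:.1%}")
# prevalence 1%: PPV = 65.7%
# prevalence 3%: PPV = 85.5%
# prevalence 10%: PPV = 95.5%
# prevalence 30%: PPV = 98.8%
```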
Right now, for example, at UCHealth, we are recommending antibody testing only for patients who wish to donate plasma for our research study, which infuses antibody-rich plasma into critically ill Covid-19 patients. Over time, as we learn more, we’ll expand testing to more patients (soon).
Thanks to Ed Ashwood, Medical Director of the Clinical Lab at University of Colorado Hospital, from whom I “borrowed” much of this example.
CMIO’s take? Whew! Statistics is hard. Who knew that Dad was right about how important Statistics is? Please look on your fellow statistics geek friends with kindness; they’re making our world a safer place. And, be careful what you ask of an antibody test.