“Dead Certs and Long Shots”

The more uncertain the result will be, the more useful the laboratory test generally is…

Sounds a little paradoxical, but it is absolutely true.

If we are looking to confirm something that is almost certain before the lab test is performed, then we need a “super-sensitive” test to fulfil this task. Otherwise we run the risk of giving false negative results.

For example, if we have a teenager with a sore throat and lymphadenopathy, a lymphocytosis and atypical lymphocytes on blood film, then the probability of this being EBV infection is about 90%. There is little point then in doing a confirmatory Monospot test with a sensitivity of 80-85%: many of the negative results it gives will be on patients who actually have EBV infection.

And if we are looking to diagnose a long shot (aka a very unlikely diagnosis) then we had better be sure our laboratory test is “super-specific”, otherwise we will run the risk of giving false positive results.

For example, if we want to diagnose dengue fever in a patient with "flu-like" symptoms returning from Mexico (an area of relatively low dengue endemicity), then we need to think twice about performing dengue serology, which has a specificity of about 95%. A positive result is just as likely to come from someone who doesn't actually have dengue as from someone who does.

What we are doing in actual practice here is taking our pre-test probability, and using it to give a prevalence rate (by proxy) in our tested population. Once we know this, then we can use our test sensitivity and specificity to calculate positive and negative predictive values, not always with the results we would like…
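As a sketch of that calculation, the Python below derives PPV and NPV from sensitivity, specificity and prevalence, then applies it to the dengue scenario. The ~95% specificity is from the post; the 90% sensitivity and 2% pre-test probability are illustrative assumptions, not figures from the text.

```python
def ppv(sens, spec, prev):
    """Positive predictive value: P(disease | positive test)."""
    tp = sens * prev              # true positives
    fp = (1 - spec) * (1 - prev)  # false positives
    return tp / (tp + fp)

def npv(sens, spec, prev):
    """Negative predictive value: P(no disease | negative test)."""
    tn = spec * (1 - prev)        # true negatives
    fn = (1 - sens) * prev        # false negatives
    return tn / (tn + fn)

# Dengue serology in a low-prevalence traveller cohort:
# ~95% specificity (as above); 90% sensitivity and 2% pre-test
# probability are assumed for illustration.
print(f"Dengue PPV: {ppv(0.90, 0.95, 0.02):.0%}")  # roughly 1 in 4 positives is real
```

At a 2% pre-test probability, only about a quarter of positive results would reflect true dengue infection, which is exactly the "long shot" problem described above.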

Laboratory specialists tend to be more aware of testing limitations such as these. Clinicians, in general, tend to just take the laboratory results as gospel.

But I believe it is ultimately the laboratory's responsibility to stress the limitations of using laboratory testing for "Dead Certs or Long Shots", and either prevent such testing from taking place, or put big disclaimers on the results.



“Don’t believe everything you read…”

I know it is difficult to believe but I was doing a little bit of background reading recently on Monospot tests (looking for heterophile antibodies to Epstein Barr Virus). On reading a guideline I came across this statement…

“The presence of heterophile antibodies in a symptomatic adolescent or young adult has a sensitivity of approximately 90%, and specificity of almost 100% for glandular fever.”

Now the question I have is what is wrong with this statement?

The answer is that sensitivity and specificity are functions of the test itself. Different tests for the same disease from different manufacturers may have different sensitivity and specificity.

However once you start applying the test to population cohorts such as symptomatic adolescents, then you need to start talking in terms of positive and negative predictive value.

….and the paradox is that when you use a test such as the Monospot, with a sensitivity of approximately 80%, in a high-prevalence population such as symptomatic adolescents, your negative predictive value will be relatively lower than in a low-prevalence cohort, as there will be a significant number of people who test negative but actually have the disease (false negatives).
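To put numbers on that paradox: the sketch below uses the ~80% sensitivity from the text and the near-100% (here 99%) specificity from the quoted guideline; the two prevalence figures are illustrative assumptions.

```python
def npv(sens, spec, prev):
    """Negative predictive value: P(no disease | negative test)."""
    tn = spec * (1 - prev)        # true negatives
    fn = (1 - sens) * prev        # false negatives
    return tn / (tn + fn)

# Monospot: ~80% sensitivity (text), ~99% specificity (guideline).
# Prevalence figures are assumed for illustration.
high_prev = npv(0.80, 0.99, 0.50)  # symptomatic adolescents: EBV common
low_prev  = npv(0.80, 0.99, 0.05)  # unselected low-prevalence cohort
print(f"NPV at 50% prevalence: {high_prev:.0%}")  # ~83%
print(f"NPV at  5% prevalence: {low_prev:.0%}")   # ~99%
```

In the high-prevalence cohort, roughly one in six negative Monospot results would come from a patient who actually has glandular fever.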

Sometimes we need to think about the science behind the statements in a guideline and make sure they make sense in our heads.

Don’t believe everything you read, (especially when it is written by me!)


For a really nice presentation on sensitivity, specificity, PPV etc., click here (5-10 minute read).