The more uncertain the result of a laboratory test is before it is performed, the more useful that test generally is…
Sounds a little paradoxical, but it is absolutely true.
If we are looking to confirm something that is almost certain before the lab test is performed, then we need a “super-sensitive” test to do so. Otherwise we run the risk of giving false negative results.
For example, if we have a teenager with a sore throat and lymphadenopathy, a lymphocytosis and atypical lymphocytes on blood film, then the probability of this being EBV infection is about 90%. There is little point then in doing a confirmatory Monospot test with a sensitivity of only 80–85%. In a sizeable proportion of cases it will simply return a negative result in a patient who actually has EBV infection.
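To put rough numbers on this (taking, purely for illustration, a sensitivity of 80% and a specificity of around 99% for the Monospot): of 100 such teenagers, 90 have EBV, of whom 72 test positive and 18 test negative; of the 10 without EBV, about 10 test negative. So of roughly 28 negative reports, 18 are false negatives, and the negative predictive value is only about 35%.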
And if we are looking to diagnose a long shot (aka a very unlikely diagnosis), then we had better be sure our laboratory test is “super-specific”, otherwise we will run the risk of giving false positive results.
For example, if we want to diagnose dengue fever in a patient with “flu-like” symptoms returning from Mexico (an area of relatively low dengue endemicity), then we need to think twice about performing dengue serology, which has a specificity of only about 95%. With such a low pre-test probability, we are just as likely to report a positive result in someone who does not actually have dengue as in someone who does.
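Again with illustrative figures (assuming, say, a 5% pre-test probability and a sensitivity of around 90% for the serology): of 1000 such travellers, 50 have dengue and 45 of them test positive, while of the 950 without dengue about 48 test falsely positive. A positive report is therefore about as likely to be wrong as right, a positive predictive value of roughly 50%.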
What we are doing in practice here is taking our pre-test probability and using it as a proxy for the prevalence of disease in our tested population. Once we know this, we can use the test’s sensitivity and specificity to calculate positive and negative predictive values, not always with the results we would like…
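For anyone who wants to play with the arithmetic, a minimal sketch of the calculation in Python is below. The prevalence, sensitivity and specificity figures are the assumed values from the two examples above, not measured data.

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Return (PPV, NPV), treating the pre-test probability as the
    prevalence of disease in the tested population."""
    tp = prevalence * sensitivity                # true positives
    fn = prevalence * (1 - sensitivity)          # false negatives
    tn = (1 - prevalence) * specificity          # true negatives
    fp = (1 - prevalence) * (1 - specificity)    # false positives
    ppv = tp / (tp + fp)   # chance a positive result is genuine
    npv = tn / (tn + fn)   # chance a negative result is genuine
    return ppv, npv

# "Dead cert": EBV at 90% pre-test probability, Monospot ~80% sensitive
# (99% specificity assumed for illustration) -> NPV ~ 0.35
print(predictive_values(0.90, 0.80, 0.99))

# "Long shot": dengue at an assumed 5% pre-test probability, serology
# ~95% specific (90% sensitivity assumed for illustration) -> PPV ~ 0.49
print(predictive_values(0.05, 0.90, 0.95))
```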
Laboratory specialists tend to be more aware of testing limitations such as these. Clinicians, in general, tend to just take the laboratory results as gospel.
But I believe it is ultimately the laboratory’s responsibility to stress the limitations of using laboratory testing for “Dead Certs or Long Shots”, and either prevent such testing from taking place or put big disclaimers on the results.
Michael