
“The uncertainty of certainty”

There is one thing certain in the microbiology laboratory: the results will be uncertain. This has nothing to do, of course, with laboratory systems or the competency of staff members; it is simply an acceptance that there is no such thing as a certain result…

The other thing to note is that the degree of certainty varies between different tests, not only between separate assays but even between the multiple targets contained within a single assay, e.g. any multiplex PCR.

Take for example a multiplex respiratory PCR, containing 24 or so different targets. (Most labs will “demand manage” such expensive assays, allowing them only for immunocompromised patients or the seriously ill. Nevertheless, such assays are becoming increasingly popular.)

In a multiplex respiratory assay, a positive result for rhinovirus is almost certainly going to have a greater chance of being “the genuine article” than a positive result for bocavirus.

This is because each individual target pathogen has a different positive predictive value (PPV), based on both its specificity and its relative prevalence in the tested population. As a result, positive predictive values for individual pathogens within a multiplex can, and do, vary greatly.
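As a rough sketch of why prevalence matters so much, the calculation below compares two targets with identical assumed analytical performance; all figures are illustrative assumptions, not real assay data.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative figures only: both targets assumed to have 95% sensitivity
# and 99.5% specificity; only the prevalence in the tested population differs.
print(f"Rhinovirus (prevalence 15%): PPV = {ppv(0.95, 0.995, 0.15):.0%}")  # ~97%
print(f"Bocavirus  (prevalence  1%): PPV = {ppv(0.95, 0.995, 0.01):.0%}")  # ~66%
```

Same test performance, very different confidence in a positive result, purely because one virus is common in the tested population and the other is not.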

But how do we convey such information to clinicians? Quoting the calculated PPV for each target in a multiplex would make for a long and complex laboratory report. I would not go there… It is probably best to use an appropriate comment for certain results, e.g. “Bocavirus is uncommonly seen in population x, therefore the positive predictive value of this result may be sub-optimal. Close clinical correlation is required.”

Of course, clinicians can increase the degree of certainty by clarifying the “pre-test probability”. For example, a positive bocavirus result in a 6 month old during the winter season is much more likely to represent a true positive than a positive bocavirus result in an adult during the summer season.

With multiplex PCRs, sometimes you are “forced” to perform a test, when it would be better not to know…

Clinicians, in general, tend to believe that all laboratory results are certain, until we produce one that is very clearly wrong! After that, they will believe all results are uncertain until that trust is rebuilt over time.

To understand the certainty of testing, you first need to understand the laws of probability. All a laboratory result ever does is convert the pre-test probability of disease X into a post-test probability.

It neither confirms nor excludes…
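For those who like to see the arithmetic, here is a minimal sketch of that conversion using likelihood ratios; the sensitivity, specificity and pre-test probability figures are purely illustrative.

```python
def post_test_probability(pre_test_prob: float, sensitivity: float,
                          specificity: float, result_positive: bool) -> float:
    """Convert a pre-test probability into a post-test probability
    using the likelihood ratio of the observed result."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    if result_positive:
        lr = sensitivity / (1 - specificity)   # positive likelihood ratio
    else:
        lr = (1 - sensitivity) / specificity   # negative likelihood ratio
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Illustrative test: 90% sensitivity, 95% specificity, 30% pre-test probability.
print(post_test_probability(0.30, 0.90, 0.95, result_positive=True))   # ~0.89, not 1.0
print(post_test_probability(0.30, 0.90, 0.95, result_positive=False))  # ~0.04, not 0.0
```

Neither result lands on 0% or 100%; the test only shifts the odds.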

Michael


“Testing ad infinitum”

Take for example the patient who presents with “recent weight loss”, even though their recorded weight is exactly the same as it was 6 months ago.

…or the patient who “bounds” into the surgery, looking far healthier than the doctor will ever be, and then complains of “tiredness”.

…or the patient who has diagnosed themselves with “Chronic Lyme Disease” on the internet, even though they have never travelled to an area endemic for Lyme disease.

The temptation for the clinician in such cases is to order a whole battery of tests in order to prove to the patient, or reassure them, that they have no organic pathology.

The patient then leaves the clinic with a whole list of (often expensive) laboratory investigations, and thinks to themselves:

“Wow, look at how many tests I am getting. The doctor must be worried for me. I really must be sick!”

And thus the cycle goes on. The tests come back negative, but the sick role is now reinforced. The patient then often comes back for more, or goes off elsewhere to seek a second opinion…

Worse still, if enough tests are performed, then one will eventually come back falsely equivocal or positive, confusing the issue even further for the clinician. And the positive reinforcement of the sick role in the patient has just gone through the roof!
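A quick back-of-the-envelope illustration of that effect, assuming (purely for illustration) independent tests that each have 95% specificity:

```python
# Chance of at least one false positive across a battery of independent tests
# in a patient with no underlying disease. The 95% specificity figure and the
# independence assumption are illustrative simplifications.
specificity = 0.95
for n in (1, 5, 10, 20):
    p_any_false_positive = 1 - specificity ** n
    print(f"{n:>2} tests: {p_any_false_positive:.0%} chance of at least one false positive")
```

By twenty tests, the odds are well over even that something spurious will come back.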

Everyone is scared of missing something, of not diagnosing that long shot… But sometimes it is best just to trust good clinical acumen, and appreciate that laboratory testing can occasionally cause more harm than good…

Michael

“Dead Certs and Long Shots”

The more uncertain the result, the more useful the laboratory test generally is…

Sounds a little paradoxical, but it is absolutely true.

If we are looking to confirm something that is almost certain before the lab test is performed, then we need a “super-sensitive” test to fulfil this task. Otherwise we run the risk of giving false negative results.

For example, if we have a teenager with a sore throat and lymphadenopathy, a lymphocytosis and atypical lymphocytes on blood film, then the probability of this being EBV infection is about 90%. There is little point then in doing a confirmatory Monospot test with a sensitivity of 80-85%. This will only lead to negative results in patients who actually have EBV infection.
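A rough check of that claim, using assumed figures (85% sensitivity from the range above, and a 99% specificity assumed purely for the sake of the sketch):

```python
# Pre-test probability 90%, Monospot sensitivity 85%, specificity assumed 99%.
pre_test = 0.90
sensitivity = 0.85
specificity = 0.99

p_negative = (1 - sensitivity) * pre_test + specificity * (1 - pre_test)
npv = specificity * (1 - pre_test) / p_negative   # P(no EBV | negative result)
print(f"Overall chance of a negative Monospot: {p_negative:.0%}")   # ~23%
print(f"Chance of EBV despite a negative result: {1 - npv:.0%}")    # ~58%
```

In other words, a negative Monospot in this setting still leaves the patient more likely than not to have EBV.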

And if we are looking to diagnose a long shot (aka a very unlikely diagnosis) then we had better be sure our laboratory test is “super-specific”, otherwise we will run the risk of giving false positive results.

For example, if we want to diagnose dengue fever in a patient with “flu-like” symptoms returning from Mexico (an area of relatively low dengue endemicity), then we need to think twice about performing dengue serology, which has a specificity of about 95%. You are just as likely to report a positive test in someone who doesn’t actually have dengue as in someone who does.

What we are doing in actual practice here is taking our pre-test probability, and using it to give a prevalence rate (by proxy) in our tested population. Once we know this, then we can use our test sensitivity and specificity to calculate positive and negative predictive values, not always with the results we would like…
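The dengue example above, worked through in exactly that way; the 5% pre-test probability and 90% sensitivity are illustrative assumptions, with only the 95% specificity taken from the text.

```python
# Pre-test probability used as prevalence by proxy.
prevalence = 0.05      # assumed pre-test probability of dengue
sensitivity = 0.90     # assumed serology sensitivity
specificity = 0.95     # specificity quoted above

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)
print(f"PPV at a 5% pre-test probability: {ppv:.0%}")   # ~49%
```

Roughly half of all positive reports would be false positives, which is exactly the “not always with the results we would like” scenario.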

Laboratory specialists tend to be more aware of testing limitations such as these. Clinicians, in general, tend to take laboratory results as gospel.

But I believe it is ultimately the laboratory’s responsibility to stress the limitations of laboratory testing for “Dead Certs or Long Shots”, and either prevent such testing from taking place, or put big disclaimers on the results.

Michael