I have been looking recently at the relative merits of Mantoux testing versus the newer Interferon Gamma Release Assays (IGRAs) for the diagnosis of latent TB infection. Whilst the IGRAs (such as T-SPOT and QuantiFERON Gold) are not perfect by any stretch of the imagination, they still seem to have difficulty displacing the long-entrenched Mantoux test.
The Mantoux test, which has been in existence for well over a hundred years, suffers from various problems: false negatives due to immunocompromise, false positives after BCG vaccination, inter-observer variability in measuring the results, and the logistics of administration, just for starters.
One wonders whether, if the Mantoux were a new test invented today, a test with so many deficiencies and so much subjectivity would get anywhere near the commercial market. The validation requirements for new tests stipulated by accreditation agencies are much stricter now than they were 100 years ago, 50 years ago, or even 20 years ago.
I think we are applying old rules to old tests to a certain extent. Mantoux testing is an institution, a tradition, and is what a lot of us are used to. However, the fact that it was acceptable testing in previous generations does not mean it is acceptable by today’s standards.
I don’t think IGRAs will need to improve too much more before Mantoux testing ends up as a historical test, and PPD is kept in museums and not laboratories….
I know it is difficult to believe, but I was doing a little bit of background reading recently on Monospot tests (which look for the heterophile antibodies associated with Epstein-Barr virus infection). On reading a guideline I came across this statement…
“The presence of heterophile antibodies in a symptomatic adolescent or young adult has a sensitivity of approximately 90%, and specificity of almost 100% for glandular fever.”
Now the question I have is what is wrong with this statement?
The answer is that sensitivity and specificity are intrinsic properties of the test itself, independent of the population it is applied to. Different tests for the same disease from different manufacturers may have different sensitivities and specificities.
However, once you start applying the test to a particular population cohort, such as symptomatic adolescents, you need to start talking in terms of positive and negative predictive values, which do depend on the prevalence of disease in that cohort.
….and the paradox is that when you use a test such as the Monospot, with a sensitivity of approximately 80%, in a high-prevalence population such as symptomatic adolescents, your negative predictive value will be relatively lower than in a low-prevalence cohort, because there will be a significant number of people who test negative but actually have the disease (false negatives).
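The arithmetic behind this paradox is easy to check for yourself. Below is a minimal sketch in Python; the sensitivity, specificity, and prevalence figures are assumed values for illustration only, not manufacturer or guideline numbers.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied to a cohort with the given prevalence."""
    tp = sensitivity * prevalence              # true positives
    fn = (1 - sensitivity) * prevalence        # false negatives (diseased, test negative)
    tn = specificity * (1 - prevalence)        # true negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# A Monospot-like test: sensitivity ~80%, specificity ~99% (assumed figures)
for prevalence in (0.02, 0.10, 0.30):
    ppv, npv = predictive_values(0.80, 0.99, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

Running this shows the NPV sliding downwards as prevalence rises (from roughly 99.6% at 2% prevalence to about 92% at 30% prevalence with these assumed figures), while the PPV moves in the opposite direction.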
Sometimes we need to think about the science behind the statements in the guideline and make sure it makes sense in our heads.
Don’t believe everything you read, (especially when it is written by me!)
For a really nice presentation on sensitivity, specificity, PPV etc click here (5-10 minute read)
I have been thinking about hepatitis serology recently and more particularly, best practice when trying to diagnose a viral hepatitis using serological testing.
There are several viral causes of hepatitis, such as hepatitis A, B, C, D and E, Epstein-Barr virus, cytomegalovirus, and HIV. (Toxoplasmosis is often included in this group as well, even though it is not a virus!)
….and that is even before you get started on the more esoteric viral causes of hepatitis.
However, these viruses have varying clinical presentations, different incubation periods and particular risk factors. Some cause acute infection and some chronic.
I therefore find it a little frustrating when the request form asks for “hepatitis serology” without specifying the particular viruses that require testing, along with the clinical rationale.
As a laboratory profession, I don’t think we do ourselves or our patients any favours by accepting non-specific requests such as “hepatitis serology”, “viral hepatitis screen”, “hepatitis screen” etc etc. It is essentially encouraging poor practice.
“Hepatitis serology” is not really a test request. It is more of a chapter in a textbook…..