Can too much testing make a Pandemic worse?
Victorian premier Daniel Andrews has announced that COVID-19 testing will be ramped up to 100,000 tests per fortnight. Other political leaders are boasting of their massive testing capacity.
The interweb in all its many forms has been clamouring for this for quite some time and large parts of society are hanging their metaphorical hats on testing being the pathway to ending lockdowns.
But can too much testing be a bad thing?
Too right it can, because of our pesky friend the false positive.
Small scale testing on symptomatic patients
100 highly symptomatic patients present to our Emergency Department; 50 test positive and 50 test negative. However, of those 50 positive tests, 2 were not actually sick - these are our false positives. Of our 50 negative tests, one result was wrong and this person was actually sick - a false negative.
Our test is very effective and enables 97 of the 100 patients to be handled correctly: 49 correctly receive no treatment and 48 receive the treatment they need.
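Tallied up, that example is just a small confusion matrix. A minimal sketch in Python, with the counts taken straight from the example above:

```python
# Confusion matrix for the 100-patient Emergency Department example.
true_positives = 48   # sick, and the test said positive
false_positives = 2   # healthy, but the test said positive
true_negatives = 49   # healthy, and the test said negative
false_negatives = 1   # sick, but the test said negative

total = true_positives + false_positives + true_negatives + false_negatives
correct = true_positives + true_negatives
accuracy = correct / total

print(f"Patients handled correctly: {correct}/{total} ({accuracy:.0%})")
# Patients handled correctly: 97/100 (97%)
```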
The two False Positives are often not much of an issue either, as long as the treatment is not very invasive or expensive. In a week or so they will still show no symptoms, be retested and sent home as miracle cures. They will write blog entries about how prayer alone or UV light saved them. They won't sue, because they are just happy to be alive.
The False Negative is a genuine concern. This person will appear on an evening Current Affairs show, they will be that person sent home who fell ill later. They will sue the hospital. The TV audience will lose confidence in health care and in testing. Their case will be fuel to the Facebook groups of alternative cures, herbal tinfoil wearers and wobble bearded purveyors of nonsense. It will all be some sort of conspiracy or other purely designed to discredit rosewater or thistle-juice.
But 97% is a very good test; nothing can be perfect.
So let's test everyone?
Large scale testing on non-symptomatic patients
When we roll our 97% effective test out to the general public we don't always get the lovely clean data we expect. We don't always generate confidence and calm in the general public. Actually the opposite can happen.
Let us presume that our disease has infected 1% of the population.
We are going to test 10 million people.
We therefore know that 1% of those 10 million people - 100,000 - are ill.
First of all, 1% of those genuine cases will return false negatives. So we will not find all 100,000 cases - we will find 99,000.
We also know that roughly 2% of our 10 million tests will be False Positives; that translates to 200,000 false positives.
The headline is of course those 9.7 million negative tests. Lots of confidence in the community?
Think for a moment of the real-world implications. We have 1,000 people - our false negatives - who need care but do not get it. That is a flood of newspaper reports, talkback radio, and you not being able to read social media without being waist-deep in conspiracy theories and fear mongering.
With that many false negative tests, will the general public be anything but fearful or cynical?
There are also more False Positives (200k) than True Positives (99k). We are falsely reporting a disease incidence of 3%, not 1%.
People who are told they are positive have only a 33% chance of actually being positive (99,000 true positives out of 299,000 positive results). We may end up with our Emergency Department beds full predominantly of people who have no symptoms.
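The arithmetic behind those headline numbers fits in a few lines. A minimal sketch, applying the article's rough rates (1% false negatives among genuine cases, and 2% false positives across all 10 million tests, as the text does):

```python
population = 10_000_000
prevalence = 0.01            # 1% of the population is infected
false_negative_rate = 0.01   # 1% of genuine cases test negative
false_positive_rate = 0.02   # applied to all tests, per the article's rounding

infected = population * prevalence                     # 100,000 genuine cases
true_positives = infected * (1 - false_negative_rate)  # 99,000 cases found
false_negatives = infected * false_negative_rate       # 1,000 cases missed
false_positives = population * false_positive_rate     # 200,000 healthy flagged

reported_positives = true_positives + false_positives  # 299,000
reported_incidence = reported_positives / population   # ~3%, not 1%
ppv = true_positives / reported_positives              # chance a positive is real

print(f"Missed cases: {false_negatives:,.0f}")
print(f"Reported incidence: {reported_incidence:.1%}")
print(f"Chance a positive result is genuine: {ppv:.0%}")
```

Strictly, the 2% false positive rate should apply only to the ~9.9 million uninfected people, giving 198,000 rather than 200,000, but the conclusion is unchanged.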
Again we have a large corrosion of faith in the effectiveness of testing.
So less is more, right?
There are rarely simple answers in data science. More testing is probably a very good idea, but unless the testing is 100% correct (and it isn't) we could well over-report the problem and not see the data we are expecting.
A real-world example of this is breast cancer screening. In that case the test works very well for women with risk factors (age, family history, detected abnormalities and so on) but becomes problematic as more and more lower-risk women are tested.
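What drives this is prevalence: hold the test's error rates fixed, and the chance that a positive result is genuine (the positive predictive value) collapses as fewer of the people tested are actually ill. A sketch reusing the 1%/2% error rates from above, across hypothetical prevalence levels:

```python
def positive_predictive_value(prevalence, false_negative_rate=0.01,
                              false_positive_rate=0.02):
    """Chance that a positive test result reflects a genuine case."""
    true_positives = prevalence * (1 - false_negative_rate)
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# As fewer of the people tested are actually ill, positives become noise.
for prevalence in (0.50, 0.10, 0.01, 0.001):
    ppv = positive_predictive_value(prevalence)
    print(f"prevalence {prevalence:>6.1%} -> positive is genuine {ppv:.0%}")
```

At 50% prevalence (our highly symptomatic Emergency Department patients) a positive result is almost certainly real; at 0.1% prevalence the same test's positives are overwhelmingly false.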
This excellent article scratches the surface of this curious statistical anomaly: https://www.nytimes.com/2014/05/07/upshot/universal-mammogram-screening-shows-we-dont-understand-risk.html