Great news this week: the Mexican flu epidemic is officially over in the Netherlands. The first cases of Mexican flu were reported in July of this year. At that time the World Health Organisation (WHO) had already signalled that the danger of a flu pandemic was growing, and every country was advised to take appropriate countermeasures. One in every three people would get infected if no countermeasures were taken. In the Netherlands this led to the “Grip op Griep” campaign, informing people on how to reduce the chance of getting infected. With silly posters, people were told how to sneeze and to disinfect their hands, reducing the chance of infection. People showing symptoms of the Mexican flu were tested and, if required, vaccinated. Later this was turned into a countrywide vaccination campaign for the elderly and the young. In the Netherlands, as of this week, 51 people have died, 2,168 people have been hospitalized and 4 million people have been vaccinated. The vaccination alone cost around 300 million euros. When comparing the number of infections with previous years (see http://influenzanet.com/), you can question whether it was all worth the fuss: the number of Mexican flu infections is highly overestimated, and the cost effectiveness of the campaign is questionable. Couldn’t this be improved with a slightly more analytical approach?

One of the things that could be improved is the level of understanding doctors have of risk and probability. A test by Deborah Bennett on the ability of doctors to interpret test results shows this. In the test, a doctor is asked to estimate the probability that a patient has a disease, given a positive test result. The doctor knows that the disease strikes once in every thousand people, and that the test has a 5% false positive rate. People are tested at random, regardless of whether they are suspected of having the disease. What is the probability that the patient has the disease? Most doctors will say 95%, but this is a gross overestimate. What did you think?

To get the correct answer, use the fact that out of every thousand people who take the test, one will have the disease and 999 will not. Anyone who actually has the disease gets a positive result, so there are no false negatives (this is complicated enough for the doctor, but if you like a challenge, assume a detection efficiency of 80%). One out of every thousand tests is therefore a true positive. The remaining 999 tests should come back negative, but 49.95 of them will also give a positive result (the 5% false positive rate). In summary, we have 50.95 positive results in every 1000 tests, but only one of these is a true positive. So one in every 50.95 positive tests identifies a person who actually has the disease, or 2%! Believe it or not, we just applied Bayes’ theorem!
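The counting argument above can be written as a few lines of code. This is just a sketch of the same arithmetic: the `posterior` helper below is my own naming, and the numbers (0.1% prevalence, 5% false positive rate) come straight from the example.

```python
# Sanity check of the 1-in-1000 disease example via Bayes' theorem.
def posterior(prevalence, sensitivity, false_positive_rate):
    """P(ill | positive test) = true positives / all positives."""
    true_pos = sensitivity * prevalence                 # e.g. 1 in 1000
    false_pos = false_positive_rate * (1 - prevalence)  # e.g. 49.95 in 1000
    return true_pos / (true_pos + false_pos)

# Base case: perfect detection, i.e. no false negatives.
print(f"P(ill | positive) = {posterior(0.001, 1.0, 0.05):.1%}")   # about 2%

# The extra challenge: 80% detection efficiency.
print(f"With 80% sensitivity: {posterior(0.001, 0.8, 0.05):.1%}")  # even lower
```

With 80% sensitivity the answer drops further, to roughly 1.6%: fewer true positives are caught, while the false positives stay almost the same.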

P(Ill given Positive) = P(Positive given Ill) × P(Ill) / P(Positive) = 100% × 0.1% / 5.1% ≈ 2%. Note the dramatic effect on cost if the test results are interpreted the wrong way: it leads to the unnecessary treatment of 50 people in every 1000, so 51 times more budget is spent than required, assuming the 50 false positive patients suffer no side effects from their treatment that require further medical care. With this in mind, you can question whether the decision to vaccinate 4 million people was a good one. Recent research from the University of Groningen (see Robin de Vries) shows that the current models the Dutch government uses to estimate the cost and effectiveness of preventive vaccination need improvement. I am convinced that more and better applied analytics would help.

**Note: Back to Bayes’ theorem**. Just a few days ago a terrorist tried to blow up a plane flying from Amsterdam to Detroit. Questions were raised about the effectiveness of the intelligence agencies trying to identify terrorists. Surely this attack was a false negative in terms of testing. It is interesting to figure out how effective (in terms of a low false negative rate) a test would have to be to identify a terrorist with a high level of reliability, especially in relation to the number of false positives such a test would generate.
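As a rough thought experiment on that question (the prevalence figure is my own assumption, purely for illustration): suppose one in a million travellers is a terrorist, and a screening test catches 99% of them. The same Bayes arithmetic as in the disease example shows that even tiny false positive rates swamp the true positives.

```python
# Illustrative only: assumed prevalence of 1 terrorist per million
# travellers, assumed 99% detection rate. Same Bayes' rule as before.
def posterior(prevalence, sensitivity, false_positive_rate):
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 1e-6  # assumption: 1 in a million travellers
for fpr in (0.05, 0.01, 0.001, 0.0001):
    p = posterior(prevalence, 0.99, fpr)
    print(f"false positive rate {fpr:.2%}: P(terrorist | flagged) = {p:.4%}")
```

Under these assumptions, even with a false positive rate of 0.01%, fewer than one in a hundred flagged travellers is an actual terrorist; at 5%, it is about one in fifty thousand. The base rate dominates here even more brutally than in the medical example.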