One of the things that could be improved is doctors' understanding of risk and probability. A test by Deborah Bennett on doctors' ability to interpret test results illustrates this. In the test a doctor is asked to estimate the probability that a patient has a disease, given a positive test result. The doctor knows that the disease strikes one in every thousand people, and that the test has a 5% false positive rate. People are tested at random, regardless of whether they are suspected of having the disease. What is the probability that the patient has the disease? Most doctors will say 95%, but this is a gross overestimate. What did you think?
To get the correct answer, use the fact that out of every thousand people who take the test, one will have the disease and 999 will not. Assume anyone who actually has the disease gets a positive result, so there are no false negatives (this is complicated enough for the doctor, but if you like a challenge, assume a detection efficiency of 80% instead). One out of every thousand tests is therefore a true positive. The remaining 999 tests should come back negative, but 49.95 of them will also give a positive result (the 5% false positive rate). So in total we have 50.95 positive results per thousand tests, of which only one is a true positive. Only one in every 50.95 positive tests identifies a person who actually has the disease, or about 2%! Believe it or not, we just applied Bayes' theorem!
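The calculation above can be sketched in a few lines of code. The numbers come straight from the text (prevalence 1/1000, false positive rate 5%, perfect sensitivity); the function name is just illustrative:

```python
# Bayes' theorem for the diagnostic test described above.

def p_disease_given_positive(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = prevalence * sensitivity              # people with the disease who test positive
    false_pos = (1 - prevalence) * false_positive_rate  # healthy people who test positive anyway
    return true_pos / (true_pos + false_pos)

# Numbers from the text: 1 in 1000 prevalence, no false negatives, 5% false positives.
p = p_disease_given_positive(prevalence=1 / 1000, sensitivity=1.0,
                             false_positive_rate=0.05)
print(f"{p:.1%}")  # about 2%, not 95%
```

Swapping in the 80% detection efficiency mentioned above (`sensitivity=0.8`) drops the answer slightly further, to roughly 1.6%.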
Note: Back to Bayes' theorem. Just a few days ago a terrorist tried to blow up a plane flying from Amsterdam to Detroit. Questions were raised about the effectiveness of the intelligence agencies trying to identify terrorists. Surely this attack was a false negative, in testing terms. It is interesting to figure out how effective (how low a false negative rate) a test would have to be to identify a terrorist with high reliability, especially in relation to the level of false positives such a test might generate.
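The same base-rate effect dominates the screening question. As a rough sketch with made-up illustrative numbers (say, one terrorist per ten million passengers and a test that never misses), we can see how the false positive rate limits the reliability of a positive flag:

```python
# Sketch with assumed, illustrative numbers: base rate of 1 in 10 million
# and perfect sensitivity (no false negatives). Even then, the false
# positive rate dictates how much a positive flag actually means.

def p_terrorist_given_flag(base_rate, false_positive_rate, sensitivity=1.0):
    """P(terrorist | flagged) via Bayes' theorem."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

base_rate = 1e-7  # assumption: 1 terrorist per 10 million passengers
for fpr in (0.05, 0.001, 1e-6):
    p = p_terrorist_given_flag(base_rate, fpr)
    print(f"false positive rate {fpr:g}: P(terrorist | flagged) = {p:.4%}")
```

Under these assumptions, even a one-in-a-million false positive rate leaves a flagged passenger with only about a 9% chance of actually being a terrorist, which gives a feel for why such screening generates so many false alarms.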