The Tricky Conditional Probability
Date: 23 January 2021
Recently, I received an image claiming that the Covid-19 test accuracy for positive cases is 93% and for negative cases is 99%. Does that say anything good or bad about the Covid-19 test? Are 93% and 99% good? Or is it bad that the test misses 7% of the positive cases?
If we think deeper, what we really care about is this: given a positive test result, what is the probability that the person actually has Covid-19?

Let us refresh our high school memory of conditional probability. The probability of having Covid-19 (being sick) is the total number of cases divided by the Hong Kong population. Writing A for "sick" and B for "positive test result", Bayes' theorem gives P(A|B) = P(B|A)·P(A) / [P(B|A)·P(A) + P(B|A^c)·P(A^c)].

Plugging in the numbers, the probability of being sick given a positive test result comes out to 93.8%. This number doesn't sound interesting on its own. But if you look at the formula for P(A|B), you will notice two things:
- If P(A) is zero, a positive test result means nothing. Imagine Covid-19 is eradicated from the Earth: you really wouldn't care about the test result at all.
- If false alarms are common, P(A|B) drops. If P(B|A^c) rises to, say, 10%, then P(A|B) falls to about 60%. This is actually quite intuitive, since a positive test may simply be a false alarm.
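The reasoning above can be sketched in a few lines of Python. The prevalence value below is an illustrative assumption chosen so that the formula reproduces the 93.8% figure quoted above; the actual prevalence used in the original image is not shown in this post.

```python
def p_sick_given_positive(prior, sensitivity, false_alarm):
    """P(A|B): probability of being sick given a positive test.

    prior       = P(A),     the base rate of the disease
    sensitivity = P(B|A),   chance a sick person tests positive
    false_alarm = P(B|A^c), chance a healthy person tests positive
    """
    true_pos = sensitivity * prior
    false_pos = false_alarm * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Assumed prevalence P(A); illustrative, chosen to reproduce the 93.8% figure.
prior = 0.14

# With sensitivity 93% and a 1% false-alarm rate (i.e. 99% accuracy on
# negative cases), a positive test is quite informative.
print(p_sick_given_positive(prior, 0.93, 0.01))  # ≈ 0.938

# Raise the false-alarm rate tenfold and the posterior collapses.
print(p_sick_given_positive(prior, 0.93, 0.10))  # ≈ 0.602

# If the disease is eradicated (P(A) = 0), a positive result means nothing.
print(p_sick_given_positive(0.0, 0.93, 0.01))    # 0.0
```

Playing with the inputs makes the two bullet points above concrete: the posterior depends on the base rate and the false-alarm rate just as much as on the headline "accuracy".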
So, the takeaway is: when someone reports the accuracy of a test, you need to calculate the conditional probability [for example, P(sick given positive test result)] to better understand whether the test is useful or not. Simply knowing P(positive test result given sick) and P(negative test result given not sick) is not enough. Maybe reporters need to be trained to report the conditional probability instead. Haha.