Antibody Tests: When 95% Accurate Isn’t Enough

May 19, 2020

Welcome to the Range Report. The previous report went out just a few days ago, but this is a special, single-topic edition where I want to highlight a tool for critical thinking that is about to become really important. Let’s get to it.

 

WHEN 95% ACCURATE IS REALLY INACCURATE

Last week, a primary care provider offered to order me an antibody test that could determine if I’ve had COVID-19. I had a suspiciously timed dry cough in late February, so I’ve been looking forward to ruling COVID-19 in or out. Thus, my initial reflex was to get excited that I could finally get a test. Before agreeing, though, I asked the doctor about test accuracy. The result of that conversation was that I decided not to get the test. Allow me to explain.

The doctor told me that the test has greater than 90% accuracy, by which he meant that the rate of false positives is less than 10%. That sounds great. But it immediately reminded me of a study I described in the last chapter of Range, in which a question was posed to physicians and med students. Here it is:

If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?

The doctors and med students were also told to assume that the test had perfect “sensitivity,” i.e. that it detected all true positives. The most common answer the doctors and med students gave was that the patient has a 95% chance of actually having the disease. The correct answer is that there is only about a 2% chance that the patient actually has the disease, or 1.96% to be exact.

Say you’re testing 10,000 people. Because the disease prevalence is 1 in 1,000, 10 people in the sample have the disease. The test has perfect sensitivity, so all 10 of those people get a true positive result. But remember that the false positive rate is 5%, meaning 5 out of every 100 people who don’t have the disease will get a false positive. Since 9,990 of the 10,000 people tested don’t have the disease, that works out to roughly 500 false positives. So the chance that a patient who tests positive actually has the disease is about 10/510, or 1.96%. Only a quarter of the physicians and physicians-in-training who were given this quiz answered it correctly.
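
If you’d rather see that arithmetic laid out in one place, here’s a small Python sketch of the calculation (my own toy code, not from the study; the function name is just a label I made up):

    def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
        # Chance that someone who tests positive actually has the disease.
        true_positives = prevalence * sensitivity
        false_positives = (1 - prevalence) * false_positive_rate
        return true_positives / (true_positives + false_positives)

    # The quiz scenario: prevalence of 1/1000, perfect sensitivity, 5% false positive rate.
    print(positive_predictive_value(1 / 1000, 1.0, 0.05))  # ~0.0196, i.e. about 2%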

Obviously, that isn’t because they aren’t extremely smart. It’s just that this kind of thinking — the idea that the diagnostic value of the test to any individual is predicated on the base rate of the disease in the tested population — is deeply counterintuitive. That’s exactly why I think the concept should be explicitly hammered home in medical education (or, better yet, all education), given that healthcare providers use diagnostic tests constantly.

Now let’s say a coronavirus antibody test has only a 1% false positive rate (or 99% “specificity,” in medical lingo). If 5% of the tested population was infected, a positive antibody test means you probably had the disease. And yet, there’s still about a 17% chance that your positive result is a false positive. Do that test on 50 million people, and close to half a million of them will wrongly assume they were infected. If we’re counting on antibody tests to tell people whether they have some level of immunity, that could be a problem.
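
Here’s the same back-of-the-envelope math with the antibody-test numbers, as a rough sketch. (One assumption on my part, just for illustration: that the test catches 95% of true infections.)

    prevalence = 0.05            # 5% of the tested population was infected
    sensitivity = 0.95           # assumed here for illustration
    false_positive_rate = 0.01   # i.e. 99% specificity

    true_pos = prevalence * sensitivity                 # 0.0475
    false_pos = (1 - prevalence) * false_positive_rate  # 0.0095
    print(false_pos / (true_pos + false_pos))           # ~0.17: a positive is false about 17% of the time

    tested = 50_000_000
    print((1 - prevalence) * tested * false_positive_rate)  # ~475,000 false positives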

At some level, this actually can become intuitive. For instance, pretend that only 1 in a billion people has Hogwarts disease; so just 7 people on Earth have Hogwarts.** (Hog warts?) If your doctor told you that you tested positive for Hogwarts, and that the diagnostic test had only a 1% false positive rate, would you assume you have Hogwarts? Probably not. After all, if we tested everyone on Earth, 70,000,007 people would get positives, and only 7 would be true.

**If you test positive for Hogwarts, it’s definitely a false positive. Hogwarts is not a real disease.
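
The Hogwarts version, with a rough world population of 7 billion, works out like this (again, just my own scratch math):

    population = 7_000_000_000
    true_cases = population / 1_000_000_000             # 7 people actually have Hogwarts
    false_positives = population * 0.01                 # ~70 million false positives
    print(true_cases / (true_cases + false_positives))  # ~0.0000001: about 1 in 10 million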

 

LIGHTNING ROUND

  • A corollary to the issue above is that it’s important to test a representative sample of the population in order to establish the prevalence of infection. On this page, the FDA estimates the predictive value of antibody tests that it has authorized, but adds in bold letters: “We do not currently know the prevalence of SARS-CoV-2 antibody positive individuals in the U.S. population.” Therefore, the FDA notes, an individual should be very cautious about making decisions based on the results of a single antibody test.
  • If you want to think about conditional probability in maybe the simplest possible example, check out this short video. The question: Given that a couple has two kids, and you know that one is a boy, what’s the probability that the couple has two boys? (Hint: it’s not 50%.) The second part asks the same question, except now you know specifically that the older child is a boy. Does it seem like that knowledge should matter? Well, it does. (There’s a quick brute-force check of both answers just after this list.)
  • For a few more conditional probability examples (including a tidbit of insight into search engines) check out the “Bayes’ Theorem” page of Math Is Fun.
  • For a deeper dive, check out chapter 3 of The Book of Why, by Judea Pearl. It’s written for a wide audience and includes only basic math. (The book is fascinating. It has some useful diagrams, so I don’t recommend the audio version.) Even if you gloss over the math, you’ll still come away with important concepts. The book includes the medical-test problem (via mammograms), and chapter 3 delves into the question: “How much evidence would it take to convince us that something we consider improbable has actually happened?”
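
And if you want to check the two-kids puzzle for yourself, here’s a quick brute-force enumeration of the four equally likely families (my own sketch, not from the video):

    from itertools import product

    families = list(product("BG", repeat=2))  # (older, younger): BB, BG, GB, GG

    at_least_one_boy = [f for f in families if "B" in f]
    print(sum(f == ("B", "B") for f in at_least_one_boy) / len(at_least_one_boy))  # 1/3

    older_is_boy = [f for f in families if f[0] == "B"]
    print(sum(f == ("B", "B") for f in older_is_boy) / len(older_is_boy))  # 1/2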

Thanks so much for reading. Until next time…

David

p.s. You can find a link to this Range Report — or any previous report — here. And if you have a friend who might enjoy this newsletter, please consider sharing! They can subscribe here.
