Type A Blood and Covid: Danger! …Wait, Never Mind

July 28, 2020

In early June, my family had a distressing dinner-table conversation about medical research that was making headlines.

The study, eventually published in the venerable (and retraction-prone) New England Journal of Medicine, found that type A blood was associated with more severe Covid-19. Specifically, it found that patients with type A blood had a 50 percent increased likelihood of needing oxygen or a ventilator. That’s not good; early research in this pandemic suggested that one-third to one-half of patients who end up on ventilators die. My entire family is type A, including me. (Blood, not personality. As a writer, my personality is naturally type O. …Oof, bad pun.)

Obviously, this was unwelcome news. And it was the second study to find an association between type A blood and worse Covid-19 outcomes, although no one knew exactly why. “That is haunting me, quite frankly,” is how the New York Times quoted a German molecular geneticist who co-authored the NEJM study. Scary stuff.

Doctors popped up in articles and on YouTube theorizing (usually cautiously) about why people with type A blood might have a different immune response to the coronavirus. The explanations were what my friend Mike Joyner of the Mayo Clinic calls “bioplausible.” That is, they were entirely logical, but also probably wrong.

While I wasn’t excited to hear the results of the blood type study, thanks to lessons I learned while reporting my first book, The Sports Gene, I was very skeptical. My guess was that subsequent studies would either find a much smaller influence of blood type, or none at all.

In the two years I spent going through research on genetics and physiology, I came across a lot of studies that associated some physical trait with blood type. This, I learned, is how most of that body of research was created: a lab would be studying the genetic contribution to some physical characteristic (height, say), and collected blood from all of its subjects. Since the researchers had the blood anyway, they figured they might as well record blood type. Later, when they analyzed all their data, they noticed a correlation between height and a certain blood type, and so they published it. It wasn’t the study they set out to do, but it’s an easy way to get another publication. Fine, nothing wrong with that in and of itself.

Except eventually I learned that a lot of labs were doing that because it’s so easy to do, and those that didn’t find an association just didn’t publish it. So all the positive findings got published, and few of the negative findings (i.e. those that found nothing) got published. This is what scientists know as “publication bias,” or, colloquially, “the file drawer problem,” so-called because studies that find no relationship end up stuffed in a file drawer, never to see the light of publication. In the topics I was probing for The Sports Gene, I saw this pattern several times: a study finds a strong association of some physical trait to blood type, then another study does too; then a few studies start to trickle in that show a much weaker association; then come the studies that show no association at all. Ultimately, the conclusion is that the early studies were false positives, and only scientists getting false positive results were initially publishing. (As psychologist Drew Bailey taught me, this “decline effect” — the gradual drop in a reported effect over time as more studies are published — is an area of study unto itself.)
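
To make the file-drawer mechanism concrete, here is a minimal simulation sketch in Python. Everything in it is an assumption for illustration: a trait with no true blood-type association at all, 1,000 small studies, and a made-up “publication threshold” that only lets striking positive results through.

```python
import random
import statistics

random.seed(0)

def run_study(n=30):
    """One small study: compare a trait across two blood-type groups.
    The true group difference is zero by construction."""
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(group_a) - statistics.mean(group_b)

all_effects = [run_study() for _ in range(1000)]

# The "file drawer": only strikingly positive results get written up.
published = [e for e in all_effects if e > 0.3]

print(f"mean effect, all 1000 studies:  {statistics.mean(all_effects):+.3f}")  # ~0
print(f"mean effect, published studies: {statistics.mean(published):+.3f}")    # inflated
print(f"share of studies published:     {len(published)/len(all_effects):.0%}")
```

The studies as a whole average out to roughly zero, but the published subset averages a healthy-looking effect. That gap is the decline effect waiting to happen: as bigger follow-up studies arrive, the apparent effect has nowhere to go but down.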

The good news is that science often worked the way it should, eventually correcting the record. It just took a while. Amid the breakneck pace of coronavirus research and news, that lag is a real problem, even when “a while” is measured in weeks.

Six weeks after my dinner-table conversation, a new round of studies found that blood type has little or nothing to do with Covid-19 severity. Unfortunately and unsurprisingly, the new findings received less attention. (But props to the New York Times for following up its initial story. In my opinion, when this happens, the follow-up article should be linked at the top of the original story, so that anyone who sees the first piece also sees the corrective.)

Here’s the moral of the story for this moment in time: tons of data on Covid-19 and patient characteristics are piling up all over the world, and scientists will be looking for, and sharing, all sorts of correlations. Many of those correlations will be false positives, the result of statistical randomness. If the correlations are dramatic, they’ll grab headlines. Other researchers will (hopefully) try to replicate those findings, and will often fail. Those negative results will be less likely to get published, and when they are published, they’ll be less likely to garner expansive news coverage.

My advice: if a particular Covid finding — say, the supposed curative effect of hydroxychloroquine — grabs your attention, first treat it like a hypothesis, not a rock-solid conclusion. To use a phrase from chapter 11 of Range, treat it like a “hunch held lightly.” Then set up a news alert so you have a better chance of noticing if the original study is contradicted. And keep in mind that the initial positive results are likely to be the most dramatic that are ever found, which is why they were published in the first place.

Finally, this lesson applies to all research, but I think it’s especially worrisome in drug trials. A recent examination of 105 clinical trials of certain antidepressants showed that 53 of the trials found the drugs to be effective, and 52 of the trials found them to be ineffective. But while 52 of the 53 positive trials were published, only 25 of the negative trials were published. So the body of published research is badly distorted compared to the actual scientific findings. Even an extremely conscientious doctor — one who pores over that entire medical literature — may well conclude that the drugs are more effective than they really are.
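
To spell out the arithmetic (a quick sketch, using only the counts in the paragraph above): the real record was essentially a coin flip, but the published record reads as roughly two-to-one in favor of the drugs.

```python
# The arithmetic behind the antidepressant example, using the counts above.
positive, negative = 53, 52            # what the 105 trials actually found
pub_positive, pub_negative = 52, 25    # what made it into journals

actual_share = positive / (positive + negative)
published_share = pub_positive / (pub_positive + pub_negative)

print(f"positive share of all trials:       {actual_share:.0%}")     # ~50%
print(f"positive share of published trials: {published_share:.0%}")  # ~68%
```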

BONUS IN-THE-WEEDS TIP: The Funnel Plot (You already read the main point, so feel free to skip!)

There’s a really neat visualization called a “funnel plot” that helps demonstrate the publication bias issue I just described. Below is a funnel plot from a study that analyzed other studies of whether probiotics prevent gastrointestinal disease.

Every dot in the chart represents a single study of probiotics and GI disease. On the x-axis is a measure of whether probiotics increase or decrease GI disease risk; a negative number means probiotics decrease risk, so that’s what you want. The y-axis is a measure of how reliable a study is (basically, how large it is); the higher up the y-axis a data point sits, the more reliable that particular study. Ok, take a look:

[Figure: funnel plot of probiotic studies, with effect estimate on the x-axis and study reliability on the y-axis]

This is called a funnel plot because when you plot each study as a separate data point, you see a sort of funnel shape, with studies all over the place at the bottom and clustering as they move up the y-axis.

What that means is that the smaller studies (those lower on the reliability scale, i.e. the y-axis) have widely varying results. Some of those small studies find that probiotics have a huge beneficial impact on GI disease, and others — to the right of 0 on the x-axis — find that probiotics actually increase the risk of GI disease. So what’s the deal?

Well, as we look up the y-axis, the more reliable studies cluster close to zero on the x-axis, or “no effect at all.” That is: the crappier studies are all over the map, because small studies are more likely to have extreme outcomes just by chance, and the larger, probably-more-accurate studies are bunched together near what is probably the real answer: nothing at all.

Actually, they’re bunched just to the left of zero, so the authors of this paper conclude that probiotics probably have a modest beneficial effect.

But even that’s not a sure thing. Notice that the lower right part of the funnel is conspicuously sparse. That’s a telltale signature of publication bias. Essentially, the scientists who are doing small studies are more likely to publish extreme results when they go in the titillating direction — in this case, when they find that probiotics protect against GI disease.

The story in the funnel plot above probably played out over years the way blood type and Covid severity played out over weeks: early on, researchers run small studies, and only those who find provocative results publish them. Over time, as other scientists try to replicate the result with better methods, the apparent effect shrinks or disappears entirely. My guess: if all the relevant probiotic studies were published, the apparent effect of probiotics would be even closer to zero than it already is.
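
If you want to see that story in miniature, here is a toy simulation in Python (every number in it, from the study sizes to the publication filter, is an assumption I invented for illustration, not anything from the probiotics paper). It generates studies of a treatment with zero true effect, “publishes” them with a bias toward exciting small results, and plots the survivors:

```python
# A toy funnel plot: true effect is exactly zero, but a made-up
# publication filter favors small studies with "beneficial" results.
import math
import random
import matplotlib.pyplot as plt

random.seed(1)
effects, precisions = [], []

for _ in range(500):
    n = random.randint(10, 400)        # study sizes vary widely
    se = 1 / math.sqrt(n)              # smaller study -> noisier estimate
    effect = random.gauss(0, se)       # the true effect is zero
    # Large studies get published regardless; small studies mostly get
    # published only when the result points in the exciting direction.
    if n > 150 or effect < -0.05 or random.random() < 0.2:
        effects.append(effect)
        precisions.append(1 / se)

plt.scatter(effects, precisions, s=12)
plt.axvline(0, linestyle="--")
plt.xlabel("observed effect (negative = treatment looks protective)")
plt.ylabel("precision (higher = more reliable study)")
plt.title("Simulated funnel plot with publication bias")
plt.show()
```

The output is a funnel with a conspicuously sparse lower-right corner, and the published studies average just left of zero: a modest apparent “benefit” conjured entirely by selective publication.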

Thanks so much for reading. Until next time….

David

p.s. The last Range Report, a remembrance of the so-called “father of the 10,000-hours rule,” evoked many more responses than I expected. In case you missed that one, here it is.

p.p.s. If you have a friend who might enjoy this free newsletter, please consider sharing. They can subscribe here.
