You can get reinfected with Covid-19 but may still have immunity. Let’s explain.

Researchers at the University of Nevada have reported that a 25-year-old man was reinfected in June with SARS-CoV-2, the virus that causes Covid-19. He joins a handful of other confirmed cases of reinfection in people without immune disorders — in Belgium, the Netherlands, Hong Kong, and Ecuador — where researchers have demonstrated that the genetic signature of the second infection did not match that of the first.

According to a new study on the Nevada case, published in The Lancet Infectious Diseases, the patient first tested positive in April and then tested negative for the virus twice. In June, 48 days after the first positive test, “the patient was hospitalized and tested positive for a second time,” according to the authors, and he experienced severe symptoms. There were major genetic differences between the two infections, suggesting that the patient got the virus twice. (Since then, the patient has recovered.)

The report is in line with what immunity experts have been telling us is possible with this virus: that reinfection is possible and, to some extent, even expected with a coronavirus. But it also shows us how much we still have to learn: about how much protection a single infection can confer, about what exactly a robust, long-lasting immune response looks like, and about what determines the severity of disease in a second infection.

“Does immunity protect an individual from disease on reinfection?” writes Yale immunobiology researcher Akiko Iwasaki in an accompanying editorial in The Lancet Infectious Diseases. “The answer is not necessarily, because patients from Nevada and Ecuador had worse disease outcomes at reinfection than at first infection.”

The Nevada case is an important finding, since in several of the other confirmed cases of reinfection, the patients had mild disease or were asymptomatic. Scientists still don’t know how common reinfection is (it may well be very rare), nor can they determine an individual’s chances of getting infected again.

They do know there are many, many components of our immune system that work together to fight the coronavirus, and immunity doesn’t mean one single thing. And while we’re waiting for scientists to figure it all out, everyone, including those who’ve already had the virus, should still try to avoid getting infected at all.

The new study “strongly suggests that individuals who have tested positive for SARS-CoV-2 should continue to take serious precautions when it comes to the virus, including social distancing, wearing face masks, and handwashing,” said Mark Pandori, of the Nevada State Public Health Laboratory at the University of Nevada Reno School of Medicine and lead author of the study, in a statement.

Let’s walk through the basics of immunity, and what we’re learning about reinfection.

There are no simple stories about immunity and Covid-19

The immune system is profoundly complicated, and “immunity” can mean many different things. A lot of this nuance gets lost in headlines about immunity.

For instance: Previous research has shown that neutralizing antibodies — immune system proteins that latch onto pathogens and prevent them from infecting cells — can wane in the months after a Covid-19 infection, particularly when the initial infection was mild. Some wondered if that meant the end of herd immunity hopes.

In the Nevada case, we know that “the patient had positive antibodies after the reinfection, but whether he had pre-existing antibody after the first infection is unknown,” writes Iwasaki.

But what’s often misunderstood is that antibodies are only one component of the immune system, and losing them does not leave a person completely vulnerable to the virus.

In fact, there are several parts of the immune system that may contribute to lasting protection against SARS-CoV-2.

One is killer T-cells. “Their names give you a good hint what they do,” Alessandro Sette, an immunologist at the La Jolla Institute for Immunology, told me in July. “They see and destroy and kill infected cells.”

Antibodies, he explained, can clear virus from bodily fluids. “But if the virus gets inside the cell, then it becomes invisible to the antibody.” That’s where killer T-cells come in: They find and destroy these hidden viruses.

While antibodies can prevent an infection, killer T-cells deal with an infection that’s already underway. So they play a huge role in long-term immunity, stopping infections before they have time to get a person very sick.

And it’s not just killer T-cells and antibodies. There are also helper T-cells, which facilitate a robust antibody response. “They are required for the antibody response to mature,” Sette says.

Some proportion of the population (perhaps 25 to 50 percent of people) seems to have some preexisting T-cells (of both varieties, though the helper kind have been more commonly observed) that respond to SARS-CoV-2, despite these people never having been exposed to the virus. The hypothesis is that they may have acquired these T-cells from infections with other members of the coronavirus family, such as the ones that cause common colds. Researchers still don’t really understand what role, if any, these preexisting T-cells play in preventing or attenuating infection.

But wait, there’s more! There’s another group of cells called memory B-cells. B-cells are the immune system cells that create antibodies. Certain types of B-cells become memory B-cells. These save the instructions for producing a particular antibody, but they aren’t active. Instead, they hide out — in your spleen, in your lymph nodes, perhaps at the original site of your infection — waiting for a signal to start producing antibodies again.

All the things “immunity” can mean

All these different components of the immune system mean “immunity” isn’t just one thing.

Immunity could mean a strong antibody response, which prevents the virus from establishing itself in cells. But it could also mean a good killer T-cell response, which could potentially stop an infection very quickly: before you feel sick and before you start spreading the virus to others.

“In many infections, the virus does reproduce a little bit, but then the immune response stops this infection in its tracks,” Sette explains. Also possible: “You do get infected, you do get sick, but your immune system does enough of a job curbing the infection, so you don’t get as sick.”

Immunity might also result from an awakening of memory B-cells. If an individual has memory B-cells and is exposed to the virus again, “that infection will stimulate a much faster antibody response to the virus, which would, theoretically, lead to faster clearance of the virus and potentially less severe infection,” Elitza Theel, the director of the infectious diseases serology laboratory at the Mayo Clinic, said in a July interview.

In general, scientists believe, the stronger the infection (and immune response) that occurs during an initial infection, the longer immunity will last.

So reinfection may still be possible, but it may not mean severe illness. When a virus invades a body, generally, the body remembers.

Could asymptomatic infections spread the virus? Unclear.

It’s still not known what the latest reinfection study means for how long the pandemic will last. If reinfections happen regularly (and we have no idea how common they might be), then it might take longer to achieve herd immunity without a vaccine (which is a grim and cynical goal to pursue in any case). How long immunity lasts, on average, and how common reinfection is are key unknown variables in figuring out how long the pandemic may last in the absence of an effective vaccine or treatment.

“Reinfection cases tell us that we cannot rely on immunity acquired by natural infection to confer herd immunity; not only is this strategy lethal for many but also it is not effective,” Iwasaki wrote in the editorial. “Herd immunity requires safe and effective vaccines and robust vaccination implementation.”

We also have much more to learn about how often reinfections lead to more clusters of cases. Recently, I asked Shane Crotty, an immunologist at the La Jolla Institute for Immunology, about this very scenario.

“Could there be an ‘immunity’ scenario,” I asked, “where, after having recovered from Covid, a person could get infected again but not feel sick at all, and also be able to spread it?”

“It is a good question, and the answer is that no one knows,” Crotty replied. “There are cases with other diseases where asymptomatic immune people can be infectious. There is definitely a lot to learn still about immunity to SARS-CoV-2.”

Science has been in a “replication crisis” for a decade. Have we learned anything?

Much ink has been spilled over the “replication crisis” in the last decade and a half, including here at Vox. Researchers have discovered, over and over, that lots of findings in fields like psychology, sociology, medicine, and economics don’t hold up when other researchers try to replicate them.

This conversation was fueled in part by John Ioannidis’s 2005 article “Why Most Published Research Findings Are False” and by the controversy around a 2011 paper that used then-standard statistical methods to find that people have precognition. But since then, many researchers have explored the replication crisis from different angles. Why are research findings so often unreliable? Is the problem just that we test for “statistical significance” — the likelihood that similarly strong results could have occurred by chance — in a nuance-free way? Is it that null results (that is, when a study finds no detectable effects) are ignored while positive ones make it into journals?
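
To make “could have occurred by chance” concrete, here’s a toy simulation — a minimal sketch in Python, purely illustrative and not drawn from any of the studies discussed here. It runs thousands of hypothetical studies in which the true effect is exactly zero and counts how many still clear the conventional p < 0.05 bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate 10,000 studies where the true effect is zero:
# both "groups" are drawn from the same distribution.
n_studies = 10_000
false_positives = 0
for _ in range(n_studies):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:  # the conventional significance threshold
        false_positives += 1

# Prints roughly 5 percent: "significant" findings from pure chance.
print(f"{false_positives / n_studies:.1%} of null studies had p < 0.05")
```

Used naively, the threshold guarantees that about 1 in 20 true-null comparisons will look like a discovery, which is why how the threshold is applied matters so much.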

A recent write-up by Alvaro de Menard, a participant in the Defense Advanced Research Projects Agency’s (DARPA) replication markets project (more on this below), makes the case for a more depressing view: The processes that lead to unreliable research findings are routine, well understood, predictable, and in principle pretty easy to avoid. And yet, he argues, we’re still not improving the quality and rigor of social science research.

While other researchers I spoke with pushed back on parts of Menard’s pessimistic take, they do agree on something: a decade of talking about the replication crisis hasn’t translated into a scientific process that’s much less vulnerable to it. Bad science is still frequently published, including in top journals — and that needs to change.

Most papers fail to replicate for totally predictable reasons

Let’s take a step back and explain what people mean when they refer to the “replication crisis” in scientific research.

When research papers are published, they describe their methodology, so other researchers can copy it (or vary it) and build on the original research. When another research team tries to conduct a study based on the original to see if they find the same result, that’s an attempted replication. (Often the focus is not just on doing the exact same thing, but on approaching the same question with a larger sample and a preregistered design.) If they find the same result, that’s a successful replication, and evidence that the original researchers were on to something. But when the attempted replication finds different or no results, that often suggests the original finding was spurious.

In an attempt to test just how rigorous scientific research is, some researchers have undertaken the task of replicating research that’s been published in a whole range of fields. And as more and more of those attempted replications have come back, the results have been striking: in field after field, a large share of published studies cannot be replicated.

One 2015 attempt to reproduce 100 psychology studies was able to replicate only 39 of them. A big international effort in 2018 to reproduce prominent studies found that 14 of the 28 studies replicated, and an attempt to replicate studies from the top journals Nature and Science found that 13 of the 21 results examined could be reproduced.

The replication crisis has led a few researchers to ask: Is there a way to guess if a paper will replicate? A growing body of research has found that guessing which papers will hold up and which won’t is often just a matter of looking at the same simple, straightforward factors.

A 2019 paper by Adam Altmejd, Anna Dreber, and others identifies some simple factors that are highly predictive: Did the study have a reasonable sample size? Did the researchers squeeze out a result barely below the significance threshold of p = 0.05? (A paper can often claim a “significant” result if this “p” threshold is met, and many use various statistical tricks to push their paper across that line.) Did the study find an effect across the whole study population, or an “interaction effect” (such as an effect only in a smaller segment of the population) that is much less likely to replicate?
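
One of those statistical tricks is simply testing many slices of the data and reporting whichever slice crosses the line. Here’s an illustrative Python sketch (with made-up parameters, not taken from the paper) of a researcher who hunts through 10 subgroups of a dataset with no real effect anywhere, looking for an “interaction effect”:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 5,000 hypothetical studies with no real effect anywhere, each
# sliced into 10 subgroups (age bands, regions, and so on).
n_studies, n_subgroups, n_per_arm = 5_000, 10, 30
studies_with_a_hit = 0
for _ in range(n_studies):
    for _ in range(n_subgroups):
        treated = rng.normal(size=n_per_arm)
        control = rng.normal(size=n_per_arm)
        _, p_value = stats.ttest_ind(treated, control)
        if p_value < 0.05:
            studies_with_a_hit += 1
            break  # stop at the first "significant" subgroup

# With 10 shots at p < 0.05, roughly 40% of null studies find a "hit"
# (1 - 0.95**10), versus the 5% a single pre-specified test allows.
print(f"{studies_with_a_hit / n_studies:.0%} of null studies found "
      "a 'significant' subgroup effect")
```

That’s why an effect found only in a subgroup, just barely under the threshold, is a warning sign: chance alone produces such “hits” far more often than the p < 0.05 label suggests.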

Menard argues that the problem is not so complicated. “Predicting replication is easy,” he said. “There’s no need for a deep dive into the statistical methodology or a rigorous examination of the data, no need to scrutinize esoteric theories for subtle errors — these papers have obvious, surface-level problems.”

A 2018 study published in Nature had scientists place bets on which of a pool of social science studies would replicate. They found that the predictions by scientists in this betting market were highly accurate at estimating which papers would replicate.

“These results suggest something systematic about papers that fail to replicate,” study co-author Anna Dreber argued after the study was released.

Additional research has established that you don’t even need to poll experts in a field to guess which of its studies will hold up to scrutiny. A study published in August had participants read psychology papers and predict whether they would replicate. “Laypeople without a professional background in the social sciences are able to predict the replicability of social-science studies with above-chance accuracy,” the study concluded, “on the basis of nothing more than simple verbal study descriptions.”

The laypeople were not as accurate in their predictions as the scientists in the Nature study, but the fact that they could still predict many failed replications suggests that many unreliable papers have flaws even a layperson can notice.

Bad science can still be published in prestigious journals and be widely cited

Publication of a peer-reviewed paper is not the final step of the scientific process. After a paper is published, other research might cite it — spreading any misconceptions or errors in the original paper. But research has established that scientists have good instincts for whether a paper will replicate or not. So, do scientists avoid citing papers that are unlikely to replicate?

A striking chart from a 2020 study by Yang Yang, Wu Youyou, and Brian Uzzi at Northwestern University illustrates their finding that, in fact, there is no correlation at all between whether a study will replicate and how often it is cited. “Failed papers circulate through the literature as quickly as replicating papers,” they argue.

Looking at a sample of studies from 2009 to 2017 that have since been subject to attempted replications, the researchers find that studies have about the same number of citations regardless of whether they replicated.

If scientists are pretty good at predicting whether a paper replicates, how can it be the case that they are as likely to cite a bad paper as a good one? Menard theorizes that many scientists don’t thoroughly check — or even read — papers once published, expecting that if they’re peer-reviewed, they’re fine. Bad papers are published by a peer-review process that is not adequate to catch them — and once they’re published, they are not penalized for being bad papers.

The debate over whether we’re making any progress

Here at Vox, we’ve written about how the replication crisis can guide us to do better science. And yet blatantly shoddy work is still being published in peer-reviewed journals despite errors that a layperson can see.

In many cases, journals effectively aren’t held accountable for bad papers — many, like The Lancet, have retained their prestige even after a long string of embarrassing public incidents where they published research that turned out to be fraudulent or nonsensical. (After a study on Covid-19 and hydroxychloroquine was retracted this spring over questions about its data source, The Lancet said it would change its data-sharing practices.)

Even outright frauds often take a very long time to be repudiated, with some universities and journals dragging their feet and declining to investigate widespread misconduct.

That’s discouraging and infuriating. It suggests that the replication crisis isn’t one specific methodological reevaluation, but a symptom of a scientific system that needs rethinking on many levels. We can’t just teach scientists how to write better papers. We also need to change the fact that those better papers aren’t cited more often than bad papers; that bad papers are almost never retracted even when their errors are visible to lay readers; and that there are no consequences for bad research.

In some ways, the culture of academia actively selects for bad research. Pressure to publish lots of papers favors those who can put them together quickly — and one way to be quick is to be willing to cut corners. “Over time, the most successful people will be those who can best exploit the system,” Paul Smaldino, a cognitive science professor at the University of California, Merced, told my colleague Brian Resnick.

So we have a system whose incentives keep pushing bad research even as we understand more about what makes for good research.