Summary
The effects of non-pharmaceutical interventions on the COVID-19 pandemic are very difficult to evaluate. In particular, most studies on the issue fail to adequately account for the fact that people voluntarily change their behavior in response to changes in epidemic conditions, which can reduce transmission independently of non-pharmaceutical interventions and confound estimates of their effect.
Chernozhukov et al. (2021) is unusually mindful of this problem and the authors tried to control for the effect of voluntary behavioral changes. They found that, even when you take that into account, non-pharmaceutical interventions led to a substantial reduction in cases and deaths during the first wave in the US.
However, their conclusions rest on dubious assumptions, and are very sensitive to reasonable changes in the specification of the model. When the same analysis is performed on a broad range of plausible specifications of the model, none of the effects are robust. This is true even for their headline result about the effect of mandating face masks for employees of public-facing businesses.
Another reason to regard even this result as dubious is that, when the same analysis is performed to evaluate the effect of mandating face masks for everyone and not just employees of public-facing businesses, the effect totally disappears and is even positive in many specifications. The authors collected data on this broader policy, so they could have performed this analysis in the paper, but they failed to do so despite speculating in the paper that mandating face masks for everyone could have a much larger effect than just mandating them for employees.
This suggests that something is wrong with the kind of model Chernozhukov et al. used to evaluate the effects of non-pharmaceutical interventions. In order to investigate this issue, I fit a much simpler version of this model on simulated data and find that, even in very favorable conditions, the model performs extremely poorly. I also show with placebo tests that it can easily find spurious effects. This is a problem not just for this particular study, but for any study that relies on that kind of model to study the effects of non-pharmaceutical interventions.
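To make the idea of a placebo test concrete, here is a bare-bones sketch. It is not the model from the paper, and every number in it is invented: growth declines on its own around each region's epidemic peak (as it would under voluntary behavioral change), a fake "policy" is enacted near that peak (as real policies tend to be), and a naive comparison of growth with and without the placebo policy then attributes the endogenous decline to it.

```python
import random

random.seed(1)
# Hypothetical region-day panel: 20 regions observed for 60 days.
n_regions, n_days = 20, 60
treated, control = [], []
for _ in range(n_regions):
    peak = random.randint(20, 35)          # day the region's outbreak peaks
    enact = peak + random.randint(-3, 3)   # placebo "policy" enacted near the peak
    for day in range(n_days):
        # Growth falls from +0.10 to -0.05 at the peak; the policy plays no role.
        g = 0.10 - 0.15 * (day >= peak) + random.gauss(0, 0.02)
        (treated if day >= enact else control).append(g)

# Naive estimate: difference in mean growth with vs. without the placebo policy.
placebo_effect = sum(treated) / len(treated) - sum(control) / len(control)
print(round(placebo_effect, 3))
```

Even though the true effect of the placebo policy is exactly zero by construction, the estimate comes out strongly negative, because the policy's timing is correlated with the epidemic's endogenous trajectory.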
In the debate about lockdowns and other restrictions to slow down SARS-CoV-2’s spread, people on both sides like to cite studies that support their position. Indeed, whether you’re in favor of stringent restrictions or opposed to them, there is no shortage of studies that you can use to claim that your position is vindicated by science. To be sure, you mostly hear about pro-lockdown studies in the media, because journalists tend to be pro-lockdown, but there are plenty of studies whose conclusions vindicate the anti-lockdown side of the debate.

As I explained in my case against lockdowns, most studies about the effect of non-pharmaceutical interventions fall roughly into two categories. First, you have studies that fit an epidemiological model, typically a compartmental model of some kind, on epidemic data. Non-pharmaceutical interventions are assumed by the model to affect transmission in a certain way, and their effect is estimated by fitting the model on actual epidemic data. The other type of study uses econometric methods to estimate the association between non-pharmaceutical interventions and the growth rate of the epidemic, or some related quantity such as R. These studies basically try to determine whether the epidemic grows more slowly when non-pharmaceutical interventions are in place.
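The second approach can be sketched in a few lines. Everything below is simulated and purely illustrative: I generate region-day growth rates with an assumed policy effect of minus 5 percentage points, then recover it by comparing growth with and without the policy in place (with a single policy dummy and a constant, the OLS coefficient on the dummy reduces to exactly this difference in means).

```python
import random

random.seed(0)
# Hypothetical region-day panel: 20 regions observed for 60 days.
n_regions, n_days = 20, 60
baseline = 0.10       # assumed daily growth rate without the policy
true_effect = -0.05   # assumed effect of the policy on the growth rate

treated, control = [], []
for _ in range(n_regions):
    start = random.randint(10, 40)  # region adopts the policy on a random day
    for day in range(n_days):
        g = baseline + (true_effect if day >= start else 0) + random.gauss(0, 0.02)
        (treated if day >= start else control).append(g)

# Difference in means = OLS coefficient on the policy dummy in this setup.
effect_hat = sum(treated) / len(treated) - sum(control) / len(control)
print(round(effect_hat, 3))  # should recover roughly -0.05
```

When the policy's timing is random, as here, this works fine; the problems discussed below arise because real policies are not adopted at random times.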
The fundamental problem with the first kind of study is that, in order to use that approach, one has to make very strong mechanistic assumptions, which at best one is not in a position to make and which at worst are known to be false. Indeed, studies that fall in that category almost invariably assume that, until the herd immunity threshold is reached, only government interventions affect transmission, which is clearly false. If that were true, then in the absence of government interventions, the epidemic would continue to grow until the herd immunity threshold is reached, at which point incidence would start to fall. But we have seen over and over that, even when R is much higher than 1 and the government doesn’t do anything more than it’s already doing, incidence peaks long before the herd immunity threshold is reached.

There are several possible explanations for this phenomenon, which are usually not mutually exclusive, but my favorite is that, even in the absence of government interventions, people change their behavior in a way that reduces transmission when incidence starts blowing up. Of course, I’m not saying that people check incidence curves and adjust their behavior when they start climbing, but rather that they respond to various signals — the news is suddenly full of reports about hospitals filling up, people start hearing about friends and acquaintances who have been infected, etc. — that are correlated with rising incidence.
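The behavioral explanation can be illustrated with a toy SIR model. All parameters below are hypothetical, and the response term beta0 / (1 + k * I), which shrinks the contact rate as prevalence I rises, is just one simple way of modeling people reacting to signals correlated with incidence:

```python
# Toy discrete-time SIR model. With no behavioral response (k = 0), incidence
# only peaks near the herd immunity threshold; with a response, it peaks when
# just a small share of the population has been infected, with no government
# intervention in either scenario.

def simulate(k, n=1_000_000, n_days=400, beta0=0.3, gamma=0.1):
    """k is a hypothetical responsiveness parameter: the effective contact
    rate is beta0 / (1 + k * i), so k = 0 means no behavioral response."""
    s, i, r = n - 100.0, 100.0, 0.0
    peak_inc, infected_at_peak = 0.0, 0.0
    for _ in range(n_days):
        beta = beta0 / (1 + k * i)      # contacts fall as prevalence rises
        new_cases = beta * s * i / n
        recoveries = gamma * i
        s, i, r = s - new_cases, i + new_cases - recoveries, r + recoveries
        if new_cases > peak_inc:
            peak_inc = new_cases
            infected_at_peak = (n - s) / n  # cumulative share infected at peak
    return infected_at_peak, (n - s) / n    # share infected at peak, final attack rate

# R0 = beta0 / gamma = 3, so the herd immunity threshold is 1 - 1/3, about 67%.
no_response = simulate(k=0.0)
with_response = simulate(k=0.0005)
print(no_response, with_response)
```

In this simulation, without a behavioral response incidence peaks only after a large share of the population has been infected and the vast majority is eventually infected, whereas with the response incidence peaks after only a few percent have been infected, exactly the pattern that models attributing everything to government interventions cannot produce.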