It’s Time to Review the Institutional Review Boards
IRB reform is desperately needed and likely to succeed
This essay won an honorable mention in the CSPI Essay Contest: Policy Reform for Progress.
Institutional Review Boards (IRBs) are ethics committees, ideally composed of scientific peers and lay community members, that review research before it can be conducted. Their ostensible purpose is to protect research subjects from research harms. But oftentimes, IRBs are costly, slow, and do more harm than good. They censor controversial research, invent harms where none exist, and, by designating certain categories of subjects as “vulnerable,” cause a corresponding diminishment in research on those subjects. There is even a plausible legal argument that they violate researchers’ First Amendment rights. Because previous attempts to spur the responsible federal executive agencies into streamlining IRBs have failed or achieved only limited success, a targeted legislative solution that does not depend on bureaucratic implementation is needed.
How Did We Get Here?
The IRB system is a useful case study on how well-intentioned government oversight can, over time, develop into a pseudo-morality, complete with a secret decision process, a designated priestly class of “IRB administrators” who study a holy text (the Belmont Report),1 and laypeople invoking heresy.
Starting in the 1940s, various professional organizations developed codes of research ethics, though none carried the force of law or commanded a consensus. Precursors to IRBs had existed in large scientific institutions like the NIH since the early 20th century, but they emerged on a large scale in the 1960s and 70s, with the Public Health Service instituting a review process for medical and psychological research grants in 1966.
In response to highly publicized biomedical research scandals, most notably the Tuskegee Experiment, Congress passed the National Research Act of 1974. This created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which published the Belmont Report in 1978. As historian Zachary Schrag has amply documented in Ethical Imperialism, the commission was sorely lacking in social science expertise from the beginning. This was logical, since the most egregious research scandals, like the ones documented in this landmark 1966 Beecher article, were the work of biomedical researchers.2
The federal government initially shied away from heavy-handed oversight of the social sciences, which had a powerful champion for academic freedom in Ithiel de Sola Pool. However, gradual scope creep, spearheaded by successive leaders of the Office for Human Research Protections (OHRP) in the Department of Health and Human Services, ensured that by the early 1990s practically all social science research involving human subjects had to undergo IRB review.
Questionable Legality
This “ethical imperialism” was accomplished through murky legal means. First, the federal government required institutions that accepted any research funding to certify that all research (including non-government funded research) would undergo IRB review. Second, to bypass the mandatory public comment period that accompanies administrative rulemaking, OHRP issued “suggested” guidance that was practically never disobeyed. Third, when explicitly instructed to reduce research regulations by the incoming Reagan administration, OHRP issued stricter rules instead.
Accompanying the top-down regime was a growing class of IRB professionals, a far cry from the academic peers originally envisioned to review research proposals. This class of compliance professionals has constituted itself in trade associations like PRIM&R and AAHRPP, and now serves a quasi-private role in interpreting the vague pronouncements of OHRP for its captive university and industry consumers.
Questionable Necessity
However unglamorous the origin of IRBs, the more damning fact is that IRBs are, mostly, a ham-fisted “solution” to a trumped-up problem. As Schneider argues at length in The Censor’s Hand:
[Being a subject] is not particularly hazardous...surveys both before and after the rise of the IRB system found few examples of serious risk3...people and institutions with incentives to discover and publicize risk locate little...studies repeatedly find that patients are not hurt and might be helped by being research subjects.
In the social sciences, the basis for IRB review is even weaker. Per Schrag’s Ethical Imperialism, Congress never intended to regulate social science. In fact, the studies cited as justification for research oversight in the Belmont Report are all biomedical. Decades later, in an interview with Schrag, two members of the original commission that wrote the report (Jonsen and Beauchamp) effectively admitted that regulating social science research by the same methods as biomedical research was a mistake.
As justification for their continued existence, IRBs have cited increasingly non-physical “harms” to subjects with little empirical support. For example, IRBs sometimes view speaking with trauma survivors about their trauma as a presumptively harmful act. This is likely incorrect, and avoiding those topics only delays squarely addressing them. A more concerning systemic problem with IRBs is their role as institutional censors. Some IRBs have explicitly stated that certain topics, because of their controversy, face stricter scrutiny. IRBs also fear media outcry, and limit local researchers as a result. Meanwhile, over several decades of social science research, it is not clear that a single subject death has ever resulted from participation.
On the other hand, conservative estimates of deaths that IRBs have caused through delays to research trials run into the thousands, as in the delay of the landmark ISIS-2 trial.4 More speculatively: if the UK COVID-19 human challenge trial (HCT) that eventually took place in early 2021 had instead taken place in the summer of 2020, it might have meaningfully accelerated vaccine approval, saving countless lives. In an HCT, low-risk volunteers are deliberately infected with a pathogen; such trials have produced numerous important insights in microbiology since the 18th century. From March 2020 onward, experts made a clear case that HCTs could save thousands of lives by accelerating vaccine availability by a few months. Unfortunately, IRBs have looked askance at HCTs for conditions that lack effective treatment.
In the US, a COVID-19 challenge protocol was rejected, even though 35 members of Congress signed an open letter asking HHS and the NIH to explicitly consider human challenge trials, and there was broad public support for them. In the UK, an HCT was eventually approved in February 2021 and conducted shortly thereafter, more than a year after COVID-19 emerged, with the delay due in large part to the ethics review process.5
A hidden cost is the biomedical research that is never carried out as a result of IRB deterrence. Sometimes this may be riskier but higher-reward research; in other cases, IRBs may deter less well-funded researchers who don’t have the time to deal with multiple rounds of IRB approval. Early-career scientists with a tenure clock ticking away in the background also suffer disproportionately from IRB delays, as do students.
Worst of all, the IRB system shows no signs of recognizing its limits. IRBs have even tried to regulate high-school science fair projects. Consent forms grow ever more byzantine, a far cry from the index card consent of the mumps vaccine trials, as Dr. Paul Offit documents in Vaccinated:
Parents interested in participating in the study received a three-by-five-inch card stating, “I allow my child to get a mumps vaccine.” At the bottom of the card was a line for the parents’ signatures. Unlike the practice today, the consent card didn’t contain an explanation of the disease, a description of the vaccine, a list of vaccine components, a discussion of previous studies, the need for blood tests, or a statement of possible risks and benefits. Parents were also given Robert Weibel’s work and home telephone numbers. If they had any questions about the vaccine or if they were worried that the vaccine was causing a problem, they could call him at any time, day or night. Weibel, in turn, would drive to their homes and examine their children.
Quality improvement research in hospitals is increasingly coming under IRB scrutiny as well, imperiling the ability of healthcare systems to learn quickly from internal data and iterate on it.
The IRB system has occasionally retreated in the face of sustained criticism, as when Ithiel de Sola Pool argued several decades ago in The New York Times that IRBs suppress academic freedom; regulators largely left social scientists alone in the 1980s as a result. And in 2011, more than a dozen federal agencies initiated a revision of the Common Rule, the federal regulation governing human subjects research, to address the many grievances scientists had voiced over the years. But overall, these moments of reform have been a disappointment. The only substantive reforms to emerge from the 8-year Common Rule revision process were a requirement that multi-site trials be reviewed by a single IRB, a provision that researchers need not ask IRBs whether their research is exempt, and an explicit recognition that oral history is exempt from IRB review. Per personal communication with involved parties, these revisions were a resounding disappointment to the busy researchers who spent precious time submitting public comments to little effect.
Solutions
Given the clear costs of our current IRB system and the failures of self-regulation, what is to be done? One tempting solution is to get rid of the IRB system entirely. Multiple eminent legal scholars have argued along these lines. In The Censor’s Hand, Carl Schneider argues that, given previous failures of IRB reform, only wholesale abandonment of the system can work. Columbia law professor Philip Hamburger has argued that IRBs are a serious threat to free speech, since they are effectively a licensing system for speech and thus a ripe target for offensive First Amendment litigation.
Judicial lawfare against the IRB system is a tempting option and should be pursued in parallel, but my sense is that non-government forces that demand IRB oversight would serve as a backstop in that scenario. Scientific journals, for instance, usually demand that authors seek IRB approval. Universities might still demand IRB oversight as a condition of employment. Public university IRBs might end up substantially curtailed through such a strategy, but that is uncertain. So something like an IRB system will probably remain in place as long as large institutions are risk-averse.
Instead, I will lay out a series of incremental IRB reforms, all of which could be implemented through direct and unambiguous Congressional legislation, since federal bureaucrats have previously demonstrated a willingness to ignore presidential and public demands. Current federal IRB regulations were developed by federal agencies, which were delegated this authority by Congress. Congress can exercise that original authority to target regulation more precisely.
Some Reforms
The following are reforms that maintain IRBs in some form but fix their biggest problems. Ideally all of these reforms would be implemented, but each would be useful on its own.
As professor Ryan Briggs has proposed, researchers who make small changes in a study protocol should be able to self-certify that their changes meet a de minimis standard, avoiding another round of IRB review and revision. Some IRBs only meet every few weeks or months, so an extra round of IRB review for small changes in a protocol means substantial delay, slowing scientific progress. If researchers abused this privilege and tried to smuggle in substantive changes to their protocol, they would forfeit this ability.
A similarly narrow reform is implementing an electronic checklist that would allow researchers to self-determine whether their research is low-risk and does not require IRB review. A University of Chicago professor, Omri Ben-Shahar, has developed exactly such a tool, and OHRP has raised no objection, but clear federal guidance would assuage the worries of risk-averse university administrators, who often still require IRBs to approve exempt studies. If universities continued to delay adoption, Congress could make receipt of government funds conditional on developing and allowing such a tool.
Holly Fernandez-Lynch, a professor at the University of Pennsylvania, argues that greater IRB transparency is sorely needed. In their current incarnation, IRB decisions are opaque to researchers and even to other IRBs. In contrast to our legal system, which is built on precedent, every IRB decision is effectively made de novo, which results in high heterogeneity across IRBs. Transparency would help every member of the research ecosystem: researchers would better understand which protocols need modification, and IRBs would learn from each other’s best practices. Confidentiality would be reserved for commercially sensitive protocol sections and kept to a minimum.
Any attempt at IRB reform should also focus on what is likely the largest recent cost imposed by IRBs: the lack of timely human challenge trials (HCTs) for COVID-19. To prevent such delays in the future, Congress ought to lay out specific timelines for the ethics review of HCTs in pandemic situations and grant substantial legal protection to involved investigators and institutions. The FDA already has the legal authority to approve vaccines based on challenge data; the missing link is a clear signal to IRBs that obstruction and delay are not acceptable in pandemic situations. While Congress can write these requirements directly into law, an alternative for more cautious legislators is granting an independent commission, like the original National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research itself, the authority and mandate to develop fast and ethical guidelines for HCTs in pandemic situations.
If Congress has greater appetite for IRB reform, a more sweeping approach would be removing social science from IRB jurisdiction altogether. Historian Zachary Schrag, who lobbied federal agencies intensively on the Common Rule revisions from 2009 to 2017, proposes this in his book, Ethical Imperialism. As he documents, the Belmont Report and subsequent regulatory developments were not designed with social science in mind. Congress could fix this historical oversight by legislating social science out of the scope of HHS regulations. This would free IRBs to focus on truly high-risk research instead of wasting time on low-risk social science research. Since social science more often touches on political questions, this would also extricate government-mandated oversight boards (IRBs) from the delicate position of regulating politically charged research.
After many decades of stifling scientific progress, it is time for targeted but substantive IRB reform. Such an effort could recruit a bipartisan and pro-progress coalition. Leading center-left journalists Ezra Klein and Matthew Yglesias have both criticized IRB failures, while influential libertarians like Alex Tabarrok and Tyler Cowen are natural allies in streamlining unnecessary government red tape. Common-sense IRB reform is desperately needed and likely to succeed.
Willy Chertman is a graduate of the University of Miami Miller School of Medicine and a PGY-1 internal medicine physician at Holy Cross Health Hospital in Fort Lauderdale. He writes about medical innovation, regulation, and the history of science. Subscribe to his Substack and follow him on Twitter @Willyintheworld.
Read the other prize-winning essays from the CSPI Essay Contest:
“Gathering Steam: Unlocking Geothermal Potential in the United States” by Andrew Kenneson
“Mo’ Money Mo’ Problems” by Maxwell Tabarrok
“Drone Airspace: A New Global Asset Class” by Brent Skorup
“The University-Government Complex” by William L. Krayer
1. I am indebted to Carl Schneider, who called the Belmont Report the “system’s sacred texts,” for this metaphor.
2. In this context, “biomedical research” refers to medical research and experimental psychology research, while social science includes political science, sociology, anthropology, and economics.
3. Such as a Cochrane systematic review of 85 studies, which found no evidence that research participation was associated with worse outcomes.
4. I am indebted to Carl Schneider’s The Censor’s Hand for this estimate, which was first calculated in Chapter 4 of Introducing New Treatments for Cancer: Practical, Ethical and Legal Problems by Rory Collins, Richard Doll, and Richard Peto, edited by C. J. Williams.
5. Personal communication with involved parties.
"[Being a subject] is not particularly hazardous...surveys both before and after the rise of the IRB system found few examples of serious risk3...people and institutions with incentives to discover and publicize risk locate little...studies repeatedly find that patients are not hurt and might be helped by being research subjects."