Institutional Review Boards (IRBs) are required by United States (US) law to review research proposals involving human subjects to ensure that the subjects’ rights and welfare are adequately protected. In conducting this review, IRBs weigh the legal, scientific, and ethical aspects of a proposed study to determine whether and how it should proceed, and they monitor the project until its completion. [1], [2], [3]
Since the establishment of IRBs in the mid-1970s, the processes and outcomes of IRB decision-making have been the subject of empirical research. Multiple commentators have reported evidence that the process is inefficient, expensive, and time-consuming, and that different IRBs often give different assessments of similar—or even identical—research proposals. The secrecy of the deliberations has also been criticized as preventing researchers and academics from evaluating the substance of the decisions.
Klitzman has suggested that the publication of IRBs' decisions may ameliorate this situation. [4] Publication would make the process immediately transparent and, over time, would establish a body of “common law” to assist IRBs in their deliberations, eventually achieving greater efficiency and consistency in decision-making. This essay evaluates the merits of this suggestion. It first describes the criticisms that have been made of IRBs, then sets out Klitzman’s proposal and several counter-arguments. It concludes that the publication of IRB decisions would be a valuable step in addressing the identified problems.
Criticisms of IRB decision-making
In 2011, Abbott and Grady conducted the first systematic review of studies of IRB decision-making in the US. [1] Dissatisfaction with the process was evident from the outset, as the underlying studies were motivated by the frustration of investigators and commentators, who described the system as “outdated,” “dysfunctional,” “overburdened,” and “overreaching.” And indeed, Abbott and Grady’s conclusions were overwhelmingly negative.
Variability in the process and outcomes was identified by many commentators as the most concerning feature of IRB decision-making. Abbott and Grady’s review found that IRBs differed in their interpretation of federal regulations, their application of value judgments, and the time taken to review proposals. [1] These differences produced fundamentally divergent determinations of the level of review required (i.e., full or expedited), the level of risk faced by participants, and how participants should be recruited and compensated. Beyond these substantive contradictions, review times varied enormously: one IRB would assess a protocol within a week, whereas another would take over 30 weeks. Similar results were reported by Anderson and DuBois, and by Kahn and Silberman. [5], [6]
Pritchard has explained this variability as reflecting differences in the interpretation of the regulatory provisions, which in turn reflect the broad drafting of federal laws that do not, for example, define social or individual “risks” and “benefits” or explain how to balance them. [7] Divergence is also seen at an individual level, with members perceiving risks differently depending on their education, experience, cultural norms, and individual fears. [5], [8] Linked to this is Anderson and DuBois’ finding that many IRB members admit to relying on intuition rather than objective analysis, seeking “peace of mind” as to the riskiness of the study, and using the “sniff test” to determine whether a protocol is ethically sound. [5], [9]
The publication of IRB decisions
The IRB review process has been described as occurring in a “black box”: researchers submit protocols and receive an arbitrary response at an arbitrary time. Only researchers whose proposals are rejected receive an explanation, and these explanations are not a matter of public record. Given this secrecy, the research described above focused on external features, including written policies, IRB composition, workloads, timelines, and review outcomes. Many commentators consider that they cannot effectively evaluate IRB decisions without direct evidence of the actual decision-making processes. [1], [5]
One way to reveal this process would be to publish IRB decisions. This was referred to in passing by Rosnow et al., who recommended that IRBs be provided with a “casebook” of research protocols, and by Mueller, who suggested that IRBs adopt the perspective of a court, or at least that of a literature reviewer, and consider the processes followed by other IRBs. [9], [10]
In his 2015 book The Ethics Police, Klitzman took these suggestions a step further. Klitzman likened the IRB process to the court process, stating that the legal system “similarly confronts ambiguities and differences in interpretation, but avoids many pitfalls by being more transparent, and by seeking and drawing on documented precedents.” [4] He recommended the publication of decisions with a view to developing a body of “case law.” Klitzman considered that this body would confer an institutional memory on decentralized IRBs, reducing “unnecessary idiosyncrasies” and duplication of work. It would also formalize the existing system of communication between IRBs and make these discussions available to those who were excluded or did not want to participate directly. Practically, Klitzman suggested that publication could be managed by an external organization, such as the Office for Human Research Protections.
Arguments against the publication of IRB decisions
This proposal raises three immediate concerns, each of which may be quickly addressed. First, it may be argued that the publication of decisions would add to the administrative burden of IRBs. This objection does not hold in practice, however, as IRBs are already required to keep records of minutes, decisions, progress reports, correspondence, new findings, consent forms, and membership. [2], [3] The only additional task would be the production of a written decision; if the publication of that decision were externally managed, it would not add to an IRB’s workload.
Second, there is a risk that publication may reduce the flexibility of decision-making. In response, it may be noted that the current degree of flexibility may not be desirable, as it has resulted in the irrational variation described above. Further, there is no evidence that IRBs actually “import ‘community values’” that reflect the local legal and ethical climate: their decisions vary widely within the same locality and instead display intra-IRB biases. In any event, the use of precedent would not freeze the system, as any defensible departure could be explained in the decision. [4]
Third, there is a risk that the publication of IRB decisions could reveal sensitive information. Klitzman addressed this simply by saying that such information could be redacted. [4] It is also possible to publish the decision after the completion of the study, or to publish only successful applications (though the publication of only successful applications may undermine the efforts toward transparency).
Alternatives
The most dramatic alternative proposal is the centralized review of all studies, or the central accreditation and auditing of IRBs. [1], [9] This would almost certainly decrease variability and increase accountability. However, it would also render the process rigidly bureaucratic and unable to adapt to changing circumstances, and it would risk entrenching the perspectives of those employed in the central IRB or accrediting organization. [4], [11] As such, a centralized system may be more vulnerable to external influence than individual IRBs are. Such a structure would also make it difficult for researchers to communicate directly with IRBs, which may lengthen the review process.
Others have suggested the centralization of IRB oversight of multi-center studies. [1], [7] While this would, of course, reduce the variability within these studies, it would not affect the variability between different multi-center studies or individual studies. In this way, it would simply mask the real issue, which is that IRBs are making decisions differently—a concern that was borne out by the analysis of multi-center studies.
Commentators have also suggested targeted education or the adoption of evidence-based reasoning. [5], [7], [9] While these would be welcome additions, they would require an investment of resources, are likely to be applied unevenly across IRBs, and would not guarantee consistency of application. Further, the authors do not state how IRB members would access or identify the “best” evidence, nor do they demonstrate that this approach would itself ensure consistent determinations. However, the substance of this suggestion could be retained without these costs if it is treated as an augmentation of Klitzman’s proposal, e.g., by making such materials available as templates within the published database. [4], [7]
Finally, legal scholars have suggested the revision of the existing regulations to clarify the concepts that have caused confusion. [2], [12] However, it seems that this would frustrate the driving purpose of those provisions. It is impossible to prospectively define such concepts as “risk” and “benefit” in a manner that would suit all conceivable situations. Indeed, if this were possible, IRBs would be of little use. It is likely that the regulations were drafted in a deliberately vague way to allow them to be tailored to suit a diverse range of factual scenarios and evolve alongside scientific practices. Therefore, it is both practical and in line with the apparent legislative intent to identify and define these concepts with respect to particular cases, rather than in the abstract in advance.
Decision publication and the role of the IRB
This leaves the question of whether the arguments in favor of publication are strong enough to prompt a change to the current position. This analysis may be informed by the importance of the IRBs’ role. IRBs were established as independent entities to protect participants from unethical research practices. They often perform these duties on behalf of vulnerable individuals or populations, including children, terminally ill patients, and indigent persons. This role is considered sufficiently important that IRBs’ existence and composition are federally mandated. If IRBs do not fulfill their protective role, it is a matter of significant public concern.
Current empirical evidence suggests that IRB decision-making is inefficient and variable. At worst, this means that an error has allowed unacceptable research to be conducted (risking harm to the research subjects) or barred acceptable research (to the detriment of scientific progress). At best, erroneous or unnecessary modifications may delay or stall research progress, or simply complicate the procedure. This has been used to cogently argue that there is no evidence that IRBs are fulfilling their aim of protecting the public or, indeed, performing any worthwhile function; and that they are simply “self-serving” bureaucratic hoops through which researchers must jump. [10] These are matters of serious concern.
The publication of IRB decisions is a practical means of ensuring the fulfillment of this protective mandate. It would immediately provide commentators with the data required to investigate IRB decision-making and give IRB members practical insights into the interplay of legal and ethical requirements. In the mid-term, it could establish templates for common issues, such as the language of consent forms, to increase consistency and decrease the burden on individual IRBs. In the long term, it could compile a body of cases that would define broad concepts such as “minimal risk” by reference to factual scenarios. At each level, this would minimize the variability observed and bring a level of accountability to IRBs by requiring them to justify their actions. In addition, it may improve the efficiency of the entire procedure, for example, if researchers use the templates to draft proposals that require fewer modifications. On a practical note, while funding would be required at the outset, such a database would be relatively inexpensive to maintain once established. If successful, its costs may even be offset by the savings from streamlined decision-making.
Overall, the publication of IRB decisions offers, at a relatively small cost, the direct benefits that IRB decision-making so desperately needs. As such, the arguments in favor of publication outweigh the presumption in favor of the status quo.
Sarah Murphy, LL.B. (Hons), MBE '17 can be reached at sarah (at) kfw.co.nz.
[1] Abbott, Lura and Christine Grady. "A Systematic Review of the Empirical Literature Evaluating IRBs: What We Know and What We Still Need to Learn." Journal of Empirical Research on Human Research Ethics 6, no. 1 (2011): 3-19. https://doi.org/10.1525/jer.2011.6.1.3
[2] Stoddard, Daniel G. "Falling Short of Fundamental Fairness: Why Institutional Review Board Regulations Fail to Provide Procedural Due Process." Creighton Law Review 43 (2010): 1275-1327. http://hdl.handle.net/10504/40692
[3] "Protection of Human Subjects" Code of Federal Regulations, title 45 (2009).
[4] Klitzman, Robert. The Ethics Police? The Struggle to Make Human Research Safe. New York, NY: Oxford University Press, 2015. Chapter 6.
[5] Anderson, Emily E. and James M. DuBois. "IRB Decision-Making with Imperfect Knowledge: A Framework for Evidence-Based Research Ethics Review." Journal of Law, Medicine and Ethics 40, no. 4 (2012): 951-969. https://doi.org/10.1111/j.1748-720X.2012.00724.x
[6] Kahn, Katherine L. and George Silberman. "Burdens on Research Imposed by Institutional Review Boards: The State of the Evidence and Its Implications for Regulatory Reform." The Milbank Quarterly 89, no. 4 (2011): 599-627. https://doi.org/10.1111/j.1468-0009.2011.00644.x
[7] Pritchard, Ivor A. "How Do IRB Members Make Decisions? A Review." Journal of Empirical Research on Human Research Ethics 6, no. 2 (2011): 31-46. https://doi.org/10.1525/jer.2011.6.2.31
[8] Klitzman, Robert. "How IRBs View and Make Decisions About Social Risks?" Journal of Empirical Research on Human Research Ethics 8, no. 3 (2013): 58-65. https://doi.org/10.1525%2Fjer.2013.8.3.58
[9] Rosnow, Ralph L., Mary Jane Rotheram-Borus, Stephen J. Ceci, Peter D. Blanck, and Gerald P. Koocher. "The Institutional Review Board as a Mirror of Scientific and Ethical Standards." American Psychologist 48, no. 7 (1993): 821-826. https://doi.org/10.1037/0003-066X.48.7.821
[10] Mueller, John. "Ignorance is Neither Bliss Nor Ethical." Northwestern University Law Review 101, no. 2 (2007): 809-836.
[11] Sage, William M. "Lords of the Jumble: IRBs, Ethics, and the Common Law of the Common Rule." Health Affairs 35, no. 5 (2016): 934-935. https://doi.org/10.1377/hlthaff.2016.0397
[12] Hoffman, Sharona and Jessica Wilen Berg. "The Suitability of IRB Liability." University of Pittsburgh Law Review 67, no. 2 (2005): 365-427. https://doi.org/10.5195/lawreview.2005.65