Gina Boyd (gboyd@rand.org) is a Contracts and Grants Administrator at RAND Corporation in Pittsburgh, PA. This work was conducted as part of the Health Care Compliance Graduate Certification program at the University of Pittsburgh School of Law.
Scientists, doctors, scholars, and policy makers devote their efforts to healthcare research in hopes of finding solutions and improving lives. It’s no exaggeration to say that healthcare researchers want to make the world a better place. As hard as they work, though, and as much as they care, research—and the data and information it generates—must be used responsibly in all its phases, something that can get overlooked. Whether through curiosity, zeal, determination to keep pushing for an answer, or desperation to find a specific one; through more mundane failings like forgetfulness or ignorance; or through the unchecked cruelty and inhumanity of those in power, boundaries can be overstepped, bringing harm in any number of ways to the very people the research was purportedly designed to help. The integrity of the research[1] and of the researchers themselves—as well as their institutions’ reputations—can be diminished or destroyed. This article discusses both the ethical issues and the public interest issues surrounding the appropriate use of human subjects research and the data and knowledge it generates, and then looks at ways a compliance office can operationalize these principles to ensure consistency and reliability.
Human subjects research
For the purposes of the federal government and for this article, research is defined as “a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.”[2] The best way to begin looking at research, and the ways research affects lives, is to understand what human subjects research is, and then to build policies and operations around guarding its integrity and robustness.
A human subject is defined in the Code of Federal Regulations as “a living individual about whom an investigator conducting research obtains data through intervention or interaction with the individual, or identifiable private information.”[3]
There’s more to research than coming up with a hypothesis and then working to prove or disprove it, and there’s even more than that to healthcare research. Healthcare research involves people, their lives and their health, and their futures. Participating as research subjects makes people vulnerable in more ways than one.
Although researchers are generally benevolent in their intentions, it’s useful to look at the power dynamic inherent in the research design structure and to remember that benevolence isn’t always the case. The Nuremberg trials held after World War II revealed the shocking truth of the horrors that were carried out under the auspices of “research.” Twenty-three defendants, most of them German medical doctors, were charged as war criminals with crimes against humanity for “performing medical experiments upon concentration camp inmates and other living human subjects, without their consent, in the course of which experiments the defendants committed the murders, brutalities, cruelties, tortures, atrocities, and other inhuman acts.”[4] The researchers followed their curiosity without showing any regard at all for the humanity of their subjects.
These crimes alerted the research and lay communities that the need for rules and governance was urgent and led to the development of the Nuremberg Code of 1947, which set some ground rules, such as requiring the express consent of the research participant and other basic human decencies. Even with that foundation in place, though, cruelty continued. One of the most infamous examples of cruelty cloaked in the professional-looking guise of “research” to take place in the United States was the Tuskegee Syphilis Study, which was already operational at the time of the Nuremberg trials. In fact, the study began before the rise of the Nazis and World War II. According to a timeline provided by the Centers for Disease Control and Prevention,[5] the syphilis study evolved out of an effort initiated by Booker T. Washington in 1895. Washington’s plan to improve black economic development became the Tuskegee Education Experiment, which continued after his death. Beginning in 1926, stewards of the experiment noticed that poor health, including syphilis, was inhibiting the social strides the black community was making.
Every intention here seems to have been good, not merely driven by a cold curiosity that disregarded the lives and feelings of the people involved. The research went off the rails, though, when a cure for syphilis was discovered in 1947 and the researchers decided—without the knowledge or consent of the study subjects whose health and safety were at risk—to withhold treatment from the infected. No ethical concerns were raised for almost 20 years, and the study continued to run until 1972. Clearly, neither common sense nor the rules adopted after Nuremberg were enough to protect the subjects of the study or to encourage the researchers and decision-makers to extend empathy and decency to those in their care.
This revelation raised enough red flags and prodded enough consciences that the more thorough Belmont Report was released in 1979.[6] Assembled by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which was itself created as part of the congressional response to the Tuskegee debacle, the Belmont Report laid out basic ethical principles and led to the Common Rule.[7]
The Common Rule, released in 1991 and revised for 2018, codifies federal policy for the protection of human subjects. These regulations, found at 45 C.F.R. § 46, apply to the 20 federal agencies that sponsor research activities. The Common Rule applies specific rules and regulations across a broad swath of research activity involving human subjects, and it is discussed further below.
Despite the overwhelming depth and reach of the Common Rule, research ethics and responsibility still require scrutiny. A 2017 study out of Stanford University, Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation from Facial Images, explored whether artificial intelligence (AI) could predict someone’s sexuality by analyzing their headshot.[8] The study was conceived with the idea of helping to prevent discrimination, but it is more than possible that data collected from the research could instead be used to discriminate against LGBTQ people and to cause them outright harm. This study throws into relief the shortcomings of the Common Rule and shows how the newly enacted Revised Common Rule (2019) is necessary and yet still imperfect.
The 2019 revision itself points out that rules must be monitored and updated with consideration for changes in society and technology (e.g., the quantities of personal information now available thanks to the internet and social media—information people may not even realize they’re sharing), because research and the data it generates can have an enormous effect on a vulnerable population. The Revised Common Rule recognizes that the definition of identifiable private information, and the points at which information becomes identifiable, have evolved,[9] but it hasn’t quite adapted accordingly. The revision still provides that no consent is required for information that is publicly available (as was the case in the Stanford study, which gathered its images from online dating sites), but its very adaptability acknowledges that the need for ongoing evaluation and updating is paramount. Similarly, the absence of an informed consent requirement in this case and others like it is unfortunate: it opens up the potential for research to foment social harm that can easily lead to physical harm. Had the people in the Stanford study been aware of it and been treated as human subjects, they could have decided whether or not to participate, and they could perhaps have learned or benefitted from the results.
Despite its shortcomings, the Common Rule has put many safeguards in place for protecting human subjects, and the Office for Human Research Protections (OHRP) under the U.S. Department of Health and Human Services (HHS) is the generator, repository, and conduit for all of them. OHRP is the place to go for information on Institutional Review Boards (IRBs) and Federalwide Assurances (FWAs).[10] Outside of OHRP, but still under HHS, the Health Insurance Portability and Accountability Act (HIPAA) also helps to protect research subjects (as well as all people receiving healthcare).