The HHS Secretary’s Advisory Committee on Human Research Protections (SACHRP) recently approved recommendations on the ethical and regulatory considerations for the use of artificial intelligence (AI) in human subjects research after removing some language discussing the potential harms of AI.[1]
The advisory committee had originally approved the document “IRB [institutional review board] Considerations on the Use of Artificial Intelligence in Human Subjects Research” at its July 2 meeting. But the SACHRP Subcommittee on Harmonization, which crafted the document, continued work on it to address concerns expressed by the full SACHRP when the document was first presented.
At the Oct. 19 SACHRP meeting, harmonization subcommittee member Stephen Rosenfeld noted that the document, as originally approved, had a lengthy preamble with language that “was philosophic and talked about the harms” of AI in medical research. That preamble was in addition to the “very concrete responses” to the questions that the HHS Office for Human Research Protections (OHRP) had asked the panel to consider.
“After a long discussion [among subcommittee members], we just decided that there was enough in the preamble that was kind of speculative and reflected opinions from the subcommittee,” Rosenfeld told SACHRP members. “And the board, I think, while generally agreeing with our conclusions, wasn’t entirely comfortable.”
Therefore, Rosenfeld said, the subcommittee removed the preamble. Instead, he said, it inserted language stating, “Given the rapidly evolving nature of AI and machine learning (ML), and the imprecise definitions and varied understanding of those terms, the committee did not reach consensus on a concise background framing that it felt would be authoritative. On the other hand, given that same evolution and ubiquity of the use of these technologies, the committee felt it important to respond to the charge without undue delay.”
As part of its revisions, the subcommittee also provided “better examples” to explain the first two questions in the document, which touch upon data collection for AI and the definition of “human subjects” related to AI-powered research, Rosenfeld said.
In its original charge, SACHRP was asked to provide answers to 10 specific questions involving the use of AI and ML in human research, touching on when such research would be covered by or exempt from the Common Rule, changes needed in consent, and how IRBs should consider the potential for bias or flaws in research that includes AI.
More Consideration Urged
SACHRP adopted three recommendations for the HHS secretary.
In the first recommendation, SACHRP explained that AI/ML and Big Data (BD) research “expose the limits of the traditional concept of identifiability that serves as the basis for privacy protections under the Common Rule.” The combination of large data sets makes it possible to learn or infer information about individuals that they might not have knowingly disclosed, according to the recommendations.
Because of this, SACHRP urged the HHS secretary to “follow through on the Common Rule’s commitment to regularly reexamine the meaning of identifiability in response to evolving technology and research practices.” In addition, SACHRP recommended that “the Secretary consider whether identifiability remains a concept that would be recognized by research participants and the general public as useful in setting limits on federally guaranteed protections.”
In the second recommendation, SACHRP pointed out that data collection of all kinds is common in social media and is regularly monetized. The larger societal question about the ethics involved is beyond the scope of SACHRP but “sits quietly in the background of AI/ML and BD considerations under the Common Rule,” the document said, adding, “SACHRP recommends that the Secretary consider a more nuanced but explicit definition of public versus private behavior and private information that recognizes the deep changes wrought by technology since these concepts were first enshrined in regulation.”
The recommendations also noted that the original research regulations were primarily written in response to harms that occurred in biomedical research, and “their requirements disproportionately protect against physical harms that would be recognized as such by all members of society. Similarly, there is an assumed broad consensus that improving health and lessening the burden of disease is a worthwhile public good and role for the federal government.”
However, many of the risks presented by AI/ML and BD fall outside of the biomedical and health care research settings, SACHRP said, and many of their potential harms fall on groups. “How to include relevant voices in establishing or interpreting research regulations is a difficult problem that is unlikely to have a solution that will satisfy all. Nonetheless, this difficulty should not be an excuse not to explicitly consider the problem and seek a solution that tries to address group concerns fairly, particularly when research is publicly funded.”
Therefore, the third recommendation called for the secretary to “consider establishing fora and mechanisms to facilitate dialogue, and ultimately, regulatory guidance, about how the interests of groups predictably affected by AI research might be considered and protected, consistent with maintaining scientific integrity. Further, SACHRP recommends that, based on such opportunities for dialogue, the Secretary establish formal guidance to ensure that anticipated benefits as well as risks of harm of research to affected groups, particularly of research outside the biomedical domain, are considered when HHS considers funding research projects that use AI or that refine AI methods and algorithms, when such group benefits and harms may predictably be at stake.”