Setting up a compliance program and realistically assessing risks are hard enough when the regulatory environment is known and laws establish clear boundaries. But today, in the context of new digital technologies—particularly those involving artificial intelligence (AI)—the job of a compliance professional has gotten much harder.
Every day, the news is full of stories about AI’s capabilities and shortcomings, calls for regulation, commitments by private companies to do the right thing, and so on. But what does that all mean in terms of concrete steps a compliance professional can take to protect corporate interests?
If you look up “compliance” and “best practices” in the context of AI, you will find a series of articles discussing the need for both without necessarily describing a clear path to getting there. Words such as “accountability,” “transparency,” “trustworthy,” “fair,” “responsible,” and “ethical” are sprinkled liberally throughout. What those words mean, however, is less clear. And how they can be made actionable is frequently not addressed at all.
I am a former federal judge who presided over trials where a lack of compliance landed companies and people in hot water. For the past several years, since before ChatGPT and generative AI (often referred to as “GAI”) hit public consciousness, I have been advising companies on how to construct compliance programs in the face of many unknowns. Today, GAI’s transformative potential has placed AI front and center for every regulatory body any business deals with. In this article, I offer some practical advice on how to create basic compliance protections.