The rapid rise of artificial intelligence (AI) has elevated board compliance and ethics reporting to unprecedented importance. Generative AI has become a central focus for the board reporting agenda of chief compliance officers (CCOs), particularly since the introduction of ChatGPT in November 2022. Board members now recognize the need to assess the ever-evolving AI landscape to fulfill their fiduciary obligations. As companies navigate a complex growth and regulatory landscape, it is imperative to establish a mutual understanding with boards of the organizational process required to develop risk frameworks that enable optimal use of AI.
The constantly changing risks associated with AI compliance and ethics present a challenge for CCOs, who are effectively charting new territory for compliance and ethics board reporting. Upon closer examination, however, it becomes clear that existing compliance and ethics reporting tools can be repurposed to begin this journey.
Simultaneously, the growing focus on compliance and ethics risks across industries and business areas has given compliance and ethics teams a unique opportunity to elevate their game. It has highlighted their significant roles in organizations and created momentum toward a critical inflection point for a long-awaited and well-balanced valuation of compliance risk management skills, technological implementation expertise, and organizational knowledge. The mandate for CCOs now is to fully leverage and build upon existing compliance and ethics expertise to effectively navigate the layers of AI and establish robust board oversight.
This article aims to delve into the practical aspects of AI compliance and ethics reporting, highlighting CCOs’ challenges and offering recommendations to maximize reporting effectiveness to the board. It focuses on the new aspects of AI compliance and ethics reporting and how they can be addressed by optimizing traditional compliance and ethics risk techniques and tools. However, it is essential to acknowledge that CCOs alone cannot resolve all AI controversies or unknown implications. The multidimensional challenge will require multilateral governmental and business collaboration for years.
The article reviews the top three compliance instruments CCOs can use to initiate systematic reporting on AI to the board:
- Mastering regulatory compliance for AI: This section focuses on the significance of building an agile legislative compliance and ethics baseline for AI and translating it into business controls that optimally leverage existing compliance and ethics frameworks.
- Maximizing compliance and ethics integration in AI: The subsequent section discusses key compliance and ethics domains that substantially overlap with evolving AI compliance and ethics requirements and why it is crucial to understand how AI triggers shift risk across these areas.
- Maturing the AI roadmap for compliance teams: Lastly, it is fundamental for CCOs to actively participate in the AI discussion by developing an AI strategy for their teams. Within compliance reporting, board members need to see that CCOs are proactively addressing the potential of AI for their teams rather than waiting for external signals to initiate their own AI endeavors.
Mastering regulatory compliance for AI
Ensuring sustainable regulatory compliance for AI is the top priority for CCOs seeking to mitigate risks and enable robust board oversight. Without it, compliance teams will struggle to provide fast and consistent guidance to business partners, which can have hefty implications for how quickly organizations bring AI solutions to their businesses and customers and may ultimately determine who succeeds in this changed business environment. An ad hoc or fragmented approach imposes major risks on companies and makes sound risk governance challenging.
Benefitting from a baseline
To achieve sustainable regulatory compliance for AI risks, organizations must develop an AI legislative compliance and ethics baseline.[1] The baseline summarizes the key requirements of applicable laws, regulations, and standards, providing a unified view of regulatory compliance expectations across operating jurisdictions, companies, and their respective entities. Many organizations can leverage existing regulatory risk attestation processes and expand them to include AI risk controls as part of operationalization.[2] The baseline serves as a foundation for:
- Keeping the board informed about the organization’s adherence to applicable laws, regulations, and industry and societal standards.
- Supporting the board in perceiving regulatory compliance as a journey with varying maturity stages and creating transparency around this agile approach.
- Identifying compliance gaps, potential regulatory issues, and the principal measures to address them, including escalating material risks and any risk appetite issues to the board.
- Allowing the board to assess the organization’s regulatory compliance state, ensure appropriate measures are in place, and address any compliance- or ethics-related resourcing or other oversight concerns.
- Supporting board members in making reasonable risk governance and appetite decisions despite the changing legislative and regulatory landscape.
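To make the baseline concept more concrete, the following is a minimal, illustrative sketch in Python of how a single baseline entry might be structured: a requirement summarized from a law or standard, the business controls mapped to it, and current versus target maturity. All names, fields, and example values (such as BaselineRequirement and the maturity stages) are assumptions for illustration only and are not drawn from any specific regulation or framework.

```python
# Illustrative only: a hypothetical structure for an AI legislative compliance
# baseline entry. Field names and example values are assumptions made for
# demonstration; they are not taken from any specific law or framework.
from dataclasses import dataclass, field
from enum import Enum


class Maturity(Enum):
    """Hypothetical maturity stages for a single regulatory requirement."""
    NOT_STARTED = 0
    GAP_IDENTIFIED = 1
    CONTROLS_DESIGNED = 2
    OPERATIONALIZED = 3


@dataclass
class BaselineRequirement:
    """One row of the baseline: a requirement mapped to business controls."""
    source: str                     # law, regulation, or standard
    jurisdiction: str               # where it applies
    requirement: str                # summarized expectation
    controls: list[str] = field(default_factory=list)
    current: Maturity = Maturity.NOT_STARTED
    target: Maturity = Maturity.OPERATIONALIZED

    def gap(self) -> int:
        """Distance between current and target maturity, for board reporting."""
        return self.target.value - self.current.value


if __name__ == "__main__":
    baseline = [
        BaselineRequirement(
            source="EU AI Act (draft)",
            jurisdiction="EU (extraterritorial reach assumed)",
            requirement="Classify AI systems by risk tier",
            controls=["AI use-case inventory", "Risk-tier assessment"],
            current=Maturity.GAP_IDENTIFIED,
        ),
    ]
    # Surface the largest gaps first, mirroring the escalation of material
    # risks and risk appetite issues to the board described above.
    for item in sorted(baseline, key=lambda r: r.gap(), reverse=True):
        print(f"{item.source} [{item.jurisdiction}]: gap={item.gap()}")
```

The same structure works equally well in a spreadsheet or governance, risk, and compliance tool; the point is that each requirement carries its own controls and maturity gap so that open items can be sorted and escalated consistently.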
Conquering the evolving environment
The rapidly changing landscape of AI legislation and regulation is evident in the 37 AI-related bills passed into law in 2022 across the 127 countries analyzed in Stanford University’s 2023 AI Index.[3] This global trend is expected to continue, creating a constant wave of new standards and requirements. The selection of AI laws, regulations, and standards to include in an organizational baseline will vary depending on a company’s geographical footprint and its willingness and ability to proactively adhere to important developments in AI regulatory frameworks at the global, regional, and local levels.
It is encouraging, however, that companies and boards can draw on the knowledge gained through previous legislative and regulatory initiatives when shaping their baseline approaches. A prime example is the global discussion on data protection and privacy. The European Union’s (EU) General Data Protection Regulation (GDPR), in force since 2018 and extraterritorial in scope, has demonstrated how powerful a single piece of legislation can be on a global scale. GDPR’s guiding principles have been incorporated into local privacy and personal information frameworks worldwide.
Drawing parallels between the AI Act and GDPR
It is evident the EU will significantly influence legislative discussions on AI. In June 2023, the European Parliament passed its version of the EU AI Act after nearly two years of deliberation. This paved the way for the final negotiations in the EU, with the target of finalizing the act by the end of 2023.[4] Like GDPR, the AI Act is expected to have an extraterritorial scope affecting companies outside the EU to varying degrees. This is projected to position the AI Act as a benchmark AI law that other jurisdictions may look toward when developing their own legislative initiatives.[5] Beyond the EU, Brazil and Canada are also racing to adopt general laws that apply to AI systems, and their initiatives are of great interest for baselining efforts.[6]
For baseline development purposes, it is vital for CCOs to start measuring local and regional developments against the AI Act’s expectations, understand where it may stipulate a different standard, and determine when it makes sense to adopt these emerging expectations well ahead of time. Board reporting will benefit from a robust overview of ongoing developments that highlights the laws of relevance, their expected in-force schedules, and their anticipated levels of impact. The legislative baseline also allows CCOs to report transparently on progress in each regulatory area against the target AI maturity grade, supporting an agile AI approach. To effectively address evolving AI legislative and regulatory proposals that apply to the company, it is paramount to keep board members informed and ensure alignment on baseline expectations.
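As a purely illustrative companion, the sketch below assumes a simple tracker of legislative developments that could feed the board overview described above, capturing each law’s regulatory area, expected in-force date, anticipated impact, and progress against a target maturity grade. The field names, impact scale, and example entry (including the placeholder date) are hypothetical and not predictions about any actual law.

```python
# Illustrative only: a hypothetical tracker of AI legislative developments for
# a board-level overview. All names, values, and dates are assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class LegislativeDevelopment:
    name: str                # law or proposal being tracked
    regulatory_area: str     # e.g., risk classification, transparency
    expected_in_force: date  # anticipated in-force date (subject to revision)
    impact: str              # anticipated impact: "low", "medium", or "high"
    progress: int            # current maturity grade, 0-3
    target: int              # target maturity grade, 0-3


def board_overview(items: list[LegislativeDevelopment]) -> None:
    """Print a one-line summary per development, highest anticipated impact first."""
    order = {"high": 0, "medium": 1, "low": 2}
    for d in sorted(items, key=lambda x: (order[x.impact], x.expected_in_force)):
        print(
            f"{d.name} | {d.regulatory_area} | in force ~{d.expected_in_force} "
            f"| impact: {d.impact} | progress {d.progress}/{d.target}"
        )


if __name__ == "__main__":
    board_overview([
        LegislativeDevelopment(
            name="EU AI Act (anticipated)",
            regulatory_area="Risk classification",
            expected_in_force=date(2025, 1, 1),  # placeholder date, not a prediction
            impact="high",
            progress=1,
            target=3,
        ),
    ])
```

Sorting by anticipated impact and in-force date keeps the board’s attention on the developments that matter most and soonest, while the progress-versus-target figures make the agile, staged nature of the compliance journey visible in a single line per law.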