The advent of generative artificial intelligence (AI) promises tremendous leaps in productivity, new revenue, cost savings, and greater innovation. After decades of slow progress in AI, generative AI has moved the technology from the fringes to the mainstream. Companies are launching AI initiatives in legal, finance, marketing, product design, engineering, and nearly every other part of the organization. (Full disclosure: our company, Contoural, is launching an AI-based records management initiative.) It is no overstatement to say AI has the potential to be transformative. Part 1 of this two-part series addresses the risks and restrictions organizations face in deploying AI and the key elements of an AI governance strategy. Part 2 will detail how to develop an AI governance function.
Regulations are evolving
Generative AI’s explosive adoption has drawn a quick response from regulators. Every week, governments around the world propose new restrictions on how and where this technology can be used. Aiming to set the global standard, European regulators have announced restrictions on how AI can use information about individuals, along with broader safeguards.[1]
In the U.S., states are limiting how companies can use AI to make financial decisions such as loan approvals. (Note: At least one data protection authority in the EU has imposed similar limits.) The Biden administration established new safety and security standards to protect privacy and civil rights.[2] Recently, the U.S. Securities and Exchange Commission (SEC) warned businesses against “AI washing,” or making false claims about their use of AI.[3] Companies face such pressure to show investors they are taking advantage of AI-powered products that the SEC felt it necessary to caution against bogus claims.
These new regulations are just the beginning: we expect many more countries and states to develop rules limiting AI this year. The forecast for the AI regulatory environment is rushed and a bit messy.
Emerging AI regulatory requirements
Governments across the world are quickly enacting a variety of AI regulations. Here is a small sampling of different requirements.
Europe: Europe considers itself a leader in AI regulation and has created a legal and regulatory framework based largely on the potential risk posed by AI systems. High-risk systems would be regulated more strictly, including being required to carry out a rights impact assessment. Lower-risk systems would face lighter obligations, such as disclosure requirements.
U.S. federal government: The Biden administration issued an executive order on AI. It requires AI developers to share safety test results with regulators, establishes guidelines for the federal government’s own use of AI, and prohibits AI-driven discrimination.
U.S. states: Numerous states have either proposed or enacted a variety of AI regulations focusing on disclosure, consumer profiling, unfair discrimination in financial services and hiring, facial recognition, and registries.
ISO/IEC 42001: The International Organization for Standardization recently released a framework for organizations developing, providing, or using AI-based products or services.
China: China has developed a series of AI regulations, including a government registry and rules limiting synthetically generated images, video, audio, and text.
These requirements—and the number of jurisdictions creating them—are certain to grow in the coming year.