Artificial intelligence (AI) has seen significant advancement and adoption over the past few decades. 2023 alone was a defining year for AI, marked by the mainstream rise of ChatGPT, a sophisticated generative AI tool that has made AI easily accessible to the public, particularly to those with little or no coding experience.[1] Built on a large language model, ChatGPT can compose written content and engage in human-like conversational dialogue, and is intended to enhance productivity and efficiency. More broadly, AI is being implemented across most businesses today to streamline processes and surface data trends, and many leaders expect AI to substantially transform their industries and shape the future of work.[2]
AI presents opportunity and risk
While AI brings substantial time and cost savings to businesses and employees, these technologies have evolved to a point where they also carry significant risk and can be detrimental if not governed properly. Data privacy is one of the highest-risk areas. For instance, AI tools used for automated decision-making may violate not only data privacy laws but also sector-specific laws, such as the Fair Credit Reporting Act and the Equal Credit Opportunity Act in banking, and Title VII of the Civil Rights Act in the context of hiring and promotions. Additionally, many AI tools do not comply with current data privacy rules, such as those under Europe's General Data Protection Regulation.[3] Finally, businesses run the risk of employees improperly entering personal or confidential business data into tools like ChatGPT.
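That last risk, inadvertent disclosure, is one place where a lightweight technical guardrail can help. The following is a minimal, hypothetical Python sketch of redacting obvious personal identifiers from text before it leaves the organization for an external generative AI service; the patterns, labels, and redact function here are illustrative assumptions, not a production-grade PII detector.

```python
import re

# Illustrative patterns only: a real deployment would use dedicated
# PII-detection tooling rather than a handful of regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with placeholder tags before the
    text is sent to an external service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: contact Jane Doe at jane.doe@example.com or 555-123-4567."
print(redact(prompt))
# Summarize: contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
```

In practice, an organization would pair detection tooling of this kind with policy controls and employee training rather than rely on pattern matching alone.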
As AI continues to catch the eye of business leaders looking to enhance processes, it is also top of mind for policymakers. At the federal level, the Biden administration released the Blueprint for an AI Bill of Rights in 2022,[4] followed by an Executive Order in 2023,[5] both of which provide guidance on how to ensure AI is safe, secure, and trustworthy. There has also been action at the state level: California proposed requiring data impact assessments for automated decision-making,[6] and Connecticut enacted a bill regulating how AI may be used with respect to personal data privacy.[7]
While these laws directly regulate the intersection of data privacy and AI, the reality is that many organizations are unprepared to comply with them. A reactive approach to AI regulation is no longer a feasible or safe option; compliance leaders must instead create a proactive AI governance strategy that avoids the risk of unlawful conduct and the penalties that follow. With so many existing and proposed laws in flux, the task may seem daunting, but there are essential steps compliance leaders can take today to significantly reduce data privacy concerns.