As the use of artificial intelligence (AI) in the administration of health care increases this year, compliance officers should keep an eye out for its evil twin. AI may make it easier for threat actors to evade some of the protections against cyberattacks even as it promises to improve efficiency and compensate for staff shortages.
The thrills and chills of AI will be a focus of 2024, experts say. There’s been a lot of movement in the use of AI platforms to power medical coding, and “2024 will be a watershed moment,” said attorney Kyle Gotchy, with King & Spalding in Sacramento. “A lot of compliance and legal considerations are wrapped up in that.”
At the same time, cybercriminals will increasingly exploit AI to make it harder for health care organizations to thwart phishing, said Barry Mathis, a principal at PYA. “Generative AI will be getting into the cybersecurity world,” he predicted. “2024 will be the year of ‘don’t trust anything and verify it twice.’” Threat actors will use deepfakes to mess with everyone’s heads. “As much as we are thrilled to pull up our phone and say, ‘Write me a country song about missing my wife,’ bad actors are using AI to create email that looks like it comes from the CEO asking for information from your W-2,” Mathis said.
Although AI and cyberattacks are two of the more theatrical events on the horizon, many others will unfold (or continue) this year, according to compliance officers, attorneys and consultants. In 2024, the HHS Office of Inspector General (OIG) is expected to start releasing industry-specific compliance program content for various providers and suppliers on the heels of the General Compliance Program Guidance (GCPG) it unveiled Nov. 6.[1] Hospitals are laser-focused on whether there will be a material improvement in Medicare Advantage (MA) payments, processes and audits now that CMS’s rule on MA policy and technical changes took effect Jan. 1.[2] They also anticipate ongoing audits of post-acute care, cardiac procedures and other high-dollar areas. New price transparency rules took effect, and more may be coming because of pending legislation, while the end of 2024 will close out certain telehealth flexibilities.
How much flex CMS will have in the future depends on a forthcoming decision from the U.S. Supreme Court about the fate of so-called Chevron deference. As it stands, courts generally defer to agencies like CMS when a statute is ambiguous, but that’s being challenged in Loper Bright Enterprises v. Raimondo, said attorney Andy Ruskin, with K&L Gates in Washington, D.C.[3] “The Supreme Court won’t throw out Chevron deference, but they may say, ‘We no longer should tell you that if a statute is not clear, any rationale an agency puts forward will be upheld as long as it’s not laughable,’” he explained. “The Loper case could really reach down to all interactions with CMS and regulated parties because it will go to what they can do through regulations or guidance. It’s really significant.”
In the enforcement arena, the knives are out for private equity and other investment funds involved in health care companies, experts said. Other targets include MA plans and remote monitoring, but the overarching False Claims Act (FCA) enforcement picture may be affected by a dissent in a U.S. Supreme Court decision (see story, p. 1).[4]
AI: Are Compliance and Legal at the Table?
Digging deeper into one aspect of AI, Gotchy said there have been advances in its use to power medical coding. But he’s worried compliance and legal people may not be at the table when organizations make AI purchasing and implementation decisions.
“AI-powered technology is already radically changing one of the most costly parts of the revenue cycle, and there are a number of tailwinds propelling the adoption of this technology,” Gotchy said. First, administrative waste weighs down the system. Second, “we are in the midst of a coding staffing crisis.” The transition is also driven by new coding demands, including the shift from ICD-9 (13,000 codes) to ICD-10 (68,000 codes). “The upshot is leaders think AI can be part of the solutions to intractable problems because they have the ability to improve accuracy, create new efficiencies, and generate cost savings and revenue capture for these organizations,” Gotchy said.
There are two main varieties of AI coding technology. The first is computer-assisted coding, which suggests codes that human coders may accept, reject or modify, Gotchy said. The second is fully autonomous coding, which reduces or eliminates the need for human coders. “It takes unstructured data in electronic health records, automatically codes most of the claims and sends those claims directly to billing,” Gotchy said. A caveat: fully autonomous coding may not be ready yet for some providers and use cases. Although it’s the ultimate goal, “there are questions that compliance and legal stakeholders should be considering.” Here are a few of them (boiled down), followed after the list by a brief sketch of how the two coding modes differ:
- What level of coding accuracy is sufficient? Is AI subject to a different accuracy level than human coders?
- How does the use of fully autonomous coding affect an organization’s certification on UB-04 and 1500 Medicare claim forms that all the information is true, accurate and complete?
- Is the AI ready for your specialty use case?
- What policy should an organization adopt to guide a coder’s deviation from a computer-assisted coding recommendation?
- How will the AI adjust to changes in coding standards?
- How are vendors using patient data to train their AI? “All these AI products are highly reliant on their access to high-quality training data in the specialty they are targeting,” Gotchy said.
- How are you educating providers about adopting this technology?
- Are there potential landmines with AI products for risk-adjustment (RA) diagnosis coding? “They’re coming at a time when MA plans and providers are facing increased scrutiny from the government for their efforts to increase their RA scores,” Gotchy said.
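To make the computer-assisted versus fully autonomous distinction concrete, here is a minimal, hypothetical sketch of how a coded claim might be routed between the two paths. The function names, the CodeSuggestion structure and the 0.95 confidence threshold are illustrative assumptions for this sketch, not any vendor’s actual API or a recommended accuracy standard.

```python
from dataclasses import dataclass


@dataclass
class CodeSuggestion:
    icd10_code: str    # e.g., "E11.9" (type 2 diabetes without complications)
    confidence: float  # model's confidence in the suggestion, 0.0 to 1.0


def suggest_codes(note_text: str) -> list[CodeSuggestion]:
    """Stand-in for a vendor model that reads an unstructured chart note.
    Returns canned suggestions here; a real product would run inference."""
    return [
        CodeSuggestion("E11.9", 0.97),  # type 2 diabetes mellitus
        CodeSuggestion("I10", 0.88),    # essential (primary) hypertension
    ]


def route_claim(suggestions: list[CodeSuggestion],
                autonomy_threshold: float = 0.95) -> str:
    """Take the fully autonomous path only when every suggested code clears
    the threshold; otherwise fall back to computer-assisted coding, where a
    human coder accepts, rejects or modifies each recommendation."""
    if suggestions and all(s.confidence >= autonomy_threshold for s in suggestions):
        return "send_to_billing"        # autonomous: claim goes straight out
    return "queue_for_human_review"     # computer-assisted: human in the loop


if __name__ == "__main__":
    suggestions = suggest_codes("Patient presents with polyuria and ...")
    print(route_claim(suggestions))     # prints "queue_for_human_review"
</code>
```

In a real deployment, where to set that threshold and when claims must fall back to a human coder are exactly the kinds of decisions Gotchy suggests compliance and legal stakeholders should weigh in on.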