Technology’s exponential development and use in healthcare provide potentially significant benefits for behavioral health patients but also raise ethical and compliance concerns. The most recent technological advance is the use of artificial intelligence (AI). Unfortunately, laws, rules, and regulations do not change as quickly as technology. Compliance professionals will want to keep in close contact with any department considering the use of AI, including behavioral health, because of both ethical and confidentiality concerns. Mental health stigma is alive and well and can create problems with employment and other activities for those living with mental health conditions. When compliance collaborates with departments such as behavioral health, those concerns can be minimized.
The pandemic brought mental health issues to the forefront. Both the Biden administration and the Substance Abuse and Mental Health Services Administration (SAMHSA) have put forth plans to address identified issues. The Biden administration is working to improve insurance coverage for mental health, while SAMHSA is working to strengthen release-of-information requirements, especially for substance use disorder. The use of AI will factor into both plans as a potentially cost-effective way to address mental health concerns.
AI benefits
So, what is AI? According to IBM, “Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.”[1] Put more simply, it is having computers “think” like humans.
According to research, AI appears to offer several improvements in treating mental health conditions. A study published in the Journal of Medical Internet Research found that AI “was associated with significant improvements in substance use, confidence, cravings, depression, and anxiety.”[2] The authors believe the benefits of AI lie in its ability to compare and analyze large amounts of data as well as to increase “equity and access” to mental health treatment.[3] One identified disadvantage is predictive accuracy for ethnic groups that may lack access to mental healthcare. Lack of access leads to a lack of data for AI to analyze, making it less likely to accurately predict issues in those populations.[4]
Several studies of groups for which data is available found a high level of accuracy in predicting suicidal thoughts as well as significant mental health issues.[5] A Vanderbilt study found that, with access to medical records, demographics, and admissions information, AI had an 80% accuracy rate in predicting whether an individual would die by suicide.[6] Given these benefits, it seems that AI should be pursued; at the same time, however, numerous ethical and privacy issues must be considered and addressed.