Can AI enhance mental health treatment?

Three questions for clinicians

• Where is there a risk of bias?

There has been much discussion of bias in large language models in particular, since these programs inherit any bias present in the data used to train them. Yet there is often little to no visibility into how these models are trained, the algorithms they rely on, or how their efficacy is measured.

This is especially concerning in mental health care, where bias can contribute to lower-quality care based on a patient’s race, gender or other characteristics. One systematic review published in JAMA Network Open found that most of the AI models studied for psychiatric diagnosis carried a high overall risk of bias, which can produce misleading or incorrect outputs — a dangerous prospect in health care.

It’s important to keep the risk of bias top of mind when exploring AI tools and to consider whether a tool could pose any direct harm to patients. Clinicians should maintain active oversight of any use of AI and, ultimately, weigh an AI tool’s outputs against their own insights, expertise and instincts.
Clinicians have the power to shape AI’s impact

While there is plenty to be excited about as these new tools develop, clinicians should explore AI with an eye toward the risks as well as the rewards. Practitioners have a significant opportunity to shape how this technology develops by making informed decisions about which products to invest in and by holding tech companies accountable. By educating patients, prioritizing informed consent, and seeking ways to augment their work that ultimately improve the quality and scale of care, clinicians can help ensure positive outcomes while minimizing unintended consequences.

Dr. Patel-Dunn is a psychiatrist and chief medical officer at Lifestance Health, Scottsdale, Ariz.