AI in Medicine Sparks Excitement, Concerns From Experts
AI’s Black Box
AI systems, particularly those utilizing deep learning, often function as “black boxes,” meaning their internal decision-making processes are opaque and difficult to interpret. Dr. Hatherley said this lack of transparency raises significant concerns about trust and accountability in clinical decision-making.
While Explainable AI methods have been developed to offer insights into how these systems generate their recommendations, these explanations frequently fail to capture the full reasoning process. Dr. Hatherley explained that this is similar to using a pharmaceutical medicine without a clear understanding of the mechanisms by which it works.
This opacity in AI decision-making can foster mistrust among clinicians and patients, limiting the technology’s effective use in healthcare. “We don’t really know how to interpret the information it provides,” Ms. Hernandez said.
She said that while younger clinicians might be more open to testing the waters with AI tools, older practitioners still prefer to trust their own senses, looking at the patient as a whole and observing the evolution of the disease. “They are not just ticking boxes. They interpret all these variables together to make a medical decision,” she said.
“I am really optimistic about the future of AI,” Dr. Hatherley concluded. “There are still many challenges to overcome, but, ultimately, it’s not enough to talk about how AI should be adapted to human beings. We also need to talk about how humans should adapt to AI.”
Dr. Hatherley, Dr. Alderman, and Ms. Hernandez have reported no relevant financial relationships.
A version of this article appeared on Medscape.com.