Implementing Trustworthy AI in VA High Reliability Health Care Organizations
Background: Artificial intelligence (AI) has great potential to improve health care quality, safety, efficiency, and access. However, adoption of AI in health care has lagged behind other sectors. Challenges, including data limitations, misaligned incentives, and organizational obstacles, have hindered implementation. Strategic demonstrations, partnerships, aligned incentives, and continued investment are needed to enable responsible adoption of AI. High reliability health care organizations offer insights into safely implementing major initiatives through frameworks like the Patient Safety Adoption Framework, which provides practical guidance on leadership, culture, process, measurement, and person-centeredness for successfully adopting safety practices. High reliability health care organizations ensure consistently safe, high quality care through a culture focused on reliability, accountability, and learning from errors and near misses.
Observations: The Veterans Health Administration applied a high reliability health care model to instill safety principles and improve outcomes. As the use of AI becomes more widespread, ensuring its ethical development is crucial to avoiding new risks and harm. The US Department of Veterans Affairs National AI Institute proposed a Trustworthy AI Framework tailored for federal health care with 6 principles: purposeful, effective and safe, secure and private, fair and equitable, transparent and explainable, and accountable and monitored. The framework aims to manage risks and build trust.
Conclusions: Combining these AI principles with high reliability safety principles can enable successful, trustworthy AI that improves health care quality, safety, efficiency, and access. Overcoming AI adoption barriers will require strategic efforts, partnerships, and investment to implement AI responsibly, safely, and equitably based on the health care context.
Trustworthy AI Framework
AI systems are growing more powerful and widespread, including in health care. Unfortunately, irresponsible AI can introduce new harm. ChatGPT and other large language models, for example, are known to sometimes present erroneous information convincingly. Clinicians and patients who use such programs may act on that information, leading to unforeseen negative consequences. Several frameworks on ethical AI have come from governmental groups.6-9 In 2023, the VA National AI Institute suggested a Trustworthy AI Framework based on core principles tailored for federal health care. The framework has 6 key principles: purposeful, effective and safe, secure and private, fair and equitable, transparent and explainable, and accountable and monitored (Table 2).10
First, AI must clearly help veterans while minimizing risks. To ensure purposefulness, the VA will assess patient and clinician needs and design AI that targets meaningful problems, avoiding scope creep and feature bloat. For example, adding new features to AI software after release can clutter and complicate the interface, making it difficult to use. Rigorous testing will confirm that AI meets its intent prior to deployment. Second, AI must be designed and validated for effectiveness, safety, and reliability. The VA pledges to monitor AI’s impact to ensure it performs as expected without unintended consequences. Algorithms will be stress tested across representative datasets, and approval processes will screen for safety issues. Third, AI models must be secured against vulnerabilities and misuse. Technical controls will prevent unauthorized access or changes to AI systems. Audits will check that internal usage conforms to policy. Continual patches and upgrades will maintain security. Fourth, the VA will manage AI for fairness, avoiding bias. It will proactively assess datasets and algorithms for potential biases based on protected attributes such as race, gender, or age. Biased outputs will be addressed through techniques such as data augmentation, reweighting, and algorithm adjustments. Fifth, transparency explains AI’s role in care. Documentation will detail an AI system’s data sources, methodology, testing, limitations, and integration with clinical workflows. Clinicians and patients will receive education on interpreting AI outputs. Finally, the VA pledges to closely monitor AI systems to sustain trust. The VA will establish oversight processes to quickly identify declines in reliability or unfair impacts on subgroups. AI models will be retrained as needed based on incoming data patterns.
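To make the fairness principle concrete, one common reweighting technique assigns inverse-frequency sample weights so that underrepresented subgroups contribute equally during model training. The sketch below is a minimal illustration, not VA practice; the function name and the equal-contribution weighting scheme are assumptions for the example:

```python
from collections import Counter

def subgroup_reweight(groups):
    """Inverse-frequency sample weights: each subgroup contributes
    equally in aggregate, and the weights average to 1 overall."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # A sample in group g gets weight n / (k * count_g).
    return [n / (k * counts[g]) for g in groups]

# A group that appears once among four samples is upweighted.
groups = ["A", "A", "A", "B"]
weights = subgroup_reweight(groups)
```

In a real pipeline these weights would be passed to the training routine (e.g., a `sample_weight` argument), making the minority subgroup carry as much total influence as the majority one.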
Each Trustworthy AI Framework principle connects to others in existing frameworks. The purpose principle aligns with human-centric AI focused on benefits. Effectiveness and safety link to technical robustness and risk management principles. Security maps to privacy protection principles. Fairness connects to principles of avoiding bias and discrimination. Transparency corresponds with accountable and explainable AI. Monitoring and accountability tie back to governance principles. Overall, the VA framework aims to guide ethical AI based on context. It offers a model for managing risks and building trust in health care AI.
Combining VA principles with high reliability safety principles can ensure that AI benefits veterans. The leadership and culture aspects will drive commitment to trustworthy AI practices. Leaders will communicate the importance of responsible AI through words and actions. Culture surveys can assess baseline awareness of AI ethics issues to target education. AI security and fairness will be emphasized as safety critical. The process aspect will institute policies and procedures to uphold AI principles throughout the project lifecycle. For example, structured testing processes will validate safety. Measurement will collect data on principles like transparency and fairness. Dashboards can track metrics like explainability and bias. A patient-centered approach will incorporate veteran perspectives on AI through participatory design and advisory councils. Veterans can give input on AI explainability and potential biases based on their diverse backgrounds.
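The measurement aspect can be illustrated with a simple subgroup metric a dashboard might track. The sketch below computes a demographic parity gap, the spread in positive-prediction rates across subgroups; it is one of many possible fairness metrics, and the function names are illustrative assumptions rather than a VA specification:

```python
def positive_rates(preds, groups):
    """Positive-prediction rate per subgroup (preds are 0/1 labels)."""
    totals, positives = {}, {}
    for p, g in zip(preds, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + p
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(preds, groups):
    """Spread between highest and lowest subgroup rates; 0 means parity."""
    rates = positive_rates(preds, groups)
    return max(rates.values()) - min(rates.values())

# Group A is flagged twice as often as group B in this toy data.
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0],
                             ["A", "A", "A", "B", "B", "B"])
```

A dashboard could recompute such gaps on incoming data and alert oversight staff when a threshold is exceeded, supporting the monitoring principle described above.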
Conclusions
Joint principles will lead to successful AI that improves care while proactively managing risks. Leaders should be engaged to stress the necessity of eliminating biases. Security should be built into the AI development process. AI transparency features should be co-designed with end users. The impact of AI should be closely monitored across safety, fairness, and other principles. Adhering to both Trustworthy AI and high reliability organization principles will earn veterans’ confidence. Health care organizations like the VA can integrate ethical AI safely via established frameworks. With responsible design and implementation, AI’s potential to enhance care quality, safety, and access can be realized.
Acknowledgments
We would like to acknowledge Joshua Mueller, Theo Tiffney, John Zachary, and Gil Alterovitz for their excellent work creating the VA Trustworthy Principles. This material is the result of work supported by resources and the use of facilities at the James A. Haley Veterans’ Hospital.
