Newly released guidance provides non-binding considerations for healthcare organizations adopting, evaluating, and managing artificial intelligence (AI)-driven tools in clinical and operational settings. “Responsible Use of Artificial Intelligence in Healthcare,” published by the Joint Commission (JC) and the Coalition for Health AI (CHAI), sets out principles in seven key areas intended to promote safe, effective, equitable, and ethically sound use of AI in healthcare:
- AI Policy and Governance Structures – Establish policies for implementing and using AI, along with a governance structure that keeps the hospital’s governing body aware of the uses, outcomes, and potential drawbacks of health AI tools. The governance structure should include oversight mechanisms and draw on expertise from key stakeholders, including, as appropriate, executive leadership, regulatory/ethical compliance, IT, cybersecurity, safety/incident reporting, and relevant clinical/operational areas, to ensure the needs of all impacted populations are considered.
- Patient Privacy and Transparency – Implement policies around data access, use, and protection, and develop a mechanism to disclose to, and educate, patients and their families on the use and benefits of AI tools. In addition, notify patients when AI directly affects their care and explain how their data may be used in the context of AI.
- Data Security and Data Use Protections – Protect patient data by implementing clear data security policies and limiting the permissible uses of exported data. Even when data is properly de-identified under HIPAA, organizations should apply strong protections and contractual guardrails, particularly given potential re-identification risks and any applicable state-law or contractual obligations.
- Ongoing Quality Monitoring – Set up a process to monitor and evaluate the safe performance of health AI tools. During procurement, require information from vendors on how the AI tool was tested and validated, how biases were evaluated and mitigated, and whether the vendor is willing to tune or validate the tool on a sample representative of the deployment context. Establish post-deployment monitoring and ongoing validation and testing activities using a risk-based approach scaled to the setting and population, considering proximity to patient care decisions (see the first illustrative sketch following this list).
- Voluntary, Blinded Reporting of AI Safety-Related Events – Enable knowledge sharing across the healthcare industry by voluntarily reporting AI-related safety events to an independent organization that can disseminate information to the field. Existing structures for reporting and assessing safety and quality issues include JC’s sentinel-event process and confidential reporting to federally listed Patient Safety Organizations.
- Risk and Bias Assessment – Establish a process to identify and address risks and biases in health AI tools, particularly where they pose a threat to patient safety or limit access to care. Gather information from vendors and conduct internal bias checks both when validating against local data and after deployment (see the second sketch following this list). Consider whether the training datasets were fit for purpose and representative, whether the AI tool underwent bias-detection assessment, and whether the algorithms were tested on the populations served.
- Education and Training – Provide training and education materials to healthcare providers using AI tools so they understand the benefits and can help prevent potential risks.
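
To make the monitoring principle above more concrete, the minimal Python sketch below computes a Population Stability Index (PSI), one common statistic for detecting drift between a model input's validation baseline and its live distribution after deployment. This is an illustrative sketch only, not part of the JC/CHAI guidance; the bin count and the rule-of-thumb thresholds in the comments are conventional assumptions, not regulatory standards.

```python
"""Minimal post-deployment drift check: Population Stability Index (PSI).

Illustrative sketch only -- not drawn from the JC/CHAI guidance. The bin
count and review thresholds below are common rules of thumb.
"""
import math

def psi(baseline, live, bins=10):
    """PSI between a baseline sample (e.g., validation data) and a live
    sample of the same numeric feature. Bin edges come from the baseline."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # Smooth empty bins so the log term below is always defined.
        return [max(c, 1) / len(sample) for c in counts]

    p, q = proportions(baseline), proportions(live)
    return sum((a - b) * math.log(a / b) for b, a in zip(p, q))

# Rule-of-thumb interpretation often used in model monitoring:
#   PSI < 0.10 -> stable; 0.10-0.25 -> moderate shift; > 0.25 -> investigate.
baseline = [0.20, 0.35, 0.40, 0.45, 0.50, 0.30, 0.55, 0.60, 0.25, 0.42]
live     = [0.50, 0.65, 0.70, 0.75, 0.80, 0.60, 0.85, 0.90, 0.55, 0.72]
print(f"PSI = {psi(baseline, live, bins=4):.3f}")
```

In practice, a check like this would run on a schedule for each monitored input or output, with the review threshold scaled to the tool's proximity to patient care decisions, consistent with the risk-based approach the guidance describes.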
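The bias-assessment bullet likewise contemplates internal checks across the populations served. One minimal way a team might operationalize that is to compare a performance metric such as sensitivity across demographic subgroups and flag large gaps for human review, as in the second sketch below. Again, this is hypothetical: the record schema, the choice of sensitivity as the metric, and the 0.05 gap threshold are assumptions for illustration, not requirements from the guidance.

```python
"""Minimal subgroup-performance check for bias assessment.

Illustrative sketch only -- the record layout ("group", "y_true",
"y_pred") and the review threshold are hypothetical assumptions.
"""
from collections import defaultdict

def subgroup_sensitivity(records):
    """Per-subgroup sensitivity (true-positive rate) from records of the
    form {"group": ..., "y_true": 0 or 1, "y_pred": 0 or 1}."""
    tallies = defaultdict(lambda: {"tp": 0, "fn": 0})
    for r in records:
        if r["y_true"] == 1:
            tallies[r["group"]]["tp" if r["y_pred"] == 1 else "fn"] += 1
    return {
        g: t["tp"] / (t["tp"] + t["fn"])
        for g, t in tallies.items()
        if (t["tp"] + t["fn"]) > 0
    }

def flag_gaps(rates, max_gap=0.05):
    """Return subgroups whose sensitivity trails the best-performing
    subgroup by more than max_gap (a hypothetical review trigger)."""
    if not rates:
        return []
    best = max(rates.values())
    return sorted(g for g, v in rates.items() if best - v > max_gap)

# Toy example: the model misses far more positives in group "B".
records = (
    [{"group": "A", "y_true": 1, "y_pred": 1}] * 9
    + [{"group": "A", "y_true": 1, "y_pred": 0}] * 1
    + [{"group": "B", "y_true": 1, "y_pred": 1}] * 6
    + [{"group": "B", "y_true": 1, "y_pred": 0}] * 4
)
rates = subgroup_sensitivity(records)
print(rates)             # {'A': 0.9, 'B': 0.6}
print(flag_gaps(rates))  # ['B']
```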
While these recommendations do not carry regulatory force, they reflect themes found in other authoritative contexts, including FDA guidance on AI-enabled medical devices and drug discovery systems. In particular, the emphasis on risk evaluation, bias mitigation and data representativeness, continuous monitoring, transparency, and cybersecurity mirrors the direction of prior FDA guidance.
The JC and CHAI guidance contributes to the growing consensus around responsible AI use in healthcare, signaling that accreditation bodies, regulators, and professional associations may expect alignment with similar principles over time. Organizations developing or deploying AI tools in clinical workflows or related healthcare decision-making should monitor how this industry dialogue continues to evolve and consider taking reasonable, risk-based steps to align with these principles. This may involve reviewing internal governance structures, documenting decision-making processes surrounding AI adoption, verifying data provenance, validating and monitoring AI performance in real-world use, and adopting strategies to detect and address bias.
AI-tool providers supporting healthcare organizations will likewise play an essential role in operationalizing these principles. Providers should anticipate customer inquiries regarding validation data, bias testing, and post-deployment monitoring, and may wish to proactively document their internal governance, data management, and model-update practices. Developing, among other things, bias-testing summaries and audit-ready documentation could not only facilitate customer compliance but also position vendors as preferred partners in a tightening regulatory environment. Early alignment with JC/CHAI principles, and other emerging parallel frameworks, may ultimately provide a competitive advantage as accreditation and oversight bodies move toward more formalized expectations for responsible AI governance.
Together, these developments underscore a broader industry shift toward formalized expectations for AI governance, accountability, and transparency in healthcare. By proactively aligning internal policies, contracting frameworks, and vendor relationships with the JC/CHAI principles, both healthcare organizations and their AI-tool providers can strengthen patient trust, reduce operational and legal risk, and position themselves ahead of emerging regulatory and accreditation trends.