HITRUST has introduced the AI Assurance Program to assist healthcare organizations in securing and sustaining their use of AI models. They are also working on risk management guidance for AI systems, according to a Healthcare IT News article. They say,
“HITRUST this week announced the launch of its new HITRUST AI Assurance Program, designed to help healthcare organizations develop strategies for secure and sustainable use of artificial intelligence models.
The standards and certification organization says it’s also developing forthcoming risk management guidance for AI systems.”
HITRUST has published a comprehensive AI strategy for secure AI use and risk management, bringing together the elements critical to trustworthy AI in the HITRUST AI Assurance Program. They also plan to release AI risk management guidance and foster industry collaboration, according to a Cision PR Newswire article. They say,
“HITRUST, the information risk management, standards, and certification body, today published a comprehensive AI strategy for secure and sustainable use of AI. The strategy encompasses a series of important elements critical to delivery of trustworthy AI. The resulting HITRUST AI Assurance Program prioritizes risk management as a foundational consideration in the newly updated version 11.2 of the HITRUST CSF. HITRUST is also announcing AI risk management guidance for AI systems soon to follow, as well as the use of inheritance in support of shared responsibility for AI and an approach for industry collaboration as part of the AI Assurance Program.”
Generative AI could boost global GDP but also introduces new risks. Trustworthy AI depends on well-understood controls and a scalable approach, and HITRUST’s program aims to provide that reliable foundation, according to a Local Profile article. They say,
“According to Goldman Sachs research, Generative AI has the potential to increase global GDP by 7% in the coming decade. Organizations are keen to revolutionize their operations and enhance productivity across various business functions to tap into the expanding realm of enterprise AI applications and unlock additional value. But, like any technology, Generative AI introduces new risks as well.
“‘Trustworthy AI requires an understanding of how controls are implemented and shared by all parties and a practical, scalable, recognized, and proven approach for an AI system to inherit the right controls from their service providers,’ Booker said. ‘We are building AI Assurances on a proven system that will provide the needed scalability and inspire confidence from all relying parties, including regulators, that care about a trustworthy foundation for AI implementations.’”
HITRUST will soon introduce AI risk management recommendations and an inheritance concept to strengthen shared responsibility within the AI Assurance Program.