Photo: Laurence Dutton/Getty Images

The American Hospital Association has said it supports a careful review of artificial intelligence regulations in healthcare, but favors a “sector-specific” approach similar to the one already applied to health-focused software applications.

In a letter to Sen. Bill Cassidy, M.D., of the Senate Committee on Health, Education, Labor and Pensions, the AHA said that AI innovation is critical both to mitigating patient risk and to allowing hospitals and clinicians to harness the potential of these technologies.

A sector-specific approach, the association contended, would allow oversight bodies to tailor their regulations to the particular risks posed by each use of the software.

“AI is not a monolithic technology, and a one-size-fits-all approach could stifle innovation in patient care and hospital operations and could prove inadequate at addressing the risks to safety and privacy that are unique to healthcare,” the group wrote. “Just as software is regulated based on its use in different sectors, the AHA urges Congress to consider regulating AI use in a similar manner.”

WHAT’S THE IMPACT?

Existing frameworks, according to the AHA, provide a solid foundation for this approach, having been tested and refined over time. Citing efficiency, it recommended adapting these frameworks to accommodate the idiosyncrasies inherent to AI.

Still, in the AHA’s view, those frameworks as they stand fall short, given the novel challenges posed by healthcare AI.

“AI systems that provide diagnosis, prognosis or specific treatment recommendations for patients could offer significant positive impacts on health outcomes and quality of life,” the group wrote. “However, these systems may also raise ethical, legal and social issues such as privacy, accountability, transparency, bias and consent that are not addressed in the existing medical device framework, including even the Food and Drug Administration’s more recent Software as Medical Device guidance.”

On top of that, AI systems that generate or analyze health data may create new opportunities and threats for data security, ownership and governance, the AHA said. While these systems can enable personalized care, they can also potentially expose sensitive patient information, making it ripe for possible misuse.

In a white paper on AI regulation published this month, Cassidy recognized the potential of AI to streamline healthcare administration, but raised questions about how patient information would be used and how strong privacy protections would actually be.

He said leveraging individual health data is critical to delivering certain care outcomes, “but Congress must ensure that AI tools are not used to deny patients access to care or use patient information for purposes that a patient has not given consent for.”

THE LARGER TREND

Earlier this year, the World Health Organization called for caution in using artificial intelligence-generated large language model (LLM) tools such as ChatGPT, Bard, BERT and others that imitate human understanding, processing and communication.

LLMs’ increasing use for health-related purposes raises concerns for patient safety, WHO said. The precipitous adoption of untested systems could lead to errors by healthcare workers and cause harm to patients.

WHO proposed that these concerns be addressed, and that clear evidence of benefit be measured, before LLMs find widespread use in routine healthcare and medicine – whether by individuals, care providers or health system administrators and policymakers.

Twitter: @JELagasse
Email the writer: Jeff.Lagasse@himssmedia.com
