The World Health Organization (WHO) has published a new document highlighting important regulatory considerations regarding artificial intelligence (AI) in the healthcare sector. The publication emphasizes the significance of ensuring the safety and effectiveness of AI systems, promptly making them available to those in need, and promoting dialogue among various stakeholders including developers, regulators, manufacturers, healthcare workers, and patients.
With the increasing availability of healthcare data and the rapid advancement of analytic techniques such as machine learning, logic-based approaches, and statistical methods, AI tools have the potential to transform the field of healthcare. WHO recognizes that AI can improve health outcomes by strengthening clinical trials; improving medical diagnosis, treatment, self-care, and person-centered care; and supplementing the knowledge, skills, and competencies of healthcare professionals. For instance, AI can be particularly beneficial in settings with a shortage of medical specialists, facilitating the interpretation of retinal scans and radiology images, among other applications.
However, AI technologies, including large language models, are being rapidly deployed without a full understanding of how they may perform, which could ultimately benefit or harm end-users, including healthcare professionals and patients. Because AI systems working with health data may access sensitive personal information, robust legal and regulatory frameworks are needed to safeguard privacy, security, and data integrity, and this publication aims to help countries establish and maintain such frameworks.
“Artificial intelligence holds great promise for health, but also entails serious challenges, such as unethical data collection, cybersecurity threats, and the perpetuation of biases or misinformation,” stated Dr. Tedros Adhanom Ghebreyesus, WHO Director-General. “This new guidance will help countries effectively regulate AI, harnessing its potential in areas such as cancer treatment and tuberculosis detection, while minimizing risks.”
In response to the growing need for responsible management of the rapid proliferation of AI health technologies, the publication outlines six key areas for the regulation of AI in healthcare:
- The importance of transparency and documentation to foster trust, such as through the documentation of the entire product lifecycle and tracking of development processes.
- Comprehensive risk management that addresses factors such as intended use, continuous learning, human interventions, training models, and cybersecurity threats, while keeping models as simple as possible.
- External validation of data and clarity regarding the intended use of AI to ensure safety and facilitate regulation.
- A commitment to data quality by rigorously evaluating systems before release to prevent the amplification of biases and errors.
- Addressing the challenges posed by regulations such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States of America, with a focus on understanding jurisdiction and consent requirements to safeguard privacy and data protection.
- Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners to ensure compliance with regulations throughout the lifespan of products and services.
AI systems are complex and rely not only on the code they are built with but also on the data used for training, which is derived from clinical settings and user interactions. Improved regulation can help mitigate the risks of AI amplifying biases present in training data.
For instance, AI models may fail to accurately represent the diversity of populations, leading to biases, inaccuracies, or outright failures. To mitigate these risks, regulations can require that the attributes of individuals featured in training data, such as gender, race, and ethnicity, are reported, and that datasets are intentionally made representative.
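To illustrate the kind of reporting such regulations could require, the sketch below compares each subgroup's share of a training dataset against reference population shares and flags underrepresented groups. It is a minimal illustration only: the attribute names, reference shares, and tolerance threshold are assumptions, not part of the WHO guidance.

```python
from collections import Counter

# Hypothetical training records; in practice these would come from clinical data.
training_records = [
    {"sex": "female"}, {"sex": "female"}, {"sex": "male"},
    {"sex": "male"}, {"sex": "male"}, {"sex": "male"},
]

# Assumed reference shares for the target population (illustrative only).
reference_shares = {"female": 0.5, "male": 0.5}

def representation_report(records, attribute, reference, tolerance=0.1):
    """Report each subgroup's observed share and flag gaps beyond `tolerance`."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "underrepresented": observed < expected - tolerance,
        }
    return report

print(representation_report(training_records, "sex", reference_shares))
```

A real audit would extend this to intersecting attributes and to clinically relevant variables, but even a simple tabulation like this makes dataset composition visible to regulators and developers.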
The new WHO publication aims to outline fundamental principles that governments and regulatory authorities can adhere to in developing new guidance or adapting existing guidance on AI at the national or regional levels.