René Quashie is the first Vice President of Policy & Regulatory Affairs for Digital Health at the Consumer Technology Association (CTA). Prior to CTA, Quashie spent two decades in private law practice at several national firms, focusing his work on digital health and privacy. He earned his law degree from George Washington University.

Artificial intelligence is poised to revolutionize the way we diagnose and treat illness and disease.

If that sounds like a bold claim, it shouldn’t. Already, AI tools are helping to improve early-stage cancer detection, reduce errors in medication dosing and provide robotic assistance in common surgeries. In some medical contexts, AI systems even outperform experienced doctors. 

Two recent studies highlight the future potential of AI as a healthcare tool. In one, researchers from Boston’s Mass General Brigham found that ChatGPT could correctly diagnose more than 70% of the patients in the case studies used to train today’s medical students. In another, a machine learning model successfully identified the severity of illness among older adults in ICUs across more than 170 hospitals.

These advances deserve celebration. They promise to improve outcomes for millions of Americans and to ease the burden on healthcare professionals, who overwhelmingly report overwork and burnout.

But, as AI plays an increasingly integral role in our health systems, we must ensure that it serves all Americans. Inequities and discrimination already exist in American healthcare systems, and AI technologies can both reflect and magnify existing inequity. As just one example, a study from 2019 found that a healthcare risk-prediction algorithm used by major insurers systematically underestimated the health risks of Black patients. Because the algorithm used past healthcare spending as a proxy for health needs, and less is historically spent on Black patients, it scored them as healthier than equally sick white patients.
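To make that failure mode concrete, here is a minimal sketch of the kind of check an auditor might run, assuming a pandas table of model outputs. The file and column names (chronic_conditions, predicted_risk, race) are hypothetical placeholders, not the study’s actual method:

# A minimal, illustrative audit: do equally sick patients receive
# equal risk scores across groups? All names are hypothetical.
import pandas as pd

df = pd.read_csv("risk_scores.csv")  # hypothetical export of model outputs

# Bucket patients by an independent measure of how sick they are, then
# compare average predicted risk within each bucket, by race.
df["need_bucket"] = pd.qcut(df["chronic_conditions"], q=4, duplicates="drop")
audit = df.groupby(["need_bucket", "race"], observed=True)["predicted_risk"].mean()
print(audit.unstack())
# If patients with the same burden of illness systematically receive
# lower scores in one group, the score is acting as a biased proxy.

The design point is simple: the audit conditions on actual health need rather than on spending, which is the comparison that exposed the 2019 algorithm’s bias.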

Fundamentally, that algorithm was biased. It’s a word we use more often for people than for systems, but as AI plays a growing role in our everyday lives, efforts to mitigate bias become increasingly urgent. These efforts, undertaken by researchers, developers and users, can help address the complex and nuanced problems that arise when data creation, collection and analysis collide with real-world prejudices and disparities. While eliminating bias entirely may not be possible, working to reduce its impact can help ensure that AI tools deliver on their promise for all of us.

What does that mean? First, we have to start with the right data. AI developers should ensure that training data is representative, that it includes key demographic elements such as race, gender, socioeconomic background and disability, and that data gaps are identified and used to inform a model’s stated limits. Representation should extend beyond the data as well. Development teams should include a variety of perspectives and backgrounds and incorporate cultural sensitivity, ethics and accountability into their decision-making processes.
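In practice, a representation check can be as simple as a few lines of Python run before training. The sketch below assumes a tabular dataset with hypothetical column names, and the 5% threshold is purely illustrative; real thresholds belong to the development team:

# A minimal pre-training representation check (all names hypothetical).
import pandas as pd

df = pd.read_csv("training_data.csv")

for col in ["race", "gender", "socioeconomic_band", "disability"]:
    shares = df[col].value_counts(normalize=True, dropna=False)
    print(f"\n{col} distribution:\n{shares.round(3)}")
    # Flag thin subgroups so data gaps can be documented as model limits.
    thin = shares[shares < 0.05]
    if not thin.empty:
        print(f"  Under-represented in '{col}': {list(thin.index)}")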

Bias evaluation isn’t a one-time, check-the-box exercise. That’s especially true in healthcare, where AI system recommendations can inform the life-altering, high-stakes decisions made by healthcare professionals. Audits for bias should take place both during development and as the systems are rolled out and implemented in clinical settings. As the technology develops, we should also be open to reassessing initial assumptions and retraining algorithms so they meet their intended goals, recognizing that at times there may be a tradeoff between accuracy and fairness.
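One way to make such audits recurring rather than one-time is to script them against logged predictions. Here is a minimal sketch, assuming a log with hypothetical group, y_true and y_pred columns; the choice of metric would be up to the clinical team:

# A recurring bias audit over logged predictions (names hypothetical).
import pandas as pd
from sklearn.metrics import recall_score

df = pd.read_csv("audit_log.csv")

for group, sub in df.groupby("group"):
    # In many clinical settings a missed diagnosis (false negative) is
    # the costliest error; the false-negative rate is 1 minus recall.
    fnr = 1 - recall_score(sub["y_true"], sub["y_pred"])
    print(f"{group}: false-negative rate = {fnr:.2%}")

A false-negative gap that widens after deployment is a concrete signal to revisit assumptions and re-train, weighing any accuracy-versus-fairness tradeoff explicitly.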

Americans are optimistic about the possibilities offered by AI, but they’re also anxious. In a recent CTA survey, just under half of U.S. adults familiar with AI technologies said they would feel comfortable having AI make a medical diagnosis for them. Even fewer are comfortable with AI performing surgery: just 29% report being open to the idea. Changing that, and opening the door to AI’s lifesaving potential, means embracing transparency, whether that means publishing datasets and source code or simply disclosing to patients the role AI plays in their evaluation and treatment.

We should embrace efforts by researchers, developers and hospitals to put guardrails around AI in healthcare that protect patients while unlocking innovation. That includes the development of standards and best practices at both the national and international levels. It’s also important to look beyond the doom-and-gloom headlines and recognize that, used correctly, AI can make healthcare more equitable. Traditionally, medical research relied on highly homogeneous patient groups: white, male and European. When AI is trained on good data that incorporates health information from a diverse range of patients, it can move us toward a better understanding of how treatments and interventions work for broader populations.
