
Artificial intelligence, as an emergent technology, is poised to transform healthcare in a number of ways. Expectations for the technology are high, and in some cases it can deliver: it has shown promise in crunching large amounts of data and informing clinical decisions, and while it's not a magical solution, more and more applications for AI are starting to emerge.

But because those applications are so disparate and wide-ranging, it can be a complex landscape for healthcare leaders, who want to remain at the forefront of technological progress while applying the technology in an ethical manner.

Currently, AI offers two paths: automation and augmentation.

Nicholas Luthy, chief product officer at healthcare AI company Medsitter, said that the evolution of AI, like that of all software, goes through stages. In AI's case, it starts with the processing of text into structured data and predictable information. AI consumes information and needs to learn from it, but it can't determine what's right or wrong on its own – so a human being needs to keep their hands on the wheel.

“As you move up the complexity of the data set, it becomes much more challenging, and daunting, to produce predictions that are accurate,” said Luthy. “We really focus on trying to ask domain-specific questions. As we build our AI model we think about what kinds of answers we want, so we can ask it the right questions.”

There are positives and negatives to both automation and augmentation. Automation, said Luthy, means streamlining repetitive administrative tasks, helping people do more in less time. Thanks to AI-centered technology such as natural language processing, doctors are spending less time handling data and more time tending to their patients.

Augmentation, meanwhile, is about enhancing the clinical team's ability to achieve better outcomes for patients, for example through higher accuracy in areas such as medical imaging and brain scan analysis.

PROS AND CONS

Many healthcare leaders have been keen to adopt AI because of the perceived benefits – according to Medical Economics, one of the most significant of those benefits is improved diagnostic speed and accuracy, which can make it easier for providers to diagnose and treat diseases. Using AI to analyze X-rays, MRI scans and other medical images, for example, can identify patterns and anomalies that a human might miss.

AI algorithms can also provide real-time data and recommendations, helping providers respond quickly to potential emergencies, and they can assist in managing chronic conditions.

The technology also has a potential role to play in increasing access to care, such as in the case of telehealth, which – with an AI boost – can support remote consultations and diagnoses, reducing the need for patients to travel.

However, Medical Economics pointed out that there are potential risks, particularly when it comes to security and privacy. One of the biggest is the potential for data breaches, since large quantities of patient data are attractive targets for cybercriminals. Other attacks unique to AI include data poisoning – in which a bad actor inserts corrupted data into a training set to skew the model's output – and model extraction – in which an adversary gathers enough information about the algorithm to create a substitute model.

Already, AI has been used maliciously, typically to spread propaganda or target certain populations with scams. ChatGPT, a natural language chatbot, has been used to write phishing emails, for example, according to Medical Economics.

Mitigating these risks usually entails traditional methods of securing patient data, including risk analyses and clear policies for how data is collected and used, so that privacy tenets are not violated.

According to Statista, the healthcare AI market, valued at $11 billion in 2021, is projected to be worth $187 billion in 2030. Better machine learning algorithms, more access to data, cheaper hardware and 5G connection speeds are all contributing to the increased application of AI within healthcare.

Signs point to AI being the future of healthcare. But to implement it properly, healthcare leaders will need to take an ethical approach.

ETHICS

“With everything, there’s a balance,” said Luthy. “With great power comes great responsibility. That really sets the tone. We’ve got a technology that’s outpacing the common understanding of regulation. It’s outpacing or even challenging data laws. You’ve got an industry focused around patient privacy and protecting information, and then you have AI, which is one of the most data-hungry, data-demanding technologies in existence, and it wants to consume all that data. How do you do that without violating those privacy tenets?

“Some of that relies on organizations to have good ethical standards,” he said. “There probably needs to be regulation, some clearer rules.”

Government regulation is the likeliest path to a set of uniform ethical standards. It can be supplemented by, for example, competitors coming together to agree on an ethical framework, which in an ideal scenario would prohibit the unnecessary exposure of patient data.

A sensible approach to this burgeoning technology, said Luthy, is to use AI to build trust in the human elements of the healthcare system. The technology can be used, for instance, to verify that human observers are actually observing, which can be achieved through AI-assisted facial recognition.

On the flip side of that is the patient room. Leaders in healthcare technology are currently working out how to teach AI to detect a patient without showing it images of the patient's face, which is protected health information.

There are ways to tackle that challenge with positive results, but organizations have to commit to not crossing that ethical line, said Luthy.

He remains bullish on the concept of AI in healthcare.

“Good, appropriate, well-thought-through ethical technology can lead to a better society,” he said. “Hospitals or providers who claim to have AI should dig in and understand where all these AI companies exist on the spectrum. Is it truly driving patient outcomes? And how purposeful is the technology? It’s fulfilling to be able to do these things.”
Twitter: @JELagasse
Email the writer: Jeff.Lagasse@himssmedia.com
