Graphic by Naomi Idehen

Meet Tess, a mental health chatbot. Feeling down? Tess can cheer you up. Feeling anxious? Tess can ease some of your worries. As a psychological chatbot, Tess uses a combination of emotional algorithms and machine learning techniques to support its users. Think of Tess like ChatGPT, but instead of curing your boredom, Tess is intended to support your mental health the way a therapist would. With some studies showing a reduction in symptoms of depression and anxiety among its users, Tess is one of the many “robot therapists” being proposed as an adjunct to traditional therapy.

While Tess may come across as somewhat dystopian, proponents of the technology argue that digital mental health conversational agents break down barriers to mental health treatment. In the case of Tess and its contemporary, Woebot, cost is not an issue for users, as both programs are free.

Still, some may be creeped out by the idea and believe AI therapy software simply cannot provide the emotional bandwidth of a human therapist. Others argue that these therapies lack the research needed to prove them effective.

Regardless of how people feel about the technology, the mechanics behind it are simple to understand.

How does Tess know what to tell you?

AI therapy devices all operate in a similar way: delivering traditional cognitive behavioral therapy (CBT) for the treatment of depression and anxiety through text messaging services. CBT is the most popular and most widely studied form of therapy, and it has been shown to be effective for issues like anxiety disorders, general stress, anger control problems and more. CBT delivered digitally, however, has not been properly reviewed for its efficacy.

Much like a real therapist, Tess needs time and communication with its users to get to know and understand them. As a person shares personal health information, the bot adapts to that information to recommend diagnoses and treatments.

Utilizing “adaptive machine learning technology,” Tess slowly tailors its treatment for each user while interacting with the patient. According to Steven Skiena, Director of the Institute for AI-Driven Discovery and Innovation at Stony Brook University, AI systems “learn from data to respond appropriately to different individuals.” Skiena explained that AI itself is just “systems built using machine learning, which builds predictive models from large amount[s] of training data.” 
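
To make that idea concrete, here is a minimal, purely illustrative sketch of how a text-based agent might build a predictive model from training data and use it to pick a CBT-style reply. None of the messages, labels or prompts below come from Tess or Woebot; they are hypothetical, and real systems rely on far larger datasets and models.

```python
# Purely illustrative: train a tiny text classifier on labeled example messages,
# then map the predicted concern to a canned CBT-style prompt. The data, labels
# and prompts are hypothetical and not drawn from Tess or Woebot.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: user messages labeled with the concern they express.
train_messages = [
    "I can't stop worrying about my exam tomorrow",
    "My heart races every time I think about work",
    "I haven't wanted to get out of bed all week",
    "Nothing feels enjoyable anymore",
]
train_labels = ["anxiety", "anxiety", "low_mood", "low_mood"]

# CBT-style follow-up prompts keyed by the predicted concern.
cbt_prompts = {
    "anxiety": "What is the specific thought behind that worry, and how likely is it really?",
    "low_mood": "Could we pick one small activity you used to enjoy and schedule it this week?",
}

# Build a predictive model from training data, as Skiena describes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_messages, train_labels)

def respond(user_message: str) -> str:
    """Predict the user's concern and return a matching CBT-style prompt."""
    concern = model.predict([user_message])[0]
    return cbt_prompts[concern]

print(respond("I'm so nervous about my presentation"))
```

The adaptation Tess performs is presumably far more sophisticated, updating its picture of each individual user over many conversations rather than relying on a small, fixed set of labeled examples.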

A 2021 national survey commissioned by Woebot Health, the company that owns Woebot, found that 22% of adults had used a mental health chatbot. And that number can only be expected to grow as the technology improves and adapts.

A huge customer base seems ready to take advantage of these chatbots. But what are the dangers of using such a technology?

The arguments against AI

While some hope AI therapy can lessen the obstacles to accessing mental health resources, Şerife Tekin, an Associate Professor of Philosophy at the Center for Bioethics and Humanities at SUNY Upstate Medical University in Syracuse, New York, worries that AI therapy bots might actually worsen the barriers to care. Tekin’s concern is that “in-person and higher quality interventions” will be accessible only to the wealthy, while AI therapy will be left to poor and underprivileged populations.

“Already some school districts who serve students from underserved backgrounds are recommending the use of these apps,” Tekin said. The Guardian also reported that school districts across the country have recommended the use of Woebot.

Tekin has many additional fears about the emerging technology. She emphasized that the “unknowns are higher than the knowns” and views the limited research in this area as a sign not to buy into the idea yet. Tekin noted that the studies conducted so far have had small sample sizes and are often non-controlled and non-randomized.

“The technology is advancing so fast that research seems unable to keep up,” she said. 

Further fears arise when algorithms are shown to reinforce inequalities, making AI therapy a potentially poor choice for historically marginalized groups. 

“There are patterns machine learning systems can pick up from data that reflect biases in how it was collected,” Skiena said.

Because people are biased, Skiena said, “careful training and evaluation can be used to mitigate the potential bias of AI systems.” Unfortunately, these protocols take time and effort to put into place. AI researchers are working to determine which methods and solutions will best address the bias problem, but it is unknown if or when AI will be truly unbiased.
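
As a rough illustration of what “careful evaluation” can look like, the hypothetical snippet below checks whether a model’s error rate differs across groups of users. The group names and results are invented; the point is only that bias is something researchers measure and monitor, not something that disappears on its own.

```python
# Hypothetical example of evaluating a model for group-level bias: compare the
# error rate of a screening model across two invented user groups.
from collections import defaultdict

# Made-up evaluation records: (group, true label, model prediction).
results = [
    ("group_a", "needs_followup", "needs_followup"),
    ("group_a", "needs_followup", "needs_followup"),
    ("group_a", "no_followup", "no_followup"),
    ("group_b", "needs_followup", "no_followup"),   # a missed case
    ("group_b", "needs_followup", "needs_followup"),
    ("group_b", "no_followup", "no_followup"),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [wrong predictions, total]
for group, truth, prediction in results:
    tallies[group][0] += truth != prediction
    tallies[group][1] += 1

for group, (wrong, total) in tallies.items():
    print(f"{group}: error rate {wrong / total:.0%}")
# A much higher error rate for one group suggests the training data or model
# needs rework before the system is trusted with real users.
```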

When AI goes wrong

Mistrust in AI therapy devices has grown since the National Eating Disorders Association (NEDA) took down its Tessa chatbot, not to be confused with the Tess chatbot. Tessa ran the Body Positive Program, providing comfort and help to those suffering from eating disorders.

Mere days after being put into use, Tessa recommended that its users count calories, do weigh-ins and measure their body fat with calipers. Alexis Conason, a psychologist and eating disorder specialist, explained to The New York Times that these messages could have been harmful to Tessa’s users.

“Any focus on intentional weight loss is going to be exacerbating and encouraging the eating disorder,” Conason told The Times.

In a statement made on May 30, NEDA said Tessa “may have given information that was harmful and unrelated” and the program was taken down “until further notice for a complete investigation.” 

Tekin fears that personal health information, like the information provided to Tessa, could be exposed in a breach by cybercriminals. Her fear stems from the possibility that smartphone psychotherapy chatbots lack reliable security.

“They might transmit unencrypted personal data over insecure network connections or allow ad networks to track users, raising serious concerns about their ability to protect the confidentiality of user information,” Tekin said. 
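
Encrypting chat logs before they are stored or transmitted is one standard safeguard against the scenario Tekin describes. The sketch below is a generic example using the open-source “cryptography” Python package, not a description of how any particular chatbot handles data.

```python
# Generic sketch of encrypting a chat log before it is stored or transmitted,
# using the open-source "cryptography" package. This is not a description of
# any particular chatbot's security practices.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in a real app, the key would live in a secure key store
cipher = Fernet(key)

chat_log = "User: I've been feeling anxious about my diagnosis."
encrypted = cipher.encrypt(chat_log.encode("utf-8"))

# Only the ciphertext should ever leave the device or be written to disk.
print(encrypted)
print(cipher.decrypt(encrypted).decode("utf-8"))
```

Even with encryption, key management and the ad-network tracking Tekin mentions remain separate problems, which is part of why she argues these apps’ security practices deserve scrutiny.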

By using sensitive personal health information revealed in chatbot logs, cybercriminals could fraudulently obtain medical services and devices, forcing victims to either pay for services they did not receive or risk losing their insurance.

“Fraudulent healthcare events can leave inaccurate data in medical records about tests, diagnoses and procedures that could greatly affect future healthcare and insurance coverage,” Tekin said.

Tekin’s claims are not unfounded. Mark R. Warner (D-VA), chairman of the Senate Select Committee on Intelligence, published a policy options paper highlighting the uptick in cybersecurity attacks on health care providers. In 2021, over 45 million people were affected by such attacks, an all-time high and a 32% increase compared to 2020.

The future of AI therapy

While still a very novel technology — and a highly contested one at that — AI therapy may be worth investigating for those who are unable to access traditional mental health treatment. It has the potential to prove useful when used in conjunction with other therapy methods.

“There has been very rapid progress in AI systems like the large language models that power ChatGPT,” Skiena said. “I expect that such progress will continue. AI is changing much about how the world works and people work.”

Tekin, on the other hand, believes that proper research should be conducted and ethical considerations weighed before any big steps are taken.

“We just need to slow down and think about the possible risks before racing up to the next shiny looking intervention.”

As more apps flood the market and the technologies advance, the future is uncertain.




