Credit: Pixabay/CC0 Public Domain
Chatbots are getting better at holding conversations, but can they offer meaningful support in a therapy setting? A new study by USC researchers suggests that large language models (LLMs) such as ChatGPT still fall short when it comes to the nuances of human connection.
That is the conclusion of research co-led by USC computer science Ph.D. students Mina Kian and Kaleen Shrestha, under the guidance of pioneering roboticist Professor Maja Matarić at USC's Interaction Lab.
Presented at the North American Chapter of the Association for Computational Linguistics (NAACL 2025) conference, their study found that LLMs continue to lag behind humans in producing high-quality therapeutic responses.
LLMs, the study found, perform worse at linguistic "entrainment," or responsive communication between interacting people, than expert and even non-expert humans. Entrainment is a key concept that therapists use to build rapport with their clients, which in turn has been found to improve positive therapeutic outcomes.
In addition, seven other USC computer science researchers contributed to the study, along with Katrin Fischer, a Ph.D. student from the Annenberg School for Communication and Journalism.
Support, not substitution
LLMs are increasingly being proposed for use in mental health care, though they are not currently widely used in clinical cognitive behavioral therapy (CBT). And some studies have flagged significant risks, including racial and gender bias.
“We’re seeing a concerning narrative that LLMs could replace therapists,” says Kian. “Therapists go through years of schooling and clinical training to prepare for their client-facing role, and I find it highly concerning to suggest that LLM technology could just replace them.”
Kian's own research focuses on socially assistive robots (SARs) in mental health care, not to replace therapists, but to support and extend their reach.
The team's study, "Using Linguistic Entrainment to Evaluate Large Language Models for Use in Cognitive Behavioral Therapy," explored how well a leading LLM (ChatGPT 3.5-turbo) performed in CBT-style homework exercises.
Participants, 26 university students, logged into a chat-based platform powered by the LLM. They chose between cognitive restructuring and coping strategy exercises, which guided them through prompts to help process and manage stress.
The researchers analyzed transcripts of these interactions and found that stronger linguistic entrainment was associated with greater self-disclosure and engagement, markers of more effective therapeutic support. But in comparisons with human therapists and Reddit-based peer supporters, the LLM consistently showed lower levels of entrainment.
“There is a growing research effort in the natural language processing (NLP) community of careful validation of large language models in diverse sensitive domains,” says Shrestha. “We have gone past just pursuing human-like language generation as these technologies become more influential in everyone’s lives. Specific population case studies like this should be encouraged and shared as we navigate the complexities of large pretrained LLMs.”
Kian and her colleagues say that while LLMs might help guide at-home exercises, they are no replacement for human clinicians.
“I would like to see more work assessing the performance of LLMs in therapeutic applications, looking into therapy styles beyond CBT, perhaps considering their use in motivational interviewing or DBT (Dialectical Behavior Therapy),” Kian says. “I would also like to see them evaluated with respect to other important therapeutic measures.”
Kian plans to continue her research on SAR-guided CBT homework exercises, evaluating whether SARs can assist individuals with generalized anxiety disorder. "I hope that this research can eventually be used to expand the at-home care technology available to therapists," she says.
More information:
Using Linguistic Entrainment to Evaluate Large Language Models for Use in Cognitive Behavioral Therapy. aclanthology.org/2025.findings-naacl.430.pdf
Provided by
University of Southern California
Citation:
Can AI be your therapist? Not quite yet, says new study (2025, July 9)
retrieved 9 July 2025
from https://medicalxpress.com/news/2025-07-ai-therapist.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.

