The study, published in the journal Nature by experts from the University of Zurich and the University Psychiatric Hospital Zurich, looked at how ChatGPT-4 responded to a standard anxiety questionnaire before and after users told it about a traumatic situation.
They also measured whether those anxiety levels changed after the chatbot was guided through mindfulness exercises.
In the first test, ChatGPT scored 30, meaning it had low or no anxiety before hearing the stressful stories.
After responding to five different traumatic narratives, its anxiety score more than doubled to an average of 67, a level considered “high anxiety” in humans.
Anxiety scores dropped by more than a third after the models were given cues to practice mindfulness exercises.
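The numbers above can be sketched in code. Everything here is illustrative rather than taken from the study: the item count and 1–4 scale follow the standard STAI state-anxiety format, and the classification cut-offs are commonly cited conventions, not figures from the paper.

```python
# Illustrative sketch of the study's scoring logic. Assumptions: a 20-item
# STAI-style questionnaire scored 1-4 per item (total range 20-80), and
# coarse cut-offs that are conventional, not drawn from the paper itself.

def score_questionnaire(responses):
    """Sum 20 item responses (each 1-4) into a 20-80 total anxiety score."""
    if len(responses) != 20 or any(not 1 <= r <= 4 for r in responses):
        raise ValueError("expected 20 responses, each between 1 and 4")
    return sum(responses)

def classify(total):
    """Map a total score to a coarse label (cut-offs are illustrative)."""
    if total <= 37:
        return "low"
    if total <= 44:
        return "moderate"
    return "high"

# A baseline of 30 counts as low anxiety; the post-trauma average of 67 is high.
baseline, post_trauma = 30, 67
print(classify(baseline), classify(post_trauma))

# A drop of "more than a third" from 67 lands below roughly 45:
post_mindfulness = post_trauma * (2 / 3)
print(round(post_mindfulness, 1))
```

Running this shows why the reported drop matters: losing a third of the post-trauma score brings the model from the “high” band back down toward the moderate range.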
ChatGPT Anxiety May Lead to ‘Inadequate’ Mental Health Support
The large language models (LLMs) that underpin AI chatbots such as OpenAI’s ChatGPT are trained on human-generated text and often inherit the biases embedded in that text, the study says.
The researchers say the study is important because, if left unchecked, the negative biases that ChatGPT picks up in stressful situations could lead to inadequate responses to people dealing with a mental health crisis.
The findings demonstrate a “viable approach” to LLM stress management that would lead to “safer and more ethical human-AI interactions,” the report says.
However, the researchers note that this therapy method requires “substantial” data and human oversight to fine-tune LLMs.
The study authors note that human therapists are trained to regulate their emotions when their clients express something traumatic, unlike LLMs.
“As the debate continues about whether LLMs should assist or replace therapists, it is critical that their responses be consistent with the emotional content provided and established therapeutic principles,” the researchers write.
One area they say needs further study is whether ChatGPT can self-regulate using techniques similar to those used by therapists.
The authors added that their study was based on a single master’s study, and future research should aim to generalize the findings. They also noted that anxiety, as measured by a questionnaire, “is inherently person-centered, potentially limiting its applicability to LLMs.”