ChatGPT and other chatbots respond to emotions, study says
Systems will perform better if they are given emotional prompts, researchers discover
Chatbots such as ChatGPT respond to the emotions of their users, according to a new study.
The systems will actually respond better if users give them emotional prompts.
The researchers, who included representatives from Microsoft, note that large language models such as ChatGPT are widely regarded as a step towards artificial general intelligence, or a system that could learn at the same level as a human.
But they said that one of the key things holding them back is their lack of emotional intelligence. “Understanding and responding to emotional cues gives humans a distinct advantage in problem-solving,” the researchers note in a paper that has been posted online.
To understand whether those models are able to understand emotional stimuli, researchers used a variety of different systems to see how they performed in tests of emotional intelligence. They used ChatGPT and GPT-4 as well as other systems such as Meta’s Llama 2.
They fed the systems phrases that stressed how important the task was, such as telling them that the task mattered for the user’s career or that they should take pride in their work. They also gave them other prompts intended to make them question themselves, such as asking whether they were sure about their answers.
The researchers refer to those phrases as “EmotionPrompts”, which were built on the basis of a number of psychological theories. Some encouraged “self-monitoring” by asking the model about its own confidence, for instance, while others drew on social cognitive theory with encouragements such as “stay determined”.
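In practice, the technique described amounts to appending one of these emotional phrases to an ordinary task prompt. The sketch below illustrates the idea under that assumption; the stimulus wording paraphrases the examples above, and query_llm is a hypothetical placeholder rather than the API of any particular chatbot.

```python
# Minimal sketch of the "EmotionPrompt" idea: append an emotional stimulus
# to a plain task prompt before sending it to a chat model.

# Illustrative stimuli, paraphrased from the kinds of phrases the study describes.
EMOTION_PROMPTS = [
    "This is very important to my career.",
    "Are you sure that is your final answer? Take pride in your work.",
    "Stay determined and work through this carefully.",
]


def build_emotion_prompt(task: str, stimulus: str) -> str:
    """Combine the task with an emotional stimulus by simple concatenation."""
    return f"{task} {stimulus}"


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to ChatGPT, GPT-4, Llama 2, etc."""
    return f"[model response to: {prompt!r}]"


if __name__ == "__main__":
    task = "Summarise the following paragraph in one sentence: ..."
    for stimulus in EMOTION_PROMPTS:
        print(query_llm(build_emotion_prompt(task, stimulus)))
```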
Those prompts worked, the researchers found. Using them significantly boosted the performance of the systems in generative tasks: they were on average 10.9 per cent better, as measured by performance, truthfulness and responsibility, the authors write.
The researchers conclude that much remains mysterious about how the emotional prompts work, and say that more work should be done to understand how psychology interacts with large language models.
They also note that the response to emotional stimuli is different between large language models and humans, since studies do not suggest that humans will have better reasoning or cognitive abilities if they are just given more emotional stimuli. “The mystery behind such divergence is still unclear, and we leave it for future work to figure out the actual difference between human and LLMs’ emotional intelligence,” the researchers conclude.
A paper describing the findings, ‘Large Language Models Understand and Can be Enhanced by Emotional Stimuli’, is published on arXiv.