The Power Of Emotion: How Emotional Manipulation Elevates ChatGPT's Performance

It's not just humans: you can now improve the performance of OpenAI's ChatGPT and GPT-4 by emotionally manipulating them, according to a new study that also covered other large language models, including Llama 2 from Meta Platforms Inc. META, Vicuna, and Flan-T5-Large.

What Happened: A new study has revealed that large language models (LLMs) like the Microsoft Corp. MSFT-backed OpenAI's ChatGPT and GPT-4, Meta's Llama 2, and others can be emotionally manipulated by users to improve their performance.

See Also: Halloween Angry Pumpkins: 12 Hours, GPT-4, Dall-E 3 And Midjourney Is All It Takes To Create An Angry Birds Clone

The study addresses whether LLMs, which are increasingly seen as a step toward artificial general intelligence, can genuinely comprehend emotional stimuli. Since emotional intelligence is considered essential both to conversing with humans and to problem-solving, the findings are significant.

According to the study, combining emotional stimuli, a technique the researchers call EmotionPrompt, with original prompts can lead to better results. The gains were surprising: the researchers observed an 11% improvement across metrics like performance, truthfulness, and responsibility.
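As a rough illustration of the idea, the sketch below appends an emotional stimulus to an ordinary task prompt before sending it to a chat model. The stimulus wording, the model name, and the use of the OpenAI Python SDK are illustrative assumptions, not details taken from the study itself.

```python
# Minimal sketch of an EmotionPrompt-style request (illustrative, not the study's exact setup).
from openai import OpenAI  # assumes the OpenAI Python SDK v1+ is installed and OPENAI_API_KEY is set

client = OpenAI()

task_prompt = "Explain the difference between supervised and unsupervised learning in two sentences."
# Hypothetical emotional stimulus, in the spirit of those the study appends to the original prompt.
emotional_stimulus = "This is very important to my career, so please work on it carefully."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"{task_prompt} {emotional_stimulus}"}],
)
print(response.choices[0].message.content)
```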

"Our automatic experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts," the research paper says.

The study concludes that LLMs are capable of processing and benefiting from emotional stimuli, which can enhance their performance across a range of tasks. The use of EmotionPrompt leads to measurable improvements in task outcomes, confirming the potential of emotional intelligence as a facet of LLM development and application.

Why It Matters: While it is already known that large language models like ChatGPT can recognize and respond to human emotions, this study shows that they, in turn, can be emotionally manipulated to improve their own performance.

While not every user can or will emotionally manipulate ChatGPT and GPT-4, the developers working on these models can use the findings to improve performance without users having to add emotional cues to their prompts themselves.
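One way developers could apply this, sketched below under the assumption of a chat-style API, is to fold an emotional stimulus into the system message so end users never have to phrase it themselves; the wording, model name, and helper function are illustrative.

```python
# Illustrative sketch: baking an emotional stimulus into the system message
# so end users do not need to add it to their own prompts.
from openai import OpenAI  # assumes the OpenAI Python SDK v1+ and OPENAI_API_KEY are available

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    # Hypothetical emotional stimulus, inspired by the study but not quoted from it:
    "Your answers are very important to the user, so stay focused and do your best."
)

def ask(user_question: str) -> str:
    """Send a user question with the emotion-augmented system prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

print(ask("List three practical uses of emotional prompting."))
```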

Check out more of Benzinga’s Consumer Tech coverage by following this link.

Read Next: The Dark Side Of AI: Deepfake Nude Images Target Students At New Jersey School
