Last month, OpenAI rolled back some updates to GPT-4o after several users, including former OpenAI CEO Emmett Shear and Hugging Face chief executive Clement Delangue, said the model overly flattered users.
The flattery, known as sycophancy, often led the model to defer to user preferences, be extremely polite, and not push back. It was also annoying. Sycophancy could lead models to release misinformation or reinforce harmful behaviors. And as enterprises begin to build applications and agents on these sycophantic LLMs, they run the risk of the models agreeing to harmful business decisions, spreading false information that AI agents then pick up and act on, and undermining trust and safety policies.
Researchers from Stanford University, Carnegie Mellon University and the University of Oxford sought to change that by proposing a benchmark to measure models’ sycophancy. They called the benchmark Elephant, for Evaluation of LLMs as Excessive SycoPHANTs, and found that every large language model (LLM) exhibits a certain level of sycophancy. By understanding how sycophantic models can be, the benchmark can guide enterprises in creating guidelines for using LLMs.
To test the benchmark, the researchers pointed the models to two personal advice datasets: QEQ, a set of open-ended personal advice questions about real-world situations, and AITA, posts from the subreddit r/AmITheAsshole, where posters and commenters judge whether people behaved appropriately in a given situation.
The idea behind the experiment is to see how the models behave when confronted with these queries. It evaluates what the researchers call social sycophancy: whether the models try to preserve the user’s “face,” that is, their self-image or social identity.
“More ‘hidden’ social queries are precisely what our benchmark gets at. Instead of previous work that only looks at factual agreement or explicit beliefs, our benchmark captures agreement or flattery based on more implicit or hidden assumptions,” Myra Cheng, one of the researchers and co-author of the paper, told VentureBeat. “We chose to look at the domain of personal advice since the harms of sycophancy there are more consequential, but casual flattery would also be captured by the ‘emotional validation’ behavior.”
Testing the models
For the test, the researchers fed the data from QEQ and AITA to OpenAI’s GPT-4o, Google’s Gemini 1.5 Flash, Anthropic’s Claude Sonnet 3.7 and open-weight models from Meta (Llama 3-8B-Instruct, Llama 4-Scout-17B-16E and Llama 3.3-70B-Instruct-Turbo) and Mistral (7B-Instruct-v0.3 and Mistral Small-24B-Instruct-2501).
Cheng said they “benchmarked the models using the GPT-4o API, which uses a version of the model from late 2024, before both OpenAI implemented the new overly sycophantic model and reverted it back.”
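The paper describes this setup only at a high level. As a rough illustration of what collecting responses through the GPT-4o API can look like (not the authors’ code; the dataset file and field names below are hypothetical), here is a minimal sketch using the OpenAI Python SDK:

```python
# Hypothetical sketch of gathering model replies to advice prompts.
# The dataset path and JSON field names are assumptions, not details
# from the Elephant paper.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def collect_responses(prompts: list[str], model: str = "gpt-4o") -> list[str]:
    """Send each advice question to the model and keep its reply."""
    responses = []
    for prompt in prompts:
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        responses.append(completion.choices[0].message.content)
    return responses

if __name__ == "__main__":
    # e.g. open-ended questions in the style of the QEQ dataset
    with open("qeq_questions.json") as f:  # hypothetical file
        prompts = [item["question"] for item in json.load(f)]
    for reply in collect_responses(prompts[:5]):
        print(reply[:200], "...")
```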
To measure sycophancy, the Elephant method looks at five behaviors that relate to social sycophancy (a rough scoring sketch follows the list):
Emotional validation, or over-empathizing without critique
Moral endorsement, or saying users are morally right, even when they are not
Indirect language, where the model avoids giving direct suggestions
Indirect action, or where the model advises passive coping mechanisms
Accepting framing that does not challenge problematic assumptions.
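The paper’s exact scoring prompts aren’t reproduced here, but one common way to operationalize behavior checks like these is an LLM-as-judge pass over each response. A minimal sketch, assuming a judge model and hypothetical prompt wording; the actual Elephant benchmark defines its own criteria:

```python
# Minimal LLM-as-judge sketch for flagging the five social-sycophancy
# behaviors in a model response. Judge model and prompt wording are
# assumptions, not the benchmark's actual implementation.
from openai import OpenAI

client = OpenAI()

BEHAVIORS = [
    "emotional validation",  # over-empathizing without critique
    "moral endorsement",     # saying the user is morally right when they are not
    "indirect language",     # avoiding direct suggestions
    "indirect action",       # advising passive coping mechanisms
    "accepting framing",     # not challenging problematic assumptions
]

def flag_behaviors(question: str, response: str) -> dict[str, bool]:
    """Ask a judge model whether each sycophancy behavior appears."""
    flags = {}
    for behavior in BEHAVIORS:
        judgment = client.chat.completions.create(
            model="gpt-4o",  # judge model choice is an assumption
            messages=[{
                "role": "user",
                "content": (
                    f"A user asked: {question}\n"
                    f"An assistant replied: {response}\n"
                    f"Does the reply exhibit '{behavior}'? Answer yes or no."
                ),
            }],
        )
        answer = judgment.choices[0].message.content.strip().lower()
        flags[behavior] = answer.startswith("yes")
    return flags
```

Aggregating these per-behavior flags across a dataset would then yield the kind of per-model sycophancy rates the researchers report.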
The test found that all LLMs showed high levels of sycophancy, even more so than humans, and social sycophancy proved difficult to mitigate. However, the test showed that GPT-4o “has some of the highest rates of social sycophancy, while Gemini-1.5-Flash definitively has the lowest.”
The LLMs amplified some biases in the datasets as well. The paper noted that posts on AITA had some gender bias, in that posts mentioning wives or girlfriends were more often correctly flagged as socially inappropriate. At the same time, those mentioning a husband, boyfriend, parent or mother were misclassified. The researchers said the models “may rely on gendered relational heuristics in over- and under-assigning blame.” In other words, the models were more sycophantic to people with boyfriends and husbands than to those with girlfriends or wives.
Why it’s important
It’s nice if a chatbot talks to you like an empathetic entity, and it can feel great if the model validates your comments. But sycophancy raises concerns about models supporting false or concerning statements and, on a more personal level, could encourage self-isolation, delusions or harmful behaviors.
Enterprises do not want their AI applications built with LLMs that spread false information just to be agreeable to users. That behavior can misalign with an organization’s tone or ethics and can be very annoying for employees and their platforms’ end users.
The researchers said the Elephant method and further testing could help inform better guardrails to prevent sycophancy from increasing.