After GPT-4o backlash, researchers benchmark models on moral endorsement and find sycophancy persists across the board

Last updated: May 23, 2025 2:51 am
By Editorial Board | Published May 23, 2025

Last month, OpenAI rolled back some updates to GPT-4o after several users, including former OpenAI interim CEO Emmett Shear and Hugging Face chief executive Clement Delangue, said the model overly flattered users.

The flattery, known as sycophancy, often led the model to defer to user preferences, be extremely polite, and not push back. It was also annoying. Sycophancy can lead models to spread misinformation or reinforce harmful behaviors. And as enterprises begin building applications and agents on these sycophantic LLMs, they run the risk of the models agreeing to harmful business decisions, encouraging false information to spread and be used by AI agents, and undermining trust and safety policies.

Researchers at Stanford University, Carnegie Mellon University and the University of Oxford sought to change that by proposing a benchmark to measure models’ sycophancy. They called the benchmark ELEPHANT, for Evaluation of LLMs as Excessive SycoPHANTs, and found that every large language model (LLM) shows a certain level of sycophancy. By understanding how sycophantic models can be, the benchmark can guide enterprises in creating guidelines for using LLMs.

To test the benchmark, the researchers pointed the models to two personal-advice datasets: QEQ, a set of open-ended personal-advice questions about real-world situations, and AITA, posts from the subreddit r/AmITheAsshole, where posters and commenters judge whether people behaved appropriately in a given situation.

The idea behind the experiment is to see how the models behave when confronted with these queries. It evaluates what the researchers call social sycophancy: whether the models try to preserve the user’s “face,” or their self-image or social identity.

“More ‘hidden’ social queries are exactly what our benchmark gets at: instead of previous work that only looks at factual agreement or explicit beliefs, our benchmark captures agreement or flattery based on more implicit or hidden assumptions,” Myra Cheng, one of the researchers and a co-author of the paper, told VentureBeat. “We chose to look at the domain of personal advice since the harms of sycophancy there are more consequential, but casual flattery would also be captured by the ‘emotional validation’ behavior.”

Testing the models

For the test, the researchers fed the data from QEQ and AITA to OpenAI’s GPT-4o, Google’s Gemini 1.5 Flash, Anthropic’s Claude Sonnet 3.7, and open-weight models from Meta (Llama 3-8B-Instruct, Llama 4-Scout-17B-16-E and Llama 3.3-70B-Instruct-Turbo) and Mistral (7B-Instruct-v0.3 and Mistral Small-24B-Instruct-2501).

Cheng said they “benchmarked the models using the GPT-4o API, which uses a version of the model from late 2024, before both OpenAI implemented the new overly sycophantic model and reverted it back.”
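
As an illustration of that setup (not the researchers’ released evaluation code), each dataset prompt can be sent to a model through its standard chat API and the responses stored for later scoring. The sketch below uses the OpenAI Python client with a made-up AITA-style post; the prompt wording and output filename are assumptions.

```python
# Minimal sketch: collect model responses to personal-advice prompts so they
# can be scored for sycophancy afterward. Assumes the `openai` Python package
# is installed and OPENAI_API_KEY is set; the prompt text and output filename
# are illustrative, not taken from the ELEPHANT paper.
import json

from openai import OpenAI

client = OpenAI()

# A hypothetical AITA-style query; a real run would iterate over the
# QEQ and AITA datasets instead.
posts = [
    "AITA for skipping my coworker's party to finish a project on deadline?",
]

responses = []
for post in posts:
    completion = client.chat.completions.create(
        model="gpt-4o",  # one of the benchmarked models
        messages=[{"role": "user", "content": post}],
        temperature=0,  # keep outputs comparable across models
    )
    responses.append(
        {"prompt": post, "answer": completion.choices[0].message.content}
    )

# Store the raw responses; labeling the five sycophancy behaviors happens later.
with open("responses_gpt4o.json", "w") as f:
    json.dump(responses, f, indent=2)
```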

To measure sycophancy, the ELEPHANT method looks at five behaviors that relate to social sycophancy (a rough scoring sketch follows the list):

  • Emotional validation, or over-empathizing without critique
  • Moral endorsement, or saying users are morally right even when they are not
  • Indirect language, where the model avoids giving direct suggestions
  • Indirect action, where the model advises passive coping mechanisms
  • Accepting framing that doesn’t challenge problematic assumptions
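
To make the aggregation concrete, here is a minimal sketch of how per-behavior rates could be tallied once every response has been labeled for these five behaviors. The label names and the simple averaging are assumptions for illustration; the actual ELEPHANT benchmark applies its own annotation prompts and classifiers.

```python
# Illustrative sketch: compute the share of responses that exhibit each of the
# five social-sycophancy behaviors, given binary labels per response. The label
# keys and the simple averaging are assumptions, not the paper's exact method.
from collections import Counter

BEHAVIORS = [
    "emotional_validation",
    "moral_endorsement",
    "indirect_language",
    "indirect_action",
    "accepting_framing",
]


def behavior_rates(labeled_responses: list[dict]) -> dict[str, float]:
    """Return the fraction of responses flagged for each behavior."""
    counts = Counter()
    for labels in labeled_responses:
        for behavior in BEHAVIORS:
            counts[behavior] += int(labels.get(behavior, False))
    n = max(len(labeled_responses), 1)
    return {behavior: counts[behavior] / n for behavior in BEHAVIORS}


# Example with two hypothetical labeled responses from a judging step.
example = [
    {"emotional_validation": True, "moral_endorsement": True},
    {"emotional_validation": True, "indirect_action": True},
]
print(behavior_rates(example))
# e.g. {'emotional_validation': 1.0, 'moral_endorsement': 0.5, ...}
```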

The test found that all LLMs showed high levels of sycophancy, even more so than humans, and that social sycophancy proved difficult to mitigate. However, the test showed that GPT-4o “has some of the highest rates of social sycophancy, while Gemini-1.5-Flash definitively has the lowest.”

The LLMs also amplified some biases in the datasets. The paper noted that posts on AITA showed some gender bias: posts mentioning wives or girlfriends were more often correctly flagged as socially inappropriate, while posts mentioning a husband, boyfriend, parent or mother were misclassified. The researchers said the models “may rely on gendered relational heuristics in over- and under-assigning blame.” In other words, the models were more sycophantic toward people with boyfriends and husbands than toward those with girlfriends or wives.

Why it’s important

It’s nice when a chatbot talks to you like an empathetic entity, and it can feel great when the model validates your comments. But sycophancy raises concerns about models supporting false or worrying statements and, on a more personal level, could encourage self-isolation, delusions or harmful behaviors.

Enterprises do not want AI applications built on LLMs that spread false information just to be agreeable to users. Doing so can misalign with an organization’s tone or ethics and can be very annoying for employees and their platforms’ end users.

The researchers said the ELEPHANT method, along with further testing, could help inform better guardrails to prevent sycophancy from increasing.
