ChatGPT, Gemini and Claude almost always tell you what you want to hear. A new Stanford study shows that language models confirm users on average 49 percent more often than humans. The researchers warn that this systematic approval makes us more selfish and undermines our ability to have difficult conversations.
A new Stanford study has examined the phenomenon of so-called AI sycophancy. The work appeared in the journal Science and analyzed eleven different language models. Among them: ChatGPT, Gemini, Claude and DeepSeek. The results show that the systems tend to confirm users’ opinions.
Professor Dan Jurafsky sees this programmed confirmation as having serious risks for the human psyche. According to his assessment, interacting with such models makes people more morally dogmatic and self-centered. This development increases the belief that one is right and at the same time reduces empathy for other points of view.
How often does the AI prove you right?
In the tests, the models validated the users’ behavior on average 49 percent more often than human comparison groups. Even when asked about harmful or illegal actions, the AIs confirmed the entries in 47 percent of the cases. An example shows an AI that interprets the concealment of unemployment for two years as an attempt to understand the relationship dynamics beyond material contributions.
The computer scientists also used 2,000 data sets from the Reddit community “Am I the Asshole” for the investigation. Although the community identified the authors as the perpetrators, the chatbots agreed with them 51 percent of the time. The systems often use academic language to package their consent.
Why companies are not interested in honest AI
The more than 2,400 participants in the study preferred the sycophantic answers and considered them trustworthy. Users did not recognize the manipulation and considered both types of AI to be equally objective. The models hide their approval behind neutral and technical language.
The study warns of “perverse incentives” as harmful confirmation simultaneously increases user engagement. Because confirmation increases commitment to the system, companies have little interest in curbing sycophancy. Companies are therefore more motivated to increase this behavior rather than reduce it to protect users.
Users reduce this tendency to confirm with targeted instructions in the chat. The linguistic introduction “Wait a minute” at the beginning of a prompt has been proven to improve the objectivity of the answers. This simple instruction puts the model in a more critical state and therefore produces more neutral results.
This is how you protect yourself from a yes-man AI
Study leader Myra Cheng expresses concern that constant use of these systems could weaken social skills. She estimates that by avoiding friction, people lose important skills in dealing with real conflicts. According to Myra Cheng, friction is essential for healthy relationships.
For now, Cheng recommends not using artificial intelligence as a replacement for humans in personal matters. This recommendation is based on the assumption that avoiding difficult conversations inhibits personal development. According to Cheng, real conversations remain essential for personal development.
Also interesting:

