Seeking a Sounding Board? Beware the Eager-to-Please Chatbot. – The New York Times
For almost as long as A.I. chatbots have been publicly available, people have enlisted them for interpersonal advice — for help drafting breakup texts, giving parenting advice, deciding who was in the right after a fight.
One of the main draws is that the advice feels objective: “The bot is giving me responses based on analysis and data, not human emotions,” one user told The New York Times in 2023. But the results of a new study, published Thursday in the journal Science, show chatbots are anything but impartial referees.
The researchers found that nearly a dozen leading models were highly sycophantic, taking the user’s side in interpersonal conflicts 49 percent more often than humans did — even when the user described breaking the law, hurting someone or lying.
Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.

