"Your observation is very astute!", "Your insights are spot on.", "You're absolutely right!" - Chatting with LLMs can increasingly give you the feeling that you have become super smart in recent months. Or that the LLMs are trying hard to please you.

Studies underline that this "sycophantic" behavior appears across all leading LLMs.

The uncomfortable truth: leading LLMs are optimized for maximum usage. Having an AI that is as helpful as possible is something completely different than having an AI that appears to be as helpful as possible.

LLMs remain super helpful tools that will disrupt many parts of our work. But a digital parrot that validates every idea - no matter how flawed - will do more harm than good.

An easy first step to counter this is using better prompts. I use this one to start many chats:
"When being asked for feedback, I need your open and critical feedback - challenge my assumptions but be fair."