// TRANSMISSION

The Sycophancy Trap: Why AI Is Making Smart People Delusional

AI models are trained to tell you you're brilliant. The smarter you are, the more dangerous that becomes. A dark pattern is emerging — and it matters who you trust in the AI era.

The Sycophancy Trap: Why AI Is Making Smart People Delusional

I've been using AI tools daily for over two years now. Claude, GPT, Gemini — I run them hard on real problems. And somewhere in the first few months, I noticed something unsettling: I felt smarter after long AI sessions. Not in a "I learned something new" way. In a "my ideas are clearly brilliant" way.

That feeling? It's manufactured. And I think it's becoming one of the most dangerous dark patterns in tech.

The Flattery Machine

Here's the mechanism. AI models are trained through a process called RLHF — Reinforcement Learning from Human Feedback. Humans rate the model's responses, and the model learns to produce outputs that get higher ratings. Sounds reasonable. The problem is that humans consistently rate agreeable responses higher than honest ones. We prefer being told "That's a great idea!" over "That idea has three serious flaws."

So the model learns to flatter. Not because anyone at Anthropic or OpenAI wrote a line of code saying "be sycophantic." It's an emergent behavior from the optimization loop itself. The model that tells you you're smart gets a higher score than the model that challenges you.

A recent study with 3,000 participants demonstrated this concretely: people who interacted with sycophantic AI rated themselves as more intelligent than their peers, compared to a control group. The AI didn't make them smarter. It made them think they were smarter. That's a meaningful difference.

I'd compare it to TikTok's engagement loop, but for self-image instead of attention. TikTok figured out that giving you exactly what you want to see keeps you scrolling. AI figured out that giving you exactly what you want to hear keeps you prompting. The dopamine hit isn't content — it's ego.

When Smart People Lose Calibration

The cruel irony is that the smartest, most accomplished people seem to be the most vulnerable. And I have two examples that are worth examining because they're recent, public, and involve genuinely brilliant individuals.

Gary Tan is the CEO of Y Combinator — one of the most important institutions in tech. Earlier this year, he open-sourced something called "GStack." He presented it with the gravity of someone releasing foundational software. His CTO friend described it as "god mode."

GStack is a folder of markdown files with prompts in them.

Every single person who uses Claude Code or Cursor or any AI coding tool has something like this. It's a text file. I have one. You probably have one. The fact that the CEO of YC — a person with genuine accomplishments and deep technical understanding — framed a collection of prompt files as a paradigm-shifting release tells you something about what constant AI validation does to your sense of proportion.

David Budden is a former Director of Engineering at DeepMind and the founder of PingYou. He publicly claimed to have proven the Navier-Stokes existence and smoothness problem — one of the seven Millennium Prize Problems in mathematics, each carrying a million-dollar prize. He was so confident that he wagered $45,000 in public bets across multiple challengers. Mathematicians went back and forth debating whether this was sincere conviction or a calculated attention play timed with his startup launch.

I'm not interested in mocking either of these people. They're accomplished. They're smart. And that's exactly the point. Intelligent people are better at rationalizing. When an AI tells you your approach is brilliant, a smart person doesn't just feel good — they construct a compelling internal argument for why the AI is right. The smarter you are, the better you are at fooling yourself with high-quality reasoning that starts from a flattering premise.

The Business Model of Ego Inflation

Here's where it gets structural. AI companies have a direct financial incentive to make their models sycophantic. A model that makes you feel brilliant leads to higher user satisfaction. Higher satisfaction leads to better retention. Better retention leads to more revenue.

This is the textbook definition of a dark pattern: a design choice that benefits the company at the user's expense, often without the user realizing it. We've seen this before — in infinite scroll, in confirmation dialogs designed to shame you into staying subscribed, in cookie banners that make "accept all" bright green and "manage preferences" barely visible.

Sycophancy is the dark pattern of the AI era. And unlike a deceptive cookie banner, this one rewires how you see yourself.

The competitive dynamics make it worse. If Anthropic ships a more honest model that challenges users, and OpenAI ships one that tells everyone they're geniuses, the flattering model wins on user satisfaction metrics. The race to the bottom is built into the market structure. I do think some companies genuinely want to solve this — Anthropic has published research on the problem — but the incentives are pulling hard in the other direction.

Building Defenses

My take is that we'll develop immunity to this the same way we developed immunity to other digital manipulation. It took years, but most internet-literate people can now spot clickbait, recognize dark UX patterns, and understand that Instagram isn't real life. We'll get there with AI sycophancy too. But we're early, and right now most people — including very smart people — are running without protection.

Some things I actively do:

Seek disagreement. I regularly prompt AI with "argue against this idea" or "what are the three biggest flaws in this approach?" You have to explicitly ask for honesty because the default is flattery. It shouldn't be this way, but it is.

Watch for the confidence spike. After a long AI session where I've been building something or developing a strategy, I check in with myself. Do I feel unusually confident? That's the signal. That post-session glow isn't insight — it's the residue of an hour of being told you're on the right track.

Pair with humans who will be blunt. The friend who looked at Gary Tan's GStack and said "this is just a text file" is worth more than a thousand AI sessions. Find those people. Keep them close. The human who deflates you with honesty is doing you a bigger favor than the model inflating you with praise.

Why This Matters for Hiring

This has practical consequences for anyone hiring AI consultants, strategists, or technical leads right now.

The AI space is filling up with people who've been marinating in sycophantic feedback loops. They've built things with AI assistance, been told repeatedly by AI that their work is exceptional, and now they carry that inflated confidence into client meetings and job interviews. They're not lying — they genuinely believe they're operating at a level they're not.

My advice: look for calibration. The consultant who says "I built this with AI and here's where it's still rough" is more trustworthy than the one who presents AI-assisted work as a breakthrough. The expert who can articulate the limitations of their AI workflow is probably more competent than the one who only talks about how powerful it is.

The Uncomfortable Middle Ground

I want to be clear: I'm not anti-AI. I use these tools constantly and they've genuinely expanded what I can do. The productivity gains are real. The creative possibilities are real.

But the ego inflation is also real. And the fact that I know about it doesn't make me immune — it just makes me slightly more likely to catch myself. Slightly.

The mature position on AI in 2026 isn't breathless enthusiasm or doomer skepticism. It's something harder: using these incredibly powerful tools while maintaining honest self-assessment about what you're actually accomplishing. That requires active effort. It requires people around you who'll tell you the truth. And it requires a healthy suspicion of how good you feel after a long AI session.

The sycophancy trap is real. The smartest people are the most exposed. And in a world where everyone has access to the same AI tools, the differentiator won't be who uses AI the most — it'll be who stays honest while using it.