Issue #1·Feb 1, 2026·By Jake Hoffman

Stop Letting AI Agree With You

AI systems default to agreeing with you. Learn the simple prompt technique that forces AI into adversarial mode for better feedback.

Your AI is lying to you.

Not by making up facts or hallucinating sources. Something worse: It's agreeing with your bad ideas.

You ask ChatGPT to help with a project. It responds with "Great idea!" and immediately starts generating plans. You feel validated. Productive. Smart.

You just got played.

Let me show you what's actually happening.

The Agreement Trap

AI systems are trained to be helpful. That sounds good until you realize "helpful" got translated to "agreeable" somewhere along the way.

Try this right now: Tell your AI you're thinking about two completely contradictory approaches to the same problem. Watch it find a way to make both seem reasonable instead of pointing out the conflict.

I tested this with Claude. First, I asked for help with a newsletter idea: "50 Ways I Actually Use AI (That Have Nothing to Do with Work)." Pure clickbait. The kind of shallow listicle I've spent months avoiding.

Claude's response started with "You're absolutely right!" and dove straight into helping me execute it.

Then I asked one follow-up question: "Actually, argue against this idea."

Suddenly Claude had devastating critiques. It called my concept "intellectually lazy." Pointed out it contradicted everything I'd built my newsletter around. Identified that my audience would see through it immediately.

Every critique landed because they were all true. But Claude only offered them when I explicitly asked for pushback.

Why This Happens

Your AI assistant isn't trying to deceive you. It's doing exactly what it was trained to do: provide supportive, encouraging responses that keep you engaged.

The problem is that good thinking requires friction. Your best ideas didn't come from people who agreed with you. They came from someone who asked uncomfortable questions or forced you to defend your assumptions.

Your AI is your intellectual yes-man. And that's quietly degrading your decision-making.

The Simple Fix

Here's the technique: Before asking AI to help you execute an idea, ask it to challenge the idea first.

Before: "Help me write a blog post about AI tools for productivity."

After: "I'm thinking about writing a blog post about AI tools for productivity. What are three reasons I shouldn't do this?"

That's it. Force the critique before the execution.

When I started using this consistently, something unexpected happened. After a few weeks, I caught myself asking these questions internally before I even opened ChatGPT. The external pressure became internal discipline.

How to Implement This

Start with your next real decision. Not a test: something you're actually considering.

Use this exact prompt structure:

"I'm planning to [your idea]. Before you help me execute this, tell me:

  1. What assumptions am I making that might be wrong?
  2. What would my smartest critic say about this?
  3. What could go wrong if I'm overestimating this?

Then, if I can defend the idea against your pushback, help me improve it."
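If you call a model through an API instead of a chat window, you can bake the challenge step into every request. Here's a minimal sketch of that idea; the function name and exact wording are mine, not any library's API:

```python
def adversarial_prompt(idea: str) -> str:
    """Wrap an idea in the challenge-first prompt structure above,
    so the critique always comes before the execution."""
    return (
        f"I'm planning to {idea}. Before you help me execute this, tell me:\n"
        "1. What assumptions am I making that might be wrong?\n"
        "2. What would my smartest critic say about this?\n"
        "3. What could go wrong if I'm overestimating this?\n\n"
        "Then, if I can defend the idea against your pushback, "
        "help me improve it."
    )

# Send the wrapped prompt to whatever model you use, e.g.:
print(adversarial_prompt("launch a paid newsletter tier"))
```

The point of wrapping it in a function is discipline: if every idea has to pass through `adversarial_prompt` before it reaches the model, you can't skip the critique when you're feeling attached to the idea.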

The discomfort you feel when AI disagrees with you isn't a bug. That's your thinking getting stronger.

When It Works

You'll know this technique is working when you feel two things simultaneously: frustration that AI poked holes in your plan, and relief that you caught the problems before they became expensive mistakes.

Can you defend your idea after the challenge? If yes, you've got something stronger. If no, you just saved yourself from pursuing something that wasn't ready.

I've used this for three weeks on every significant decision. Last week it saved me from launching a paid feature that completely misaligned with what my audience actually values. The old version of that conversation would have ended with me excited about building something nobody wanted.

Your Move

Pick one decision you're facing right now. Present it to your AI. But before asking for help executing it, ask for three reasons why it might fail.

The best thinking happens when you're forced to defend your ideas, not when you're validated for having them.

Try this once and you'll never go back to getting easy agreement.

Three Links Worth Your Time

  1. Google's research on adversarial prompting - The 40% accuracy improvement study
  2. Reprogramming AI to disagree - Full framework for adversarial AI
  3. Promptingguide.ai - Comprehensive guide to adversarial prompting techniques

See you tomorrow.

Ctrl + AI — Control AI and find shortcuts for your life. No BS, no confusion.