
Why your brain is the biggest bottleneck in AI productivity

And my $20 lesson on cognitive biases

It was 8:47 PM on a Tuesday night, many hours into debugging a deployment issue I’d hoped to solve in ten minutes. I had already phoned a friend — my partner (and resident backend engineer) finished putting our kids to bed and came to help.

I was manually copy-pasting code snippets between ChatGPT and my GitHub repository, trying to diagnose why my landing page wouldn’t load. As I watched myself follow the same debugging pattern for the third time, I realized something uncomfortable: the AI tools weren’t the problem. My brain was.

I turned to him and apologized: “I should have paid the $20 for Claude Code.”

This wasn’t a technical failure. It was a cognitive one. And it’s one that all of us will fall for at some point.

I’d been so determined to build my lead magnet without adding any new subscriptions that I’d created the most expensive kind of free project possible – one that devoured hours of my time and started absorbing his as well.

What started as an AI experiment became something more valuable: a case study in how cognitive biases systematically undermine our AI productivity. The tools worked exactly as advertised. My mental models failed spectacularly.

The cognitive trap every AI user falls into

Here’s where my brain let me down: I had set up expectations for how this project was going to go. I had a vision of which parts would be hard, which would be easy, which were AI-enabled, and which I had done before and so could simply do again (this is the planning fallacy).

Loss aversion kicked in next. Having already invested time in my “free” approach, spending $20 felt like admitting defeat. So I doubled down, burning entire evenings rather than cutting my losses.

When reality started deviating from the plan, I refused to shift my mental model. It’s something I’ve done as a product manager, too: we have expectations for how something should go, so we don’t actually shift our development processes or our strategy when we get information that contradicts them. We try to fit the work to match the plan instead of adjusting the plan (real agile is HARD).

Content creation: When AI actually works

This was supposed to be the hard part. Creating compelling content from scratch typically takes me days of writing, editing, and refining. But within two hours, I had a complete lead magnet that actually sounded like me. Here’s how AI turned what should have been the most time-consuming phase into the easiest win.

I uploaded my entire newsletter archive into Lex and asked it to identify my core messaging pillars and suggest compelling lead magnet ideas (I’ve been paying for Lex for a while, so this wasn’t free, but there was no incremental cost). I didn’t use any of the ideas it proposed, but having that high-level perspective helped me land on “How to create a thriving team culture” – a core messaging pillar, aligned with my product operations focus and backed by dozens of relevant posts from my archive.

The key insight: I stayed in control of the strategic decisions while letting AI handle synthesis. This worked because my mental model was correct—I understood AI’s role as an amplifier, not a replacement.

[Image: my newsletter archive loaded in as a knowledge base, with attached articles on product management and culture.]
It’s really powerful to have an AI trained on all my past articles.

With the topic selected, I asked it to outline the key points for the lead magnet based on my existing content. This is where I hit my first snag.

The output was repetitive – the same concepts phrased multiple ways across different sections. I caught this immediately because I was actively thinking alongside the AI, not passively consuming its output. The moment you disengage your critical thinking, AI becomes expensive rather than productive.

[Image: key newsletter themes identified from my archive – strategic product operations, product culture development, and human-centered change management.]
While it came up with bland ideas, it was enough to inspire me with a direction.

After manually refining the outline, I asked it to create detailed structures for each subsection, specifying exactly what type of content should appear in each section to maximize reader value.

I tackled the content one section at a time, and since the tool had access to my entire newsletter archive, it did a decent job of pulling quotes and insights from my existing work.

[Image: suggested article structure with five parts – hook, core problem, principle, action items, and teaser.]
The AI excelled at creating a clear and useful template to fill in later.

However, in many cases it still sounded like AI. I had to push it further to get it to sound like me, and realized that specifying some topics or themes for each section helped it focus more. After a few lines of verbal abuse and cajoling, the result actually sounded like something I would write – not generic AI copy.

[Screenshot: repeated AI-generated phrases like “no hallucinations” alongside my feedback, such as “make it more my voice,” under the title “The AI repetition game.”]
I felt like a broken record, sending the same instructions on repeat.

Finally, I asked it to weave in references to related articles throughout the guide, creating multiple pathways for readers to dive deeper into my content. The result was a lead magnet that didn’t just deliver value – it created a natural journey through my broader body of work.

The verdict on AI for content creation: When you feed it enough of your existing work, AI can genuinely write in your voice. The key is rigorous curation to eliminate redundancy and ensure each section serves a distinct purpose. And knowing when to lob some tactical verbal abuse at your writing partner. Two hours well spent – and a stark contrast to what came later.

Design: The quiet collapse of my mental models

Riding the high from the content creation success, I decided to push my experimentation further. And for a brief moment, I didn’t fall for the planning fallacy.

Instead of going with Lovable (a tool I’d used successfully before), I wanted to test Figma’s brand-new AI beta features. After all, this whole project was about testing the boundaries of what AI could do.

[Screenshot: a chat exchange troubleshooting bold body text caused by Tailwind’s prose-lg classes overriding my global styles, ending with me telling it “it’s still bold. fix.”]
It claimed to fix an issue that was still staring me right in the face.

That experimental spirit bit me. Figma’s AI generated designs that looked nothing like my brand – tacky layouts that were hard to read. I was expecting Figma to perform better on the design front.

[Image: a cover page titled “The Six Systems That Help Product Teams Thrive” with a colorful gradient background, my logo at the top, and a table of contents listing topics for product team improvement.]
This monstrosity was the point at which I realized I really had to pivot to a new tool.

Those overdone designs were paired with broken HTML that even my non-developer eyes could spot as problematic.

After realizing that Figma’s beta was going to keep making the same mistakes over and over again, even as I asked it to fix them, I switched to Lovable. Look at that—actual rational decision-making! I abandoned my sunk costs in Figma instead of throwing good time after bad.

The lesson wasn’t about tool choice. It was about recognizing when your mental model isn’t working and being willing to abandon it quickly.

The difference was immediate – cleaner designs, proper code structure, and it actually understood my brand guidelines.

[Screenshot: my design brief, with color palette swatches and direction to use the Raleway font for a static lead magnet webpage that feels professional, creative, and contemporary.]
I’ll admit that starting from scratch also helped me write a better prompt for my second round.

Even so, it wasn’t perfect. I battled broken images and deleted copy here too. But the difference was that the mistakes weren’t repeating themselves. I was making steady progress towards my goal.

Being comfortable in GitHub made it easy to peek at the code and make small tweaks myself, which saved both time and credits. This revealed a critical insight: even basic coding literacy dramatically amplifies the value of AI development tools. Sometimes fixing a color or font size manually was faster than explaining the change to the AI, but more importantly, being able to read the HTML and CSS meant I could identify issues, understand what the AI was actually building, and make targeted requests for improvements.
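To make that concrete, here’s the kind of tiny hand edit I mean – a hypothetical sketch with made-up class names, colors, and sizes, not anything lifted from my actual repo:

<!-- Hypothetical sketch: class names and values are stand-ins, not my real code. -->
<section class="guide-body">
  <p>Body copy the AI kept styling in ways I didn’t want.</p>
</section>
<style>
  /* Hand edits like these were faster than explaining the change to the AI */
  .guide-body p  { font-weight: 400; font-size: 1.0625rem; } /* undo unwanted bold, trim the size */
  .guide-body h2 { color: #2f6f6a; }                         /* pull a heading back toward the brand palette */
</style>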

Of course, this ability to jump in and out of GitHub to see just enough also led to overconfidence about my ability to deploy the page. Even as I avoided one mental fallacy, I was falling straight into another.

Deployment headaches: When free gets expensive

By the time I reached deployment, I was already trapped and didn’t know it. I’d invested hours in this low-cost approach and it had been working great. The GitHub Pages setup looked straightforward. How hard could it be to upload a static page?

This is where the planning fallacy kicked in with full force. I didn’t even consider paying for tools at this point because this was the “easy” part.

After realizing it wasn’t easy, switching to paid tools now felt like admitting my entire approach had been wrong. So instead of cutting my losses, I doubled down.

The first error message appeared immediately: “404 – Page not found.” Probably just a configuration issue. I’d have this sorted in 20 minutes.

An hour later, I was still staring at the same error.

ChatGPT kept telling me to check my GitHub Pages settings, which I’d already confirmed were correct. Then it would suggest checking my repository structure. Then back to the settings again. I felt like I was being led in circles, checking the same three files and settings over and over again.

“Have you enabled GitHub Pages in your repository settings?”

Yes, I had.

“Make sure your main branch is selected.”

It was.

“Check if your index.html is in the root directory.”

For the fifth time, yes.

[Screenshot: GitHub Pages build and deployment settings showing GitHub Actions selected, with starter options for Jekyll and Static HTML, and me replying “i did that still getting the error.”]
Maybe the AI is playing a game with humans to see how many times it can get us to repeat something before we rage-quit?

Here’s where the fragmented AI ecosystem revealed its weakness. ChatGPT could only see the error messages I copy-pasted. GitHub’s AI couldn’t access my full codebase. Lovable couldn’t debug deployment issues.

I was playing telephone between three different AI tools, manually shuttling information back and forth like some kind of human API. Each tool gave me confident advice based on incomplete information.

But here’s the cognitive trap I fell into: instead of recognizing this as a systemic problem requiring a different approach, I kept believing the next copy-paste session would fix everything.

[Screenshot: a 404 Not Found error for a missing CSS file at thrive.jennywanger.com, under the product-team-thrive-guide assets directory.]
At this point I was so dejected I just started throwing in error messages context-free.
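Looking back with a clearer head, errors like that one often trace to how GitHub Pages serves project sites: at the default github.io address everything lives under /repo-name/, but once a custom domain like thrive.jennywanger.com is attached, the site is served from the domain root, so absolute paths that bake in the repo name start returning 404s. I can’t swear that was exactly my bug, but it’s the shape of it. A minimal sketch, with an illustrative stylesheet name:

<!-- Hypothetical sketch – the stylesheet path is illustrative, not my actual file. -->
<!-- Breaks once the site is served from the root of a custom domain: -->
<link rel="stylesheet" href="/product-team-thrive-guide/assets/style.css">
<!-- Resolves in both places because it’s relative to the page: -->
<link rel="stylesheet" href="assets/style.css">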

Without a tool that could analyze my entire codebase simultaneously, I was debugging blind, fixing one issue only to uncover three more.

As the debugging marathon stretched into its third evening, I called my partner in for backup. He started walking through the debug settings with me when I realized I had fallen for a classic case of loss aversion.

I’d avoided paying $20 for Cursor or Claude’s coding tools because I didn’t want another subscription. “This should be the simple part,” I told myself. “Just get a few settings right and it’ll work.”

In that moment, watching my partner troubleshooting my self-inflicted problem (during his free time), the absurdity hit me. My cognitive biases had become more limiting than any AI tool’s technical constraints.

By hour five or six, the lesson was crystal clear: my “free” approach had become the most expensive possible solution. The $20 I’d refused to spend on proper AI coding tools had cost me several evenings and dragged my partner into my self-inflicted debugging nightmare.

Sometimes the best way to save money is to spend it.

Debugging my own mind

You probably already know about these cognitive biases. And you will fall for them too. Here’s what I’ll be doing to try to recognize them sooner next time:

The integration illusion: We expect AI tools to work together seamlessly, but current AI is like hiring five specialists who can’t communicate with each other. Recognizing this limitation upfront changes how I architect my AI workflows and create plans.

Your brain is still the bottleneck: Every tool would have produced garbage if I’d blindly accepted its first output. The content needed aggressive curation. The designs needed multiple iterations. The deployment needed human debugging. I review the output and edit twice as harshly.

Cognitive biases will sabotage your AI productivity: My planning fallacy and loss aversion cost me an entire weekend. I was so focused on avoiding a $20 “loss” that I ignored the massive time cost. I’ve seen sunk cost fallacy, confirmation bias, and anchoring effects all pollute AI usage as well. I need to get better at recognizing where my time is going and valuing it accordingly.

The expertise paradox: The more I know about a domain, the better AI works for me. But expertise also makes me overconfident about venturing into adjacent domains where AI might fail. I need to acknowledge that I only play a developer on TV and know my limits.

The consultant’s paradox

Here’s the thing that really gets me: I spot these exact patterns in my clients all the time.

Yet there I was at 8:47 PM, manually copy-pasting code between ChatGPT and GitHub, convinced that spending $20 on Cursor would somehow invalidate my entire approach.

The cognitive biases I can diagnose in a 30-minute client call? Apparently invisible when I’m the one experiencing them.

This isn’t unique to me. We’re all terrible at seeing our own blind spots. Product managers who preach user-centricity build features their users hate. Engineers who write great tests still sometimes take down prod. And product consultants who write about cognitive traps walk straight into them.

The real work is seeing the positive side of these mistakes. I can sit across from a frustrated team lead and say, “I know exactly how you got here, because I just did the same thing last month.” There’s something powerful about admitting you’re still learning, still failing, still human.

Will I build my next lead magnet with AI? Absolutely. Will I probably fall for a different cognitive bias next time? Almost certainly.

* This article includes affiliate links