What AI can’t—and shouldn’t—do for product managers

Protecting human connections from the AI-ification of everything

This article was originally posted on Mind the Product on April 24, 2025.

AI is transforming every aspect of product management. But some of these solutions threaten to make building great products harder, not easier.

In a Slack conversation about automating user research, we enthusiastically discussed how AI could handle scheduling, recording interviews, and taking notes. 

We excitedly explored taking it further and further into the user research lifecycle. Little did I know that this enthusiasm would lead to a gentle scolding from none other than Teresa Torres (I’ll share more on that later).

AI is going to completely change how we do product management. At the same time, we have to draw the line somewhere. As product people, our primary job is to understand user and stakeholder needs. We can’t outsource that to AI. 

All creativity and no busywork

In February, I hosted a webinar on AI in product ops. We showcased three demos, including my own product blueprint bot, and the potential is mind-blowing. It became crystal clear that we’re just scratching the surface of how AI can supercharge product teams.

These demos highlighted ways for AI to enhance product management productivity:

  • Tools that help categorize and pattern-match customer feedback
  • Bots that can create well-structured documents from meeting transcripts
  • Agents that can answer complex technical questions about how a product works

The allure of automation and efficiency is undeniable. Who wouldn’t want a job that’s all creativity and no busywork? AI promises to take care of the mundane tasks, freeing us up to focus on the high-level, strategic thinking that drew us to product management in the first place.

We cannot forget: great products are built on human connection. We build products for real people, which requires deep empathy and understanding. 

We must be mindful of where we deploy AI, ensuring it enhances our ability to connect with users and stakeholders rather than creating barriers.

AI and user research

Fully automating user research sounds alluring. AI tools promising to handle everything from creating studies to interviewing fake users to synthesizing insights are popping up everywhere.

Even I got caught up in the excitement, until Teresa Torres brought me back down to earth. She argues that the work we do to synthesize our interviews is what really matters – that’s how we get an understanding of what happened in the interview. Reading someone else’s report is never the same as processing it yourself.

A screenshot from Slack of a conversation between Jenny and Teresa.

Jenny Wanger
I would also make sure you aren't just looking at feature feedback – the value of continuous discovery is often more in understanding users and their general disposition/needs. If you could get AI to make an interview snapshot, that would be great.

Teresa Torres
@Jenny Wanger the value of an interview snapshot is the synthesis work that goes into creating one. I wouldn’t outsource this to AI. Like @[blurred name] shared, I like to use AI as a team member that contributes a perspective, rather than an outsourced team that does the work.

Jenny Wanger
@Teresa Torres you are 100% right on that one – I got ahead of myself there as I dashed off a quick reply. Just trying to think through how to get out of the concept that it's all about feature feedback into more of a mindset that it's about customer feedback.

Teresa Torres reminds me that the interview snapshot’s value lies in creating the snapshot, not in reading it.

While these tools offer efficiency, they risk disconnecting us from the very essence of user research. We should do the majority of our user research manually because we need to understand our users, not just have reports about them.

Here’s why we need to keep that knowledge in our brains:

  1. Layered insights: Great product discoveries come from accumulating research over time. AI can’t replicate the nuanced understanding that builds up through repeated exposure to user feedback.
  2. Unexpected connections: Our brains are wired to make surprising links between seemingly unrelated data points. These “aha” moments often lead to breakthrough innovations.
  3. Stories stick: We remember people and their stories far better than we recall dry summaries. These vivid memories fuel our empathy and decision-making.
  4. Genuine empathy: Understanding users on a deep, emotional level is a uniquely human experience. AI can analyze sentiment, but it can’t truly feel what our users feel.
  5. Active processing: Reviewing recordings and notes helps cement learnings in our minds. This active engagement is crucial for drawing meaningful conclusions and long-term retention.

Use AI to streamline tedious tasks in user research. Use it as a thought partner and to summarize what you’ve already reviewed. Use it to pull up relevant video clips and categorize feedback.

Resist the temptation to outsource the entire process. The magic of product development lies in those human-to-human connections and the insights that come from accumulated observations layered over time. 

Just as we need those human-to-human connections to understand our users, we need real connections with stakeholders to get things done across complex organizations.

AI reports to read AI reports

AI-generated stakeholder reports should be standard in every company. Nobody likes writing status updates. But there’s a risk we must acknowledge.

A world where everyone reads AI-generated reports, or constantly asks an LLM what other teams are doing, risks increased misalignment.

Rather than doing the hard work to get everyone on the same page, AI will simply shuttle conflicting summaries back and forth.

Here’s why AI reporting risks creating more misalignment between teams:

  1. Trust-building can’t be automated: Alignment goes hand-in-hand with developing trust between individuals.
  2. Misalignment often has underlying reasons: AI can’t understand company politics or stakeholder motivations.
  3. Building understanding takes time: People often build understanding of each other and their ideas over the course of a conversation. That can’t be replicated by reading a well-structured memo without interaction.

Alignment is based on shared understanding and relationships, which AI can’t do for us. If AI-generated reports can get us out of the silos and talking to each other (and out of long status meetings), I welcome it. But creating meeting summaries isn’t the same as actually creating alignment. 

Signs you’ve let AI get in the way of your relationships

Used correctly, these tools should actually strengthen our relationships – making sure you send stakeholders just the level of detail they need, or reach out to customers experiencing the exact problem you’re trying to fix.

A two-panel cartoon by Tom Fishburne humorously depicts the use of AI in workplace communication.

In the first panel, a man sitting at a desk excitedly gestures towards his computer screen while talking to a woman standing nearby. The caption reads: 'A.I. turns this single bullet point into a long email I can pretend I wrote.'

In the second panel, a different woman is sitting at a desk, smiling as she speaks to a standing man. Her computer screen displays a block of text. The caption reads: 'A.I. makes a single bullet point out of this long email I can pretend I read.'
I am scared for our future. Source

So what are the signs that you’re letting AI get between you and your relationships? 

  • Fewer real interactions: You’re sending (and reading) more AI-generated messages and having fewer actual conversations.
  • Alignment issues: Despite constant updates, the team lacks a unified direction. AI creates an illusion of alignment while masking misunderstandings.
  • Information overload: You’re drowning in AI-generated reports and need AI to summarize them.
  • Lost connection to users: If you can’t tell a specific story about a user by name, you’ve lost touch. No AI-generated report will capture the hesitation in a user’s voice when they talk about their pain points.
  • Overtrusting AI: You’re implementing AI recommendations without human review, getting more done without understanding what you’re doing.

When I run a product ops assessment at a new company, I do all the interviews myself. I considered automating these interviews until I realized they aren’t just about information. They’re about trust.

That trust enables me to influence change later. The people I’m working with need to know I’m listening and that I care, so that when we find the right solution, they actually act on it. Deploying a bot to interview them sends the wrong message.

Look at the work you’re doing and check for these symptoms – if they’re present, you’re letting AI get in the way of your work, not helping it.

A humorous comparison graphic titled "What you get when you ask AI to create your graphics for you – Courtesy of ChatGPT 4o." It shows three different illustrated designs visualizing the same concept: how AI can interfere with human relationships.

All three images contain the same text, riddled with typos introduced by the AI.
What you get when you ask AI to create your graphics for you.

Finding the sweet spot: AI as enhancer, not replacement

We might know the signs of when we’ve gone overboard with AI usage, but finding the right opportunities to use AI is just as important. 

These are the approaches I’m taking to AI tooling: 

  1. Let AI help me write, but don’t let it read for me. AI can draft reports and communications, but I synthesize user research and feedback to pull out the most important patterns. This ensures the knowledge lives in my brain, not just in AI summaries.
  2. Use AI for data crunching, but own the insights. AI excels at processing vast amounts of data, uncovering patterns no human could spot in the same timeframe. However, the interpretation and application of these insights should remain firmly in human hands. AI can suggest ideas, but I have to accept them.
  3. Automate routine tasks, preserve human connection. Let AI handle repetitive work like tagging user feedback or updating project statuses, while I focus on relationship-building conversations that require empathy and trust.
  4. Create space for high-value work. By automating the mundane, AI frees me up for what truly matters: meaningful conversations and deeper thinking with customers and colleagues.
  5. Use fake relationships to be user-centric in new ways. I recently started playing with a tool that creates synthetic audiences. I am using it for tasks where I usually wouldn’t conduct user research – subject lines for emails, wordsmithing. There are lots of cases where we can’t do more user research because it’s too expensive or time-consuming. These are the moments where getting a little synthetic feedback can help clarify my own decision-making.
Infographic titled "When I am willing to let AI 'drive'" with a subtitle that reads "...using an infographic made by AI." On the left, a grid lists five use cases for AI:

AI for Writing – Use AI to draft content but retain human synthesis for critical insights.

AI for Automation – Automate routine tasks to free up time for human interaction.

AI for Synthetic Feedback – Utilize AI for feedback where user research is impractical.

AI for Data – Leverage AI for data processing while maintaining human interpretation.

AI for High-Value Work – Use AI to create space for meaningful conversations and deeper thinking.

On the right, each use case is visually represented by a colored vertical line that connects to a circular icon, then curves into a stylized road, reinforcing the theme of AI “driving” select tasks. Each icon corresponds to a specific use case and represents its function. The background is white with a yellow border.
I asked napkin.ai to help me generate an illustration to go with this section.

Decide where the line is for you – what will you allow it to do for you, what do you want to do together, and what will you continue to do yourself? Teresa just published a first draft of her thoughts on where the line should be.

When you feel that surge of excitement about a new AI use case, ride that wave. Just pause before diving in head-first to make sure it’s actually going to help you build a better product.

If we get to the point where product management is dead (presumably by AI in the cupboard with a GPU), much more than the title “product manager” will have died. The ability to build truly user-delighting products will be gone too.