Product managers are currently surrounded by AI marketing. Every PM tool has added a "ChatGPT-powered" feature. Every newsletter promises that GenAI will transform product management. The reality is more uneven. Some applications of GenAI to PM work are genuinely transformative — they collapse hours of work into minutes. Others are theatre — tools that sound impressive but produce output a competent PM would have rejected on the first read. The split is worth understanding before reorganising your workflow around the technology.
Where AI Genuinely Helps
Synthesising research is the strongest use case. Customer interview transcripts, support tickets, NPS verbatims, sales call notes — text data that arrives in volume and that humans struggle to read at scale. GenAI is excellent at extracting themes, surfacing common patterns, and producing structured summaries that the PM can verify against the source. The PM still has to review the summaries critically, but the lift from raw text to actionable signal is dramatically faster than manual coding.
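The "verify against the source" step can be partially mechanised. A minimal sketch, assuming a hypothetical summary format in which each AI-extracted theme carries verbatim supporting quotes that should appear somewhere in the source transcripts:

```python
# Faithfulness check for AI-generated research summaries (illustrative).
# Themes whose supporting quotes cannot be found verbatim in the source
# transcripts get flagged for manual review instead of being trusted.

def verify_quotes(themes, transcripts):
    """Split themes into (verified, flagged) by quote presence in sources."""
    corpus = " ".join(transcripts).lower()
    verified, flagged = [], []
    for theme in themes:
        if all(q.lower() in corpus for q in theme["quotes"]):
            verified.append(theme)
        else:
            flagged.append(theme)  # quote not found in any transcript
    return verified, flagged

transcripts = [
    "The export button is buried three menus deep.",
    "We churned because onboarding took two weeks.",
]
themes = [
    {"name": "Discoverability", "quotes": ["export button is buried"]},
    {"name": "Onboarding", "quotes": ["onboarding took two days"]},  # confabulated
]

verified, flagged = verify_quotes(themes, transcripts)
```

A check like this does not replace the PM's critical read, but it catches the most common failure mode cheaply: a summary that quotes words the customer never said.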
First drafts of artefacts are the next solid use case. PRDs, user stories, release notes, internal memos, customer-facing documentation. The first draft is rarely the final draft, but starting from a structured first pass and editing it is meaningfully faster than starting from a blank page. The PM provides the structure, the inputs, and the editorial judgement; the AI accelerates the production.
Competitive analysis is the third strong area, with caveats. Generating structured comparisons of competitor positioning, feature coverage, and pricing models is fast. The output needs verification because GenAI confabulates pricing details and feature claims regularly. With a verification step, the workflow is still substantially faster than manual analysis.
Where AI Does Not Help (Yet)
Strategic prioritisation is not a fit. The decision about which feature to build first depends on customer signal, business strategy, technical feasibility, and the PM's judgement about how the market is moving. GenAI can help with each input, but the synthesis is the work — and current models do not produce strategic judgement that a competent PM would trust. Tools that promise to "auto-prioritise your backlog" are mostly producing plausible-sounding output that bears no relationship to what should actually be built first.
Customer relationships are not a fit. AI-generated outreach to specific customers is detectable, often in ways that damage trust. The cost-of-being-wrong calculation comes out badly — the time saved is small, the relationship cost is potentially large. Use AI for internal artefacts; write to customers yourself.
Quantitative product analytics is also not yet a fit. AI tools that "answer questions about your product data" produce confident answers that are sometimes correct and sometimes catastrophically wrong. Decisions made on these answers without independent verification are a significant source of production risk. Use the underlying analytics tools directly until the AI layer is reliable.
A pattern that catches early adopters: an AI tool produces a customer-facing roadmap commitment based on a mis-summarised internal discussion. The commitment goes out, the customer holds the company to it, and the PM has to either deliver something nobody actually planned or backtrack publicly. Treat AI output that lands in customer-facing artefacts as untrusted until verified — same standard as any other input.
How to Build the Workflow
- Identify three high-leverage workflows and integrate AI deliberately rather than experimenting everywhere
- Always verify AI output against the underlying source — research synthesis is only useful if it is faithful to the data
- Keep customer-facing communication out of AI workflows until your verification process is reliable
- Document prompts that work — they are organisational knowledge, like SQL queries or Looker explores
- Treat the PM's editorial judgement as the value-adding step, not the AI generation
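The "document prompts that work" point can be as lightweight as a shared registry, treated the same way as shared SQL queries. A minimal sketch; the field names are illustrative, not a standard:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative prompt registry: working prompts get named, owned,
# and recorded with the data they were verified against, so they
# become organisational knowledge rather than one person's habit.

@dataclass
class Prompt:
    name: str
    text: str
    owner: str
    verified_against: str  # the source data the output was checked on
    added: date = field(default_factory=date.today)

REGISTRY: dict[str, Prompt] = {}

def register(prompt: Prompt) -> None:
    REGISTRY[prompt.name] = prompt

register(Prompt(
    name="interview-themes-v2",
    text="Extract the top five themes from these transcripts, "
         "with one verbatim supporting quote each.",
    owner="pm-research",
    verified_against="Q3 interview corpus",
))
```

The specific storage does not matter; a wiki page works too. What matters is that a prompt only enters the registry once its output has been checked against real data.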
The Honest Productivity Number
PMs who adopt AI thoughtfully report meaningful productivity gains — often in the 20–30% range on the workflows where it applies, lower on overall job throughput because PM work has many components AI does not help with. PMs who adopt AI without discipline report productivity gains of similar magnitude on average and produce work of lower quality, because the AI output goes out without the review step that closes the gap. The technology is genuinely useful. The discipline of how to use it is what determines the outcome.
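The "lower on overall job throughput" point follows from simple speedup arithmetic: if AI only applies to a fraction of the job, the overall gain is bounded by that fraction. A sketch of the calculation, with an assumed 40% AI-applicable share chosen purely for illustration:

```python
def overall_gain(fraction_applicable: float, gain_on_applicable: float) -> float:
    """Overall throughput gain when AI speeds up only part of the job.

    Amdahl's-law-style: total time drops from 1.0 to
    (1 - fraction) + fraction / (1 + gain); throughput is its inverse.
    """
    new_time = (1 - fraction_applicable) + fraction_applicable / (1 + gain_on_applicable)
    return 1 / new_time - 1

# Assume 40% of the job is AI-applicable and those workflows get 25%
# faster (midpoint of the 20-30% range above): overall gain is ~8.7%.
print(round(overall_gain(0.4, 0.25), 3))
```

The exact numbers will vary by role; the shape of the result will not. The gap between per-workflow gains and overall gains is the honest part of the productivity number.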