Project Management

Using ChatGPT in Project Management: Where It Adds Real Value (and Where It Adds Risk)

Standarity Editorial Team · PMs Using AI in Production Work · 7 min read

Project managers were among the early adopters of generative AI for routine workflows — the job produces many text-heavy artefacts (status reports, stakeholder updates, risk register entries, meeting notes, communication plans) that benefit from drafting assistance. Two years into this adoption, the patterns of what works and what does not have become clearer. PMs using AI well report meaningful productivity gains. PMs using AI badly are producing artefacts that look polished but contain confidently wrong details, and the downstream cost of those details is starting to show up in audit findings, stakeholder confusion, and decisions made on incorrect inputs.

Where AI Genuinely Helps PM Work

  • First-draft generation of routine artefacts — status reports, stakeholder update emails, meeting agendas, follow-up summaries, risk register entries from incident descriptions. The PM still reviews and edits, but starting from a structured first pass is meaningfully faster than starting from blank.
  • Synthesis of long meeting transcripts or large document sets into structured summaries.
  • Translation between technical and business audiences for the same content.
  • Generation of structured templates from informal descriptions.

Each of these workflows preserves the PM's editorial role while accelerating the parts that are mechanical.

Where AI Adds Risk Without Comparable Value

  • Detailed schedule generation without verification. The AI will produce a reasonable-looking schedule with dates that are not connected to actual team capacity, dependencies, or organisational reality.
  • Stakeholder analysis from limited input. The AI will produce a polished stakeholder map that may bear no resemblance to the actual political dynamics.
  • Risk identification without context. The AI will list generic risks that look comprehensive but miss the specific risks the project actually faces.

In each of these cases, the polished output creates false confidence in work that has not been done.

The Confabulation Problem

Generative AI will produce confident-sounding details that are factually wrong. Specific metrics that the model has invented. References to standards or frameworks with details that do not match the actual standard. Project history that conflates multiple events. When these details land in PM artefacts that get distributed to stakeholders or filed as project records, the consequences range from embarrassment to genuine misinformation. The PMs who use AI well never publish AI output without verifying the specific factual claims against authoritative sources.

A pattern in audit findings: a PM generates a project charter with AI assistance, references a standard or methodology with confident-sounding specificity, and submits the artefact. Months later an auditor finds that the cited methodology versions are wrong or the regulatory references are misstated. The PM did not write the wrong references — but they did publish them. AI output that lands in formal records inherits the publisher's responsibility for accuracy, not the AI's.

Prompts That Produce Useful Output

Prompts that ask the AI to draft an artefact from scratch produce generic output. Prompts that provide concrete context and ask for transformation produce useful output. "Write a status report for a software project" is a generic prompt. "Here are this week's engineering completions, three open risks, and a stakeholder concern about timeline. Draft a status report in our standard format that emphasises the timeline concern" is a useful prompt. The PMs who get value from AI invest in providing context. The PMs who do not invest in context get generic artefacts they then have to substantially rewrite.
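The context-first pattern can be sketched in code. Here is a minimal illustration of assembling the "useful prompt" from structured project inputs rather than asking for an artefact from scratch — the function name, field names, and template wording are all hypothetical, not a real tool or API:

```python
# Illustrative sketch: a transformation prompt built from concrete context.
# Function and field names are hypothetical examples, not a real library.

def build_status_report_prompt(completions, risks, stakeholder_concern,
                               fmt="our standard format"):
    """Assemble a status-report prompt from this week's concrete inputs."""
    completions_text = "\n".join(f"- {c}" for c in completions)
    risks_text = "\n".join(f"- {r}" for r in risks)
    return (
        f"Here are this week's engineering completions:\n{completions_text}\n\n"
        f"Open risks:\n{risks_text}\n\n"
        f"Stakeholder concern: {stakeholder_concern}\n\n"
        f"Draft a status report in {fmt} that emphasises the stakeholder concern."
    )

prompt = build_status_report_prompt(
    completions=["Auth service migrated", "Billing API tests passing"],
    risks=["Vendor contract unsigned",
           "QA capacity short next sprint",
           "Data migration unscoped"],
    stakeholder_concern="timeline slip on the Q3 release",
)
```

The point is not the code itself but the discipline it encodes: the PM supplies the facts, and the AI's job is limited to transforming them into the target format.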

Verification Habits That Hold Up

  • Treat AI output as a draft that requires editorial review, not as a finished artefact
  • Verify specific factual claims (dates, names, metrics, standard references) against authoritative sources
  • Resist asking the AI for analysis the PM has not done — it will invent the analysis
  • Provide structured context in prompts; generic prompts produce generic output
  • Keep customer-facing communication out of pure AI workflow until your review process is reliable
  • Document which artefacts had AI assistance and which were authored manually — useful for retrospective quality review
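The last habit — documenting which artefacts had AI assistance — needs nothing more elaborate than a simple record per artefact. A minimal sketch, assuming an in-memory log (the record structure and field names are illustrative):

```python
# Minimal sketch of recording AI assistance per artefact so a retrospective
# can review the AI-assisted ones. Field names are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ArtefactRecord:
    name: str
    ai_assisted: bool
    verified_by: str = ""          # who checked the factual claims, if anyone
    created: date = field(default_factory=date.today)

def ai_assisted_artefacts(records):
    """Return the artefact names flagged for extra retrospective review."""
    return [r.name for r in records if r.ai_assisted]

records = [
    ArtefactRecord("Weekly status report", ai_assisted=True, verified_by="PM"),
    ArtefactRecord("Risk register entry RR-14", ai_assisted=False),
    ArtefactRecord("Project charter", ai_assisted=True, verified_by="PM"),
]
flagged = ai_assisted_artefacts(records)
# flagged -> ["Weekly status report", "Project charter"]
```

A spreadsheet column serves the same purpose; what matters is that the provenance is recorded at creation time, when it is still known.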

The Productivity Reality

PMs using AI thoughtfully report productivity gains in the 20-30% range on the workflows where it applies. The gains are real, but they are bounded — PM work has many components AI cannot meaningfully accelerate. Stakeholder management still requires presence. Judgement under ambiguity still requires the PM's pattern recognition. Decision-making in conflict still requires authority. The productivity gains come from compressing the routine parts of the job so the PM can spend more of their time on the parts that are not routine. PMs who treat AI as a force multiplier on the routine work outperform PMs who either reject it or who use it carelessly.

Explore Courses on Udemy

  • ChatGPT for Product Management & Innovation, 100+ Prompts (Intermediate)
  • Improving Project Success & Avoiding Common Pitfalls (Intermediate)
  • Project Management With ChatGPT (Intermediate)
