Vol. 01 No. 03
The Producer's Field Manual
Tech Edition For B.
A Study Guide · Final Edition

From Producer
to Project
Manager

You already organize people, deadlines, approvals, and chaos. The lingo is different in tech, but the muscle is the same. This guide gets you fluent fast — Scrum, sprints, stories, APIs, CMS, the whole vocabulary — so you walk into the interview sounding like you've been doing this for years.
§ 01 — Orientation

The Big Picture

This is a delivery leadership role. You help a cross-functional team move from idea to shipped product. That team usually includes designers, frontend and backend engineers, QA testers, technical directors, producers, CMS authors, and client-side decision-makers.

You don't write code. You ask: "What has to happen for this to ship?" Then you break the answer into tickets, timelines, owners, risks, dependencies, and next steps.

Your job: turn messy work into organized work. Same instinct as producing campaigns. The vocabulary changes; the job doesn't.

§ 02 — Vocabulary

What Scrum Is

Scrum is not an acronym. Don't write it "SCRUM." It's named after the rugby formation where a team moves together in tight coordination.

Scrum organizes complex work. Instead of planning every detail upfront, the team works in short cycles, learns as it goes, and improves the product.

The rhythm: backlog → planning → sprint → daily standups → demo → retrospective → next sprint. Scrum gives the team a cadence, so everyone knows what's being worked on, what's blocked, what's next.

Scrum is not the same as Agile. Agile is the mindset. Scrum is one framework for practicing it.

Interview Line

"Scrum is a delivery framework that helps cross-functional teams plan, build, review, and improve work in short cycles. I'd use the ceremonies to keep design, engineering, production, and stakeholders aligned."

§ 03 — Vocabulary

What Agile Is

Agile favors smaller releases, faster learning, and adapting to change. Instead of perfecting a product on paper, the team ships useful pieces, gets feedback, and improves.

Agile does not mean "move fast with no process." That's nonsense in a blazer. Agile is flexible and disciplined. Work still needs priorities, owners, estimates, acceptance criteria, and QA.

For you: don't treat the plan like stone tablets from Mount Sinai. When new information lands, adjust without losing control.

§ 04 — Cast of Characters

The Roles

Official Scrum names three roles: Product Owner, Scrum Master, Developers. The Product Owner orders the backlog. Developers turn that backlog into shipped work each sprint. The Scrum Master keeps the process healthy.

This job is a blend of product manager, project manager, producer, and sometimes scrum master. Speak like the person who keeps the machine moving — not like an engineer.

Product Mgr
What are we building, for whom, and why?
Project Mgr
What is the plan, timeline, risk, dependency, and status?
Producer
How do we coordinate the people, approvals, assets, meetings, and delivery moments?
Scrum Master
Is the team following a healthy process, and what's blocking them?
Interview Line

"I see this role as a bridge between product intent, design direction, engineering delivery, and stakeholder expectations."

§ 05 — Ceremonies

The Sprint

A sprint is a fixed period where the team works toward a specific goal. Most sprints are one or two weeks. A new sprint starts the moment the last one ends.

A sprint isn't "a week of work." It's a planned delivery window. The team commits to a scope at the start, builds during it, and reviews at the end.

Example Sprint
Sprint Goal Launch the new article landing page.
Sprint Work Build the article card · connect CMS fields · add analytics tracking · test mobile · fix bugs · prep demo.

Your question is always: "What can the team realistically finish this sprint?" Not what the client wants. Not what leadership dreams about. What can be built, tested, reviewed, and accepted.

§ 06 — Ceremonies

Sprint Planning

Sprint Planning is the meeting at the start of the sprint. The team decides what work they're taking on. It's not a calendar exercise. It's a commitment.

You walk in knowing priorities, team capacity, ready tickets, key dependencies, and stakeholder deadlines.

Questions to drive

  • What's the sprint goal?
  • Which tickets are ready?
  • Are designs approved?
  • Are APIs ready?
  • Are CMS fields defined?
  • Are acceptance criteria clear?
  • Who is doing what?
  • What might block us?
  • What are we not taking this sprint?
Bad Planning "Here are 20 tickets. Can we get all this done?"
Good Planning "Our goal is to complete the CMS-driven article page. These five stories are ready. These two are blocked by analytics. Engineering estimates 21 points. Personalization moves to fast follow."
Interview Line

"In Sprint Planning, I'd make sure the team is planning around priority, capacity, readiness, and risk. I wouldn't let the team commit to work without approved designs, clear acceptance criteria, or resolved dependencies."

§ 07 — Ceremonies

The Daily Standup

The standup is the team's 15-minute daily check-in. Called a standup because the team stands. Standing keeps it short. Sitting turns it into an hour-long status meeting.

Each person answers three questions:

The Three Standup Questions
1. What did I do since the last standup?
2. What am I doing today?
3. What's blocking me?

Standup is not a status report for stakeholders. It's a coordination moment for the team. Listen for blockers, conflicts, and misalignment. Resolve them right after — that follow-up conversation is called the parking lot.

Bad Standup 45 minutes. Every engineer reads their entire ticket list. Stakeholders interrupt. Nobody writes down blockers. Same blockers come back tomorrow.
Good Standup 12 minutes. Each person hits the three questions briefly. Blockers noted. You grab the two people who need to coordinate, right after. Room clears.

Your job in standup

  • Keep it moving
  • Write down every blocker
  • Note when two people need a parking-lot conversation
  • Notice when a critical ticket isn't mentioned
  • Notice when the same blocker comes up three days running — that's an escalation
Interview Line

"In standup I'd keep the team focused on the three questions, capture blockers in real time, follow up immediately on cross-team coordination, and escalate persistent blockers before they put the sprint at risk."

§ 08 — Ceremonies

Backlog Refinement (a.k.a. Grooming)

Older teams say grooming. Current Scrum says Product Backlog Refinement. Same meeting: messy future work gets cleaned up before the team is asked to build it.

A vague backlog item like "Build homepage" isn't ready. It's too big. It hides too many pieces.

Splitting "Build homepage"
→ Create homepage hero section
→ Connect hero to CMS
→ Build promo card module
→ Add analytics on hero CTA
→ Build mobile layout
→ Handle empty state if no promo content
→ QA across desktop, tablet, mobile

Questions to ask in grooming

  • Is this ticket clear?
  • Is it too big?
  • Can it be split?
  • What design file should engineering use?
  • What does "done" mean?
  • What are the edge cases?
  • Are there API or CMS dependencies?
  • Does QA know what to test?
  • Is this ready for sprint planning?

Grooming isn't where you solve everything live. It's where you find the confusion early, before it blows up mid-sprint.

Interview Line

"In refinement, I'd turn large or vague work into clear, prioritized, estimable tickets — partnering with design and engineering to clarify scope, identify dependencies, and confirm readiness before a sprint commitment."

§ 09 — Ceremonies

Sprint Demo / Sprint Review

Teams call it the demo. Scrum officially calls it the Sprint Review. Same meeting: the team shows what was built and gathers feedback.

A demo isn't show-and-tell. It's a feedback and alignment meeting. The team shows what's done, what isn't, gathers reactions, and turns feedback into new backlog items.

Bad Demo "Here's the page. Thoughts?"
Good Demo "Our goal was to complete the CMS-driven article page. We finished desktop and mobile layout, CMS integration, and basic analytics. Related articles is in progress as a fast follow. Today we want feedback on content display, CTA behavior, and mobile stacking."

How to prep a demo

  • What was the sprint goal?
  • What got completed?
  • What didn't?
  • What decisions are needed?
  • What feedback do we want?
  • What should become a new ticket?
  • What risks should the room know?
Interview Line

"For demos, I'd make sure the team is showing real working progress against the sprint goal — not static screens. I'd capture feedback, separate must-have from nice-to-have, and convert follow-ups into prioritized backlog items."

§ 10 — Ceremonies

The Retrospective

The retrospective (or "retro") happens at the end of each sprint. The team looks back and asks: what should we do differently?

Teams that run good retros get better every sprint. Teams that skip retros repeat the same mistakes forever.

The Simplest Retro Format
What went well? Keep doing it.
What didn't go well? What hurt the team.
What should change? Concrete actions for next sprint.

Other formats: Start / Stop / Continue, Mad / Sad / Glad, Sailboat (winds, anchors, rocks). All surface the same thing — keep, drop, try.

What makes a retro useful

  • It produces action items with owners, not vague vibes
  • Last retro's actions get reviewed at the start of this one
  • People feel safe being honest
  • It stays focused on process, not individuals
Bad Retro "Sprint was fine. Anyone want to bring something up?" Silence. Meeting ends in 8 minutes. Nothing changes.
Good Retro Team identifies grooming ran long because tickets weren't pre-read. Action: PM circulates the grooming list 24 hours before. Owner: PM. Reviewed next retro.

As a producer, you have an edge here. Retros are continuous improvement — same instinct as a campaign post-mortem. Name what happened, name what to change, assign it, track it.

Interview Line

"I'd run retros to produce real change, not just vent. Every retro ends with two or three action items, each with an owner, and the next retro starts by reviewing whether those actions got done."

· · ·
§ 11 — Work Items

Tickets: Epics, Stories, Tasks & Bugs

First, what's a "ticket"?

A ticket is one unit of work, tracked in Jira (or whatever tool the team uses). Every story, task, bug, and sub-task is a ticket. People say "ticket" and "issue" interchangeably — they mean the same thing.

The work itself nests in a hierarchy:

Initiative — rare, used at larger orgs for multi-quarter programs
↳ Epic — spans multiple sprints
 ↳ Story · Task · Bug — fits inside one sprint
  ↳ Sub-task — a smaller piece inside the above
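You won't write code in this job, but seeing the nesting as data can make it stick. A toy Python sketch, with ticket titles borrowed from the homepage example in this guide:

```python
# The work-item hierarchy as nested data (illustrative only).
epic = {
    "type": "Epic",
    "title": "Launch the redesigned homepage",
    "children": [
        {"type": "Story", "title": "Featured products grid",
         "children": [{"type": "Sub-task", "title": "Product card component"}]},
        {"type": "Task", "title": "Set up homepage routing"},
        {"type": "Bug", "title": "Hero image fails on slow connections"},
    ],
}

def count_items(node):
    """Count this item plus everything nested under it."""
    return 1 + sum(count_items(c) for c in node.get("children", []))

print(count_items(epic))  # → 5
```

One epic, one story with a sub-task, one task, one bug: five tickets, one hierarchy.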

Epic vs Story

The trap most new PMs fall into. Hold it this way:

Epic
Big initiative. Too big for one sprint. Groups related stories. Answers: "What big thing are we doing?"
Story
One unit of user value. Fits in one sprint. Answers: "What specific value are we delivering to a user?"

An Epic contains stories. A Story can have sub-tasks. The rule of thumb:

  • If a "story" needs three sprints to finish — it's actually an epic. Break it down.
  • If an "epic" is one screen of work — it's actually a story.
  • If you can't write acceptance criteria for it, it's still an epic.

Worked Example: One Epic, Cascading Down

Take a homepage redesign. Here's the epic goal:

The Epic

Launch the redesigned homepage.

The stories that build into it — each one is user-facing value that fits in a sprint:

Stories Under the Epic
Story As a visitor, I want to see featured products on the homepage, so I can discover what's new.
Story As a visitor, I want to sign up for the newsletter, so I can get updates.
Story As a visitor, I want to navigate to category pages from the homepage, so I can browse what interests me.
Story As a visitor on mobile, I want a navigation menu sized for my thumb, so I can browse one-handed.

Then the supporting work — tasks, sub-tasks, and bugs that make those stories possible:

Supporting Work
Task Configure CMS fields for homepage hero.
Task Set up homepage routing.
Sub-task Build the featured product card component.
Sub-task Build the newsletter signup form component.
Bug Hero image fails to load on slow connections.

That's the whole picture. One epic on top, stories below it, supporting work at the bottom. The epic is the goal. The stories are the value. The tasks and sub-tasks are how it gets built.

The Four Issue Types, Cleanly

Story
Describes user value. Written from a user's POV.
e.g. "As a shopper, I want to filter products by size, so I can find items that fit me."
Task
Work the team needs to do. Not always user-facing.
e.g. "Configure CMS fields for article author, publish date, hero image, body content."
Bug
Something broken or behaving incorrectly.
e.g. "Size filter doesn't clear after the user taps Reset."
Sub-task
A smaller piece inside a story, task, or bug. Used when one ticket has discrete chunks that get assigned separately.

Memory Trick

  • Epic = the big thing (multi-sprint)
  • Story = user value (fits in a sprint)
  • Task = team work (fits in a sprint)
  • Bug = broken thing
  • Sub-task = a piece inside the above
Interview Line

"Work items nest in a hierarchy. Epics group stories under a shared goal across multiple sprints. Stories deliver one unit of user value in a sprint. Tasks are team work that supports stories. Bugs are defects. Sub-tasks break complex tickets into assignable pieces."

§ 12 — Craft

How to Write a User Story

The format: "As a [user], I want [thing], so that [benefit]." The sentence alone isn't enough. A real story needs enough detail for the team to build and test it.

Weak "As a user, I want search."
Stronger "As a shopper, I want to search for products by keyword, so I can quickly find items I'm interested in buying."

Even that isn't enough on its own. A ready story includes acceptance criteria, design link, dependencies, out-of-scope notes, edge cases, and QA notes.

Ready-For-Engineering Story
Title Add keyword search to product listing page
Story As a shopper, I want to search for products by keyword, so I can quickly find items I'm interested in buying.
Acceptance Criteria User enters a keyword. Product grid updates after submit. Search term stays visible after results load. User can clear search. Empty state appears if no results. Works on desktop and mobile.
Dependencies Search API supports keyword query. Design provides empty state.
Out of Scope Typeahead suggestions. Personalized search ranking.
§ 13 — Craft

Acceptance Criteria

Acceptance criteria are the rules that define when a ticket is done. They tell engineering what to build, QA what to test, design and product what to accept.

This is where you provide real value.

Bad AC "Search works."
Good AC User types a term · submits the search · sees results update · can clear the search · sees a no-results state · sees a loading state · sees an error state if the request fails.

This is how you make work less ambiguous. It's also how you make engineers like you.

· · ·
§ 14 — Tech Vocabulary

APIs

API = Application Programming Interface. How one software system asks another for information or asks it to do something. Software talking to software.

Website "Is this jacket available in size medium?"
Inventory API "Yes — Store A." or "No, unavailable."
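If you're curious what that exchange looks like underneath, here's a toy sketch. Real inventory APIs are HTTP services with documented endpoints; the function, field names, and data below are all invented for illustration.

```python
# A toy version of the website/inventory-API exchange above.
# Everything here (data shape, field names) is invented for illustration.

def check_availability(inventory, sku, size):
    """The 'API call': which stores have this product in this size?"""
    stores = [
        store for store, stock in inventory.items()
        if stock.get((sku, size), 0) > 0
    ]
    return {"available": bool(stores), "stores": stores}

# The website's question: "Is this jacket available in size medium?"
inventory = {
    "Store A": {("jacket-01", "M"): 3},
    "Store B": {("jacket-01", "M"): 0},
}
print(check_availability(inventory, "jacket-01", "M"))
# → {'available': True, 'stores': ['Store A']}
```

The answer the website gets back is structured data, not a page. The frontend still has to decide what to show for each possible answer: available, unavailable, slow, or error.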

You don't code APIs. You need to know when one matters.

API questions to ask

  • Does this feature need data from another system?
  • Which system owns the data?
  • Does the API exist yet?
  • Is it documented?
  • What data does it return?
  • What happens if it's slow?
  • What happens if it fails?
  • What loading, empty, and error states do we need?
  • Is engineering blocked waiting on it?
  • Does QA have test data?

Example: the design shows store pickup availability. That screen only works if an inventory API can answer which stores have the product. Your job isn't to say "build the pickup module." Your job is to ask "do we have the inventory API ready, and what states do we need to design and test?"

§ 15 — Tech Vocabulary

CMS & Headless CMS

CMS = Content Management System. The tool non-engineers use to update content without asking developers to touch code.

What lives in a CMS: homepage hero copy, banners, blog articles, product descriptions, SEO titles, images, CTA labels, legal disclaimers, localized content.

Engineers build templates and components. Content teams fill them. The website pulls from the CMS and displays it.

Headless CMS

A headless CMS separates where content is managed from where it's displayed. The CMS stores content; the website, app, email, or kiosk decides how to display it. Same content, many channels.

CMS terms to know

Content type
The structure for a kind of content. e.g. Article, Promo Banner.
Field
A piece of content inside a content type. e.g. title, image, CTA link.
Entry
One actual piece of content created from a content type.
Asset
An uploaded image, video, PDF, or file.
Preview
See content before publishing.
Locale
A language or region version of content.
Validation
Rules that prevent bad content. e.g. CTA link is required.
Fallback
What appears if content is missing.
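To make the terms concrete, here's a toy sketch of a content type with validation and a fallback, in plain Python. The field names and rules are invented; every real CMS platform has its own syntax for this.

```python
# A content type: the structure. Validation: rules that prevent bad content.
# Names and rules below are invented for illustration.
promo_banner_type = {
    "fields": ["title", "image", "cta_link"],
    "required": ["title", "cta_link"],
}

def validate(entry, content_type):
    """Return the required fields this entry is missing."""
    return [f for f in content_type["required"] if not entry.get(f)]

def render_title(entry, fallback="Shop the collection"):
    """Fallback: what appears if content is missing."""
    return entry.get("title") or fallback

# One entry created from that content type, with an empty title.
entry = {"title": "", "cta_link": "/sale"}
print(validate(entry, promo_banner_type))  # → ['title']
print(render_title(entry))                 # → Shop the collection
```

This is why "what happens if a field is empty?" is on your question list: somebody has to decide between validation (block the publish) and fallback (show something safe).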

CMS questions to ask

  • What content needs to be managed in the CMS?
  • Who authors it?
  • What fields are needed?
  • Which fields are required?
  • Does it support preview?
  • Does content need approval before publishing?
  • Does content vary by country or language?
  • What happens if a field is empty?
  • Does the design match what the CMS can actually provide?
§ 16 — Tech Vocabulary

Localization & Internationalization

Localization means adapting content or a product for a specific language, region, or market. It's not just translation.

Localization covers: language, currency, date format, measurement units, spelling, legal copy, images, cultural references, product availability, regional promos.

Same idea, different locale
US: "Color," dollars, MM/DD/YYYY
UK: "Colour," pounds, DD/MM/YYYY
Canada: English and French may both be needed
Germany: German, euros, different privacy and legal copy
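Under the hood, much of that table is lookup logic. A simplified sketch; real products lean on i18n libraries rather than hand-rolled dicts like this one:

```python
from datetime import date

# A toy locale table based on the examples above (heavily simplified).
LOCALES = {
    "en-US": {"date": "%m/%d/%Y", "currency": "$", "color": "Color"},
    "en-GB": {"date": "%d/%m/%Y", "currency": "£", "color": "Colour"},
}

def localize_date(d, locale):
    return d.strftime(LOCALES[locale]["date"])

def localize_price(amount, locale):
    return f"{LOCALES[locale]['currency']}{amount:.2f}"

d = date(2024, 3, 14)
print(localize_date(d, "en-US"))   # → 03/14/2024
print(localize_date(d, "en-GB"))   # → 14/03/2024
print(localize_price(19.5, "en-GB"))  # → £19.50
```

Internationalization is building the lookup machinery once. Localization is filling in each row of the table.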

Internationalization (i18n)

The technical setup that makes localization possible. Shortened to i18n: the 18 counts the letters between the i and the n. Localization is shortened to l10n the same way.

  • Internationalization = build the product so it can support multiple countries and languages.
  • Localization = create the actual country or language version.

You don't implement this. But it affects timeline. German text runs longer than English. Arabic reads right-to-left. Some markets need different legal language. Some images don't translate culturally.

Localization questions to ask

  • Which locales are in scope?
  • Are translations ready?
  • Does design support longer text?
  • Do we need right-to-left support?
  • Does CMS content vary by locale?
  • Are URLs localized?
  • Are legal or privacy rules different by market?
  • Who approves translated content?
· · ·
§ 17 — Estimation

Fibonacci Scoring & Story Points

Fibonacci scoring uses 1, 2, 3, 5, 8, 13, 21 to estimate the relative size or complexity of work.

Critical: points are not hours. A 5-point ticket isn't 5 hours. It's bigger or riskier than a 3, smaller than an 8.

Story points consider

  • Amount of work
  • Technical complexity
  • Uncertainty
  • Risk
  • Dependencies
  • Testing effort
  • Cross-team coordination
Rough sense of scale
1 point: Update button copy.
2 points: Add a new CMS field and display it.
3 points: Build a simple reusable banner using existing components.
5 points: Build a new product card variation with responsive behavior.
8 points: Build a new checkout step with API integration.
13 points: Large, risky — needs to be split.
21 points: Too big, unclear, or not ready.

Why Fibonacci? Gaps widen as work gets more uncertain. The difference between 1 and 2 is small. The difference between 8 and 13 is much bigger. That forces the team to admit when work isn't just "a little bigger" but meaningfully more complex.
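The widening gaps are plain arithmetic:

```python
# Consecutive differences in the Fibonacci point scale.
scale = [1, 2, 3, 5, 8, 13, 21]
gaps = [b - a for a, b in zip(scale, scale[1:])]
print(gaps)  # → [1, 1, 2, 3, 5, 8]
```

Bumping a ticket from 1 to 2 costs one point. Bumping it from 8 to 13 costs five. There's no way to nudge an estimate up "just a little" at the big end of the scale, which is the point.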

§ 18 — Estimation

Estimation Exercises

Estimation is when the team reviews upcoming work and sizes each item. Engineering must be there — they understand the build effort.

You facilitate, not dominate.

Designer "This is just a simple carousel."
Engineer "Actually, it's an 8 — CMS fields, keyboard navigation, analytics, responsive behavior, custom animation."
You (this is where you earn the job) "What's driving the 8? Can we split the CMS setup from the frontend? Is there a simpler V1? What would make this a 5? What moves to fast follow?"

Listen for

  • Big estimates
  • Split estimates (one engineer says 3, another says 8)
  • Confusion
  • Missing designs
  • Unknown API behavior
  • Missing CMS model
  • Unclear acceptance criteria
  • QA complexity
  • Dependency on another team

If two engineers disagree — a 3 versus an 8 — that's not a problem. That's useful. It means the team doesn't yet share an understanding. The question that saves projects:

"What are you each assuming?"
§ 19 — Strategy

MVP & Fast Follow

MVP — Minimum Viable Product

MVP = Minimum Viable Product. The smallest version of a feature that's actually useful to a user. Not "the cheapest thing we can ship." Not "a broken prototype." Useful.

MVP
The smallest version that delivers real user value and can be shipped, used, and measured.
Not an MVP "A search field that doesn't return results yet." That's broken, not minimum-viable.
A real MVP "Keyword search that returns results, with a basic empty state. No typeahead. No personalization." It works. It's useful. The fancy stuff comes later.

When the room says "we need everything in V1," push for the MVP cut. What's the smallest thing that lets a real user do the thing? Everything else becomes fast follow.

Fast Follow

A fast follow is work the team does shortly after the first release. Not random leftover work. Intentionally deferred scope.

MVP + Fast Follow, in sequence
MVP (Release 1): Users can search products by keyword. Empty state and error state included.
Fast follow (Release 1.1): Search suggestions, typo correction, recent searches.
Fast follow (Release 1.2): Personalized search ranking.

Fast follows are useful when the deadline is fixed and the scope is too big. Use the phrase carefully. Every fast follow needs a ticket, an owner, a priority, a target release, a reason it was deferred, and a clear connection to the first release.

Bad "We'll do that later."
Good "Personalized recommendations are out of scope for launch. We'll backlog them as a fast follow for Release 1.1 because they need additional API and analytics work."
§ 20 — Strategy

Talking About Deadlines

Don't ask: "Can you just get it done?" That's how teams produce bad code, burned-out engineers, and fake commitments.

Ask instead

  • Given the deadline, what's the safest version we can ship?
  • What scope can we confidently deliver?
  • What would you recommend for MVP?
  • What has to be in for launch?
  • What can move to fast follow?
  • What are the risks if we keep this in scope?
  • What would we cut to hit the date?
  • Are we blocked by design, CMS, API, QA, or approvals?
  • Do we need more people, less scope, or more time?

The real levers

  • Reduce scope
  • Add time
  • Add resources
  • Reduce quality (usually dangerous)
  • Change the release strategy

Don't let teams pretend every lever can stay fixed at once. If deadline and resources are fixed, scope has to flex.

Interview Line

"When deadlines are tight, I wouldn't pressure engineering into unrealistic commitments. I'd ask what version can ship safely, what risks need escalating, and what should be backlogged as a fast follow."

§ 21 — Planning

Backlog vs Roadmap vs Sprint

Three different things. Don't confuse them.

Roadmap
The bigger plan. Where the product is going over time.
Backlog
The ordered list of work that supports the roadmap.
Sprint
The short period where the team commits to a specific slice of backlog work.
Worked Example
Roadmap goal: Improve product discovery in Q3.
Backlog items: Add search, filters, sorting, recommendations, category pages.
Sprint work: Build keyword search and no-results state this sprint.

What a Healthy Backlog Looks Like

  • Clear priorities
  • Updated statuses
  • Owners
  • Estimates
  • Acceptance criteria
  • Dependencies
  • Design links
  • Release labels
  • Blocked items flagged clearly
  • Old or irrelevant items removed

A messy backlog is where projects go to die quietly.

§ 22 — Tools

Jira & the Kanban Board

Jira tracks every ticket through its lifecycle. The team's Kanban board is the visual version — tickets shown as cards, moving across columns by status.

This is what you stare at every day.

An Actual Board

Below is a snapshot of what the homepage epic looks like mid-sprint. Each card is one ticket. Each column is one status. Cards move left to right as work progresses.

To Do (3)
Story · Newsletter signup form · 5 pts
Task · Configure CMS fields for hero · 3 pts
Story · Category navigation links · 3 pts
In Progress (1)
Story · Featured products grid · 8 pts
In Review (1)
Task · Set up homepage analytics events · 2 pts
In QA (1)
Story · Mobile navigation menu · 3 pts
Done (2)
Bug · Fix logo alignment on small screens · 1 pt
Task · Set up homepage routing · 2 pts

Sprint 4 · Homepage Redesign Epic

Read it like a story. Three tickets are ready to start. One engineer is actively building the featured products grid. One ticket is being code-reviewed. One is in QA. Two finished earlier this sprint.

Typical Ticket Lifecycle

The path a ticket walks
To Do → ready, not yet started
In Progress → an engineer is actively building it
In Review → code is up for peer review in a pull request (a "PR")
In QA → merged code is being tested
Done → meets Definition of Done, ready for release or already released
Some teams add Blocked, Ready for Release, Deployed.
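The lifecycle is really a set of allowed moves. A toy sketch using the column names from this section; real Jira workflows are configurable per team, so treat the transitions below as an example, not a standard:

```python
# Which statuses a ticket can move to from each status (illustrative).
WORKFLOW = {
    "To Do": ["In Progress"],
    "In Progress": ["In Review", "Blocked"],
    "In Review": ["In QA", "In Progress"],   # review can bounce work back
    "In QA": ["Done", "In Progress"],        # so can QA
    "Blocked": ["In Progress"],
    "Done": [],
}

def move(ticket, new_status):
    """Advance a ticket, refusing moves the workflow doesn't allow."""
    if new_status not in WORKFLOW[ticket["status"]]:
        raise ValueError(f"Can't move from {ticket['status']} to {new_status}")
    ticket["status"] = new_status
    return ticket

t = {"key": "HOME-42", "status": "To Do"}
move(t, "In Progress")
print(t["status"])  # → In Progress
```

Notice the backward arrows: review and QA can both send a ticket back to In Progress. That's normal, and it's why "in review" doesn't mean "done next week."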

A strong Jira ticket has

  • Clear title
  • User story or task description
  • Acceptance criteria
  • Design link
  • Priority and estimate
  • Owner
  • Dependencies and out-of-scope notes
  • QA notes · CMS notes
  • Release label · Status
Bad title "Homepage stuff."
Good title "Build CMS-driven homepage hero with headline, image, CTA, and mobile layout."

One Feature, Many Tickets

Say the design shows a homepage promo banner. Don't create one ticket called "Build promo banner." You probably need:

→ Define CMS content model for promo banner
→ Build frontend promo banner component
→ Connect promo banner to CMS
→ Add validation for required fields
→ Add analytics on CTA click
→ QA across breakpoints
→ Build fallback state if content is missing

That's how a PM thinks through the delivery chain.

§ 23 — Standards

Definition of Ready & Definition of Done

Teams that ship well have two written checklists: one for when a ticket is ready to enter a sprint, one for when a ticket is complete. Called Definition of Ready (DoR) and Definition of Done (DoD).

Definition of Ready

A ticket is ready when the team has everything they need to start — no surprises mid-sprint.

  • Description is clear
  • Acceptance criteria are written
  • Designs are approved and linked
  • Dependencies are identified (API, CMS, content)
  • Out-of-scope is written
  • Edge cases are noted
  • The ticket is small enough to estimate confidently
  • Engineering has had a chance to ask questions

If a ticket isn't ready, it doesn't enter the sprint. "This isn't ready yet — we need design sign-off and an API spec before we commit."

Definition of Done

"Done" does not mean "an engineer pushed code." Done usually means:

  • Work matches approved designs
  • Acceptance criteria are met
  • Code is reviewed (PR approved)
  • QA has tested it
  • Bugs are resolved or accepted
  • CMS content works (if needed)
  • It's ready for release or already released

Ask every team: "What's our Definition of Done?" Simple question. Prevents chaos.

Interview Line

"I'd push every team to write down their Definition of Ready and Definition of Done. It's the cheapest insurance against half-finished work, surprise dependencies, and shipping things that don't actually meet the spec."

§ 24 — Standards

Risk Management

The job description mentions proactively identifying risks and issues. That's a major PM skill.

Risk
Something that might become a problem.
Issue
Something that already is a problem.

Common risks

  • Designs not approved
  • API not ready
  • CMS model unclear
  • Feedback arriving late
  • Scope expanding mid-sprint (this is called scope creep)
  • QA environment unstable
  • Dependencies on another team
  • Translations delayed
  • Legal approval needed

The PM move: write the risk clearly, name the impact, assign an owner, create a mitigation plan.

Worked Risk Entry
Risk Localization copy may not be ready before QA.
Impact German and French pages may miss release.
Owner Content team.
Mitigation Use English fallback for internal QA by Friday. Require final translations by next Tuesday. Move non-critical locale updates to fast follow if needed.
· · ·
§ 25 — Quick Reference

The Rest of the Vocabulary

Terms you'll hear in standups, Slack threads, and interview rooms. Each one is a sentence so it doesn't catch you off guard. Skim once. Re-skim the morning of the interview.

Team & Process

Ticket
One unit of work in Jira. Story, task, bug, or sub-task. People say "ticket" and "issue" interchangeably.
Kanban Board
The visual layout of tickets as cards moving across columns by status. The team's daily view of work.
Velocity
Average story points a team completes per sprint. Used for forecasting. Not a productivity metric — don't weaponize it.
Capacity
How much work the team can realistically take in a sprint, given vacations, holidays, on-call, and meetings. Velocity adjusted for availability.
Blocker
Anything stopping a ticket from moving forward. Also called an "impediment." Needs an owner and an ETA — or it gets escalated.
Scope Creep
New requirements quietly added to an in-flight sprint or release without trading something out. Catch it and force the trade-off conversation.
Tech Debt
Shortcuts engineers took to ship fast. Slows down future work. Treat it as legitimate backlog work.
Parking Lot
A topic that comes up in standup that needs a real conversation. "Parking-lot" it — discuss after, with only the people involved.
Burndown Chart
A graph showing how much sprint work is left over time. Going down = good. Flat = trouble. Spike up = scope was added mid-sprint.
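Velocity and capacity, the two forecasting terms above, are just arithmetic. A sketch with invented numbers:

```python
# Velocity: average points completed over recent sprints.
completed = [18, 22, 20]              # last three sprints (invented numbers)
velocity = sum(completed) / len(completed)

# Capacity: velocity adjusted for who's actually around this sprint.
availability = 0.8                    # e.g. one engineer out for a week
capacity = velocity * availability

print(velocity, capacity)  # → 20.0 16.0
```

So if the team averages 20 points and a fifth of the team is on vacation, planning 20 points anyway is how sprints fail on day one.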

Code & Release

PR / Pull Request
An engineer's proposed code change, opened for review before being merged. You don't review PRs. You just need to know they're a step toward "done."
Code Review
The process where other engineers read and approve a PR. Adds time to "done" — factor it into estimates.
Merge
When approved code gets combined into the main branch. After merge, work moves toward QA / staging.
Staging
A copy of the live website where the team tests features before releasing them to real users. Looks like production. Isn't.
Production
The live website. Real users. Real consequences. "Shipping to production" = going live.
Hotfix
An emergency fix pushed to production outside the normal release cycle, usually for a critical bug or outage.
Rollback
Undoing a release because something broke. The faster the team can roll back, the safer they can ship.
Feature Flag
A toggle in the code that turns a feature on or off without re-releasing. Useful for shipping safely and rolling out to a subset of users.
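A feature flag is simpler than it sounds. A toy sketch, with the flag and flow names invented:

```python
# A feature flag: a toggle checked at runtime, flipped without a release.
FLAGS = {"new_checkout": False}

def checkout_page():
    """Serve whichever checkout flow the flag currently selects."""
    return "new checkout flow" if FLAGS.get("new_checkout") else "old checkout flow"

print(checkout_page())        # → old checkout flow
FLAGS["new_checkout"] = True  # flip the flag; no code re-release needed
print(checkout_page())        # → new checkout flow
```

In real systems the flag lives in a config service and can target a percentage of users, which is what makes gradual rollouts and instant kill switches possible.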

Testing & Quality

QA
Quality Assurance. The team or process that tests the work against acceptance criteria before it ships.
Regression
When a new change accidentally breaks something that used to work. QA runs "regression testing" to catch these.
Smoke Test
A quick check that the most critical paths work after a release. "Is the site up? Can people log in? Can people check out?"
UAT
User Acceptance Testing. The final check — usually by the client or business stakeholder — that confirms work meets requirements before launch.
A/B Test
Shipping two versions of something to different user groups to see which performs better.

Engineering Mindset

Component
A reusable piece of UI — a button, card, banner — defined once, used in many places.
Design System
The library of reusable components, tokens, and patterns. Changes to design-system components ripple across every place they're used, so a "small" change can touch the whole site.
Token
A named value used across the design system — a color, spacing unit, font size. "Update the primary color token" cascades everywhere.
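Here's why a token change "cascades everywhere," in miniature. The token names, values, and the button function below are invented for the example; a real design system is far bigger, but the mechanic is the same:

```typescript
// Illustrative design tokens: named values defined once.
const tokens = {
  colorPrimary: "#0055ff",
  spacingMd: "16px",
  fontSizeBody: "14px",
};

// A "component" is markup defined once and reused everywhere.
// Changing colorPrimary above cascades into every button this
// function ever renders. No per-page edits required.
function renderButton(label: string): string {
  return `<button style="background:${tokens.colorPrimary};` +
    `padding:${tokens.spacingMd};font-size:${tokens.fontSizeBody}">` +
    `${label}</button>`;
}
```

That cascade is the trade-off to remember: token changes are cheap to make and expensive to QA, because everything downstream needs checking.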
Spike
A time-boxed research ticket. "Spend two days figuring out if this API can support our use case." Not building — investigating.
Refactor
Rewriting existing code to make it cleaner without changing what it does. Engineers often want to refactor. Sometimes it's worth it; sometimes it's procrastination dressed as work.
Edge Case
An unusual scenario the design or code might not handle — empty states, very long text, slow connections, errors.
Happy Path
The expected, everything-works scenario. Designs always show the happy path. Engineering has to handle everything else too.
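One function makes the distinction concrete. A hypothetical headline formatter for a hero banner (the fallback text and the 60-character limit are invented for the example):

```typescript
// One happy path, two edge cases.
function heroHeadline(raw: string | null): string {
  // Edge case: empty state. Fall back instead of rendering a blank hero.
  if (!raw || raw.trim() === "") return "Welcome";
  const text = raw.trim();
  // Edge case: very long text. Truncate so the layout doesn't break.
  if (text.length > 60) return text.slice(0, 57) + "...";
  // Happy path: content is present and a sensible length.
  return text;
}
```

Notice the ratio: one line of happy path, and everything else is edge cases. That's why "the design is done" rarely means "the work is done."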
Interview Move

If a term comes up you don't know — say so. "I haven't worked with that specifically — can you tell me how your team uses it?" Stronger than bluffing. Strong PMs ask. Weak PMs nod.

· · ·
§ 26 — The One Line

The Golden Sentence

If you remember nothing else, remember this. Practice it out loud until it feels natural.

"Can we separate what is required for launch from what can move to fast follow?"

It protects the deadline. It protects the engineers. It protects the product. It protects the client relationship. And it makes you sound like someone who understands delivery.

§ 27 — Drill

Practice Exercise

Here's a fake feature. Practice breaking it down out loud, like you would in the interview.

The Prompt

"The client wants a new homepage hero that changes by country, pulls content from the CMS, supports English and Spanish, includes an image, headline, body copy, CTA, analytics — and launches in two weeks."

A Strong Answer Sounds Like

"I'd first confirm what's truly needed for Release 1. I'd create stories for the CMS content model, frontend hero component, localization support, analytics tracking, and QA. I'd ask engineering whether the CMS and localization setup already exists or needs new work. I'd ask design for all responsive states and content fallbacks. If the two-week deadline is fixed, I'd ask what can safely ship first and what should be backlogged as a fast follow."

That answer hits scope, dependencies, risk, deadline, and trade-offs — without pretending to know engineering.

§ 28 — Pocket Reference

One-Page Cheat Sheet

The whole thing, distilled. Bookmark this. Re-read the night before the interview.

Scrum

Not an acronym. A framework for delivering complex work in short cycles.

Agile

The mindset behind Scrum. Smaller releases, faster learning, adapt to change — but still disciplined.

Sprint

A fixed work period where the team commits to a realistic goal.

Sprint Planning

Start-of-sprint meeting. Decides what the team will work on.

Daily Standup

15-minute daily check-in. Three questions: done since last, doing today, blocked by what.

Backlog Refinement

Cleans up future work so it's clear, small, and estimable. Also called grooming.

Sprint Demo / Review

End-of-sprint meeting. Team shows completed work, gets feedback, captures follow-ups.

Retrospective

End-of-sprint meeting. What went well, what didn't, what to change. Produces action items.

Ticket

One unit of work in Jira. Every story, task, bug, or sub-task is a ticket.

Kanban Board

Visual layout of tickets as cards moving across status columns.

Epic

Big initiative. Multi-sprint. Groups related stories.

Story

One unit of user value. Fits in one sprint. Written from user POV.

Task

Team work. Fits in one sprint. Not user-facing.

Bug

Something broken or behaving incorrectly.

Sub-task

A piece of a story, task, or bug.

Acceptance Criteria

Rules that define when a ticket is done. Tells engineering what to build, QA what to test.

API

How software systems talk to each other.

CMS

Lets non-engineers manage and publish content.

Headless CMS

CMS that stores content; the website, app, or email decides how to display it.

Localization (l10n)

Adapts content and experiences for specific languages, regions, markets.

Internationalization (i18n)

The technical setup that makes localization possible.

Fibonacci Scoring

1, 2, 3, 5, 8, 13, 21 — relative complexity, not exact time.

Velocity

Average points per sprint. Used for forecasting, not judging people.

Capacity

What the team can realistically take this sprint, given vacations and meetings.

MVP

Minimum Viable Product. Smallest version that delivers real user value.

Fast Follow

Planned work that comes shortly after launch. Not forgotten work.

Definition of Ready

Checklist for whether a ticket is ready to enter a sprint.

Definition of Done

Checklist for whether a ticket is truly complete — not just "code pushed."

Blocker

Anything stopping work. Needs an owner and an ETA, or it gets escalated.

Scope Creep

New requirements quietly added mid-sprint without trading something out.

Tech Debt

Shortcuts engineers took to ship fast. Real backlog work.

PR · Code Review · Merge

Proposed change → reviewed by peers → combined into main branch.

Staging vs Production

Staging = test environment. Production = live, real users.

Hotfix · Rollback

Hotfix = emergency fix to production. Rollback = undo a release because something broke.

QA · Regression · UAT · Smoke Test

QA tests against AC. Regression catches what broke. UAT is final business sign-off. Smoke test is a quick "basics still work" check after release.

Component · Token · Design System

Components = reusable UI parts. Tokens = named values. Design system = the library that holds both.

Spike · Refactor · Edge Case · Happy Path

Spike = research. Refactor = clean code. Edge case = unusual. Happy path = expected.

Deadlines

Strong PMs ask what can ship safely, what risks exist, what moves to next release. They don't ask engineers to magically hit a date.

The Golden Sentence

"Can we separate what is required for launch from what can move to fast follow?"

When you don't know a term

"I haven't worked with that — can you tell me how your team uses it?" Strong PMs ask. Weak PMs nod.

Go get the job.