The Big Picture
This is a delivery leadership role. You help a cross-functional team move from idea to shipped product. That team usually includes designers, frontend and backend engineers, QA testers, technical directors, producers, CMS authors, and client-side decision-makers.
You don't write code. You ask: "What has to happen for this to ship?" Then you break the answer into tickets, timelines, owners, risks, dependencies, and next steps.
Your job: turn messy work into organized work. Same instinct as producing campaigns. The vocabulary changes; the job doesn't.
What Scrum Is
Scrum is not an acronym. Don't write it "SCRUM." It's named after the rugby formation where a team moves together in tight coordination.
Scrum organizes complex work. Instead of planning every detail upfront, the team works in short cycles, learns as it goes, and improves the product.
The rhythm: backlog → planning → sprint → daily standups → demo → retrospective → next sprint. Scrum gives the team a cadence, so everyone knows what's being worked on, what's blocked, what's next.
Scrum is not the same as Agile. Agile is the mindset. Scrum is one framework for practicing it.
"Scrum is a delivery framework that helps cross-functional teams plan, build, review, and improve work in short cycles. I'd use the ceremonies to keep design, engineering, production, and stakeholders aligned."
What Agile Is
Agile favors smaller releases, faster learning, and adapting to change. Instead of perfecting a product on paper, the team ships useful pieces, gets feedback, and improves.
Agile does not mean "move fast with no process." That's nonsense in a blazer. Agile is flexible and disciplined. Work still needs priorities, owners, estimates, acceptance criteria, and QA.
For you: don't treat the plan like stone tablets from Mount Sinai. When new information lands, adjust without losing control.
The Roles
Official Scrum names three roles: Product Owner, Scrum Master, Developers. The Product Owner orders the backlog. Developers turn that backlog into shipped work each sprint. The Scrum Master keeps the process healthy.
This job is a blend of product manager, project manager, producer, and sometimes scrum master. Speak like the person who keeps the machine moving — not like an engineer.
- Product Mgr
- What are we building, for whom, and why?
- Project Mgr
- What is the plan, timeline, risk, dependency, and status?
- Producer
- How do we coordinate the people, approvals, assets, meetings, and delivery moments?
- Scrum Master
- Is the team following a healthy process, and what's blocking them?
"I see this role as a bridge between product intent, design direction, engineering delivery, and stakeholder expectations."
The Sprint
A sprint is a fixed period where the team works toward a specific goal. Most sprints are one or two weeks. A new sprint starts the moment the last one ends.
A sprint isn't "a week of work." It's a planned delivery window. The team commits to a scope at the start, builds during it, and reviews at the end.
Example Sprint
Your question is always: "What can the team realistically finish this sprint?" Not what the client wants. Not what leadership dreams about. What can be built, tested, reviewed, and accepted.
Sprint Planning
Sprint Planning is the meeting at the start of the sprint. The team decides what work they're taking on. It's not a calendar exercise. It's a commitment.
You walk in knowing priorities, team capacity, ready tickets, key dependencies, and stakeholder deadlines.
Questions to drive
- What's the sprint goal?
- Which tickets are ready?
- Are designs approved?
- Are APIs ready?
- Are CMS fields defined?
- Are acceptance criteria clear?
- Who is doing what?
- What might block us?
- What are we not taking this sprint?
"In Sprint Planning, I'd make sure the team is planning around priority, capacity, readiness, and risk. I wouldn't let the team commit to work without approved designs, clear acceptance criteria, or resolved dependencies."
The Daily Standup
The standup is the team's 15-minute daily check-in. Called a standup because the team stands. Standing keeps it short. Sitting turns it into an hour-long status meeting.
Each person answers three questions:
The Three Standup Questions
- What did I finish since the last standup?
- What am I working on today?
- What's blocking me?
Standup is not a status report for stakeholders. It's a coordination moment for the team. Listen for blockers, conflicts, and misalignment. Resolve them right after — that follow-up conversation is called the parking lot.
Your job in standup
- Keep it moving
- Write down every blocker
- Note when two people need a parking-lot conversation
- Notice when a critical ticket isn't mentioned
- Notice when the same blocker comes up three days running — that's an escalation
"In standup I'd keep the team focused on the three questions, capture blockers in real time, follow up immediately on cross-team coordination, and escalate persistent blockers before they put the sprint at risk."
Backlog Refinement (a.k.a. Grooming)
Older teams say grooming. Current Scrum says Product Backlog Refinement. Same meeting: messy future work gets cleaned up before the team is asked to build it.
A vague backlog item like "Build homepage" isn't ready. It's too big. It hides too many pieces.
Splitting "Build homepage"
Questions to ask in grooming
- Is this ticket clear?
- Is it too big?
- Can it be split?
- What design file should engineering use?
- What does "done" mean?
- What are the edge cases?
- Are there API or CMS dependencies?
- Does QA know what to test?
- Is this ready for sprint planning?
Grooming isn't where you solve everything live. It's where you find the confusion early, before it blows up mid-sprint.
"In refinement, I'd turn large or vague work into clear, prioritized, estimable tickets — partnering with design and engineering to clarify scope, identify dependencies, and confirm readiness before a sprint commitment."
Sprint Demo / Sprint Review
Teams call it the demo. Scrum officially calls it the Sprint Review. Same meeting: the team shows what was built and gathers feedback.
A demo isn't show-and-tell. It's a feedback and alignment meeting. The team shows what's done, what isn't, gathers reactions, and turns feedback into new backlog items.
How to prep a demo
- What was the sprint goal?
- What got completed?
- What didn't?
- What decisions are needed?
- What feedback do we want?
- What should become a new ticket?
- What risks should the room know?
"For demos, I'd make sure the team is showing real working progress against the sprint goal — not static screens. I'd capture feedback, separate must-have from nice-to-have, and convert follow-ups into prioritized backlog items."
The Retrospective
The retrospective (or "retro") happens at the end of each sprint. The team looks back and asks: what should we do differently?
Teams that run good retros get better every sprint. Teams that skip retros repeat the same mistakes forever.
The Simplest Retro Format
- What went well?
- What didn't go well?
- What should we change?
Other formats: Start / Stop / Continue, Mad / Sad / Glad, Sailboat (winds, anchors, rocks). All surface the same thing — keep, drop, try.
What makes a retro useful
- It produces action items with owners, not vague vibes
- Last retro's actions get reviewed at the start of this one
- People feel safe being honest
- It stays focused on process, not individuals
As a producer, you have an edge here. Retros are continuous improvement — same instinct as a campaign post-mortem. Name what happened, name what to change, assign it, track it.
"I'd run retros to produce real change, not just vent. Every retro ends with two or three action items, each with an owner, and the next retro starts by reviewing whether those actions got done."
Tickets: Epics, Stories, Tasks & Bugs
First, what's a "ticket"?
A ticket is one unit of work, tracked in Jira (or whatever tool the team uses). Every story, task, bug, and sub-task is a ticket. People say "ticket" and "issue" interchangeably — they mean the same thing.
The work itself nests in a hierarchy:
Epic vs Story
The trap most new PMs fall into. Hold it this way:
- Epic
- Big initiative. Too big for one sprint. Groups related stories. Answers: "What big thing are we doing?"
- Story
- One unit of user value. Fits in one sprint. Answers: "What specific value are we delivering to a user?"
An Epic contains stories. A Story can have sub-tasks. The rule of thumb:
- If a "story" needs three sprints to finish — it's actually an epic. Break it down.
- If an "epic" is one screen of work — it's actually a story.
- If you can't write acceptance criteria for it, it's still an epic.
Worked Example: One Epic, Cascading Down
Take a homepage redesign. Here's the epic goal:
The Epic
Launch the redesigned homepage.
The stories that build into it — each one is user-facing value that fits in a sprint:
Stories Under the Epic
Then the supporting work — tasks, sub-tasks, and bugs that make those stories possible:
Supporting Work
That's the whole picture. One epic on top, stories below it, supporting work at the bottom. The epic is the goal. The stories are the value. The tasks and sub-tasks are how it gets built.
The Four Issue Types, Cleanly
- Story
- Describes user value. Written from a user's POV.
- e.g. "As a shopper, I want to filter products by size, so I can find items that fit me."
- Task
- Work the team needs to do. Not always user-facing.
- e.g. "Configure CMS fields for article author, publish date, hero image, body content."
- Bug
- Something broken or behaving incorrectly.
- e.g. "Size filter doesn't clear after the user taps Reset."
- Sub-task
- A smaller piece inside a story, task, or bug. Used when one ticket has discrete chunks that get assigned separately.
Memory Trick
- Epic = the big thing (multi-sprint)
- Story = user value (fits in a sprint)
- Task = team work (fits in a sprint)
- Bug = broken thing
- Sub-task = a piece inside the above
"Work items nest in a hierarchy. Epics group stories under a shared goal across multiple sprints. Stories deliver one unit of user value in a sprint. Tasks are team work that supports stories. Bugs are defects. Sub-tasks break complex tickets into assignable pieces."
How to Write a User Story
The format: "As a [user], I want [thing], so that [benefit]." The sentence alone isn't enough. A ready story also includes acceptance criteria, a design link, dependencies, out-of-scope notes, edge cases, and QA notes: enough detail for the team to build and test it.
Ready-For-Engineering Story
Acceptance Criteria
Acceptance criteria are the rules that define when a ticket is done. They tell engineering what to build, QA what to test, design and product what to accept.
This is where you provide real value.
This is how you make work less ambiguous. It's also how you make engineers like you.
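Acceptance criteria translate almost directly into QA checks. A minimal sketch of that translation, using the size-filter story from earlier; the product data and `filter_by_size` function are invented stand-ins, not a real implementation:

```python
# Acceptance criteria for a hypothetical "filter by size" story,
# written as checks QA could automate. The data and function are
# stand-ins for illustration only.

PRODUCTS = [
    {"name": "Slim Tee", "size": "S"},
    {"name": "Classic Tee", "size": "M"},
    {"name": "Relaxed Tee", "size": "M"},
]

def filter_by_size(products, size=None):
    """Return products matching `size`; no size means no filtering."""
    if size is None:
        return list(products)
    return [p for p in products if p["size"] == size]

# AC1: selecting a size shows only matching products
assert all(p["size"] == "M" for p in filter_by_size(PRODUCTS, "M"))

# AC2: Reset (no size selected) shows the full list again
assert filter_by_size(PRODUCTS) == PRODUCTS

# AC3: a size with no matches yields an empty state, not an error
assert filter_by_size(PRODUCTS, "XXL") == []
```

Notice that each assertion maps to one criterion. If you can't write the assertion, the criterion is too vague to test.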
APIs
API = Application Programming Interface. How one software system asks another for information or asks it to do something. Software talking to software.
You don't code APIs. You need to know when one matters.
API questions to ask
- Does this feature need data from another system?
- Which system owns the data?
- Does the API exist yet?
- Is it documented?
- What data does it return?
- What happens if it's slow?
- What happens if it fails?
- What loading, empty, and error states do we need?
- Is engineering blocked waiting on it?
- Does QA have test data?
Example: the design shows store pickup availability. That screen only works if an inventory API can answer which stores have the product. Your job isn't to say "build the pickup module." Your job is to ask "do we have the inventory API ready, and what states do we need to design and test?"
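The pickup example above can be sketched in code. This is a hypothetical stub, not a real inventory API; the point is that every outcome of the call forces a UI state the team has to design and test:

```python
# Sketch of the UI states an inventory API forces you to plan for.
# `fetch_store_availability` is a hypothetical call, stubbed here.

def fetch_store_availability(product_id, fail=False, stores=None):
    """Stand-in for a real inventory API call."""
    if fail:
        raise TimeoutError("inventory service did not respond")
    return stores or []

def pickup_module_state(product_id, **kwargs):
    try:
        stores = fetch_store_availability(product_id, **kwargs)
    except TimeoutError:
        return "error"      # needs a designed error state
    if not stores:
        return "empty"      # needs an empty state ("not available nearby")
    return "available"      # the happy path the mockups show

assert pickup_module_state("sku-123", stores=["Store A"]) == "available"
assert pickup_module_state("sku-123") == "empty"
assert pickup_module_state("sku-123", fail=True) == "error"
```

Three states from one API call. If the designs only show "available," two of them are unplanned work waiting to surface mid-sprint.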
CMS & Headless CMS
CMS = Content Management System. The tool non-engineers use to update content without asking developers to touch code.
What lives in a CMS: homepage hero copy, banners, blog articles, product descriptions, SEO titles, images, CTA labels, legal disclaimers, localized content.
Engineers build templates and components. Content teams fill them. The website pulls from the CMS and displays it.
Headless CMS
A headless CMS separates where content is managed from where it's displayed. The CMS stores content; the website, app, email, or kiosk decides how to display it. Same content, many channels.
CMS terms to know
- Content type
- The structure for a kind of content. e.g. Article, Promo Banner.
- Field
- A piece of content inside a content type. e.g. title, image, CTA link.
- Entry
- One actual piece of content created from a content type.
- Asset
- An uploaded image, video, PDF, or file.
- Preview
- See content before publishing.
- Locale
- A language or region version of content.
- Validation
- Rules that prevent bad content. e.g. CTA link is required.
- Fallback
- What appears if content is missing.
CMS questions to ask
- What content needs to be managed in the CMS?
- Who authors it?
- What fields are needed?
- Which fields are required?
- Does it support preview?
- Does content need approval before publishing?
- Does content vary by country or language?
- What happens if a field is empty?
- Does the design match what the CMS can actually provide?
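Several of the questions above (required fields, empty fields, fallbacks) can be made concrete with a tiny sketch. The "Promo Banner" content type and its field names are invented for illustration; real CMS platforms express the same ideas with their own schema tools:

```python
# Minimal sketch of a CMS content type with validation and a fallback.
# The "Promo Banner" model and field names are hypothetical.

PROMO_BANNER_TYPE = {
    "headline": {"required": True},
    "image":    {"required": True},
    "cta_link": {"required": True},   # validation: a banner needs a CTA
    "subtitle": {"required": False},
}

FALLBACKS = {"subtitle": ""}  # what appears if an optional field is empty

def validate_entry(entry, content_type=PROMO_BANNER_TYPE):
    """Return the missing required fields (empty list = publishable)."""
    return [f for f, rules in content_type.items()
            if rules["required"] and not entry.get(f)]

def render_field(entry, field):
    """Use the entry's value, or the fallback for optional fields."""
    return entry.get(field) or FALLBACKS.get(field, "")

entry = {"headline": "Summer Sale", "image": "hero.jpg"}
assert validate_entry(entry) == ["cta_link"]   # blocked: CTA is required
```

This is the conversation behind "which fields are required?" and "what happens if a field is empty?" made executable.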
Localization & Internationalization
Localization means adapting content or a product for a specific language, region, or market. It's not just translation.
Localization covers: language, currency, date format, measurement units, spelling, legal copy, images, cultural references, product availability, regional promos.
Same idea, different locale
Internationalization (i18n)
The technical setup that makes localization possible. Abbreviated i18n (the 18 letters between the i and the n); localization is shortened to l10n the same way.
- Internationalization = build the product so it can support multiple countries and languages.
- Localization = create the actual country or language version.
You don't implement this. But it affects timeline. German text runs longer than English. Arabic reads right-to-left. Some markets need different legal language. Some images don't translate culturally.
Localization questions to ask
- Which locales are in scope?
- Are translations ready?
- Does design support longer text?
- Do we need right-to-left support?
- Does CMS content vary by locale?
- Are URLs localized?
- Are legal or privacy rules different by market?
- Who approves translated content?
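One pattern worth knowing by name is locale fallback: try the exact locale, then the base language, then the default. A rough sketch with invented locale codes and copy:

```python
# Sketch of locale fallback for CMS content: exact locale, then
# base language, then default. Locale codes and copy are invented.

CONTENT = {
    "checkout_cta": {
        "en":    "Add to cart",
        "en-GB": "Add to basket",   # regional wording difference
        "es":    "Añadir al carrito",
    },
}

DEFAULT_LOCALE = "en"

def localized(key, locale):
    """Resolve content for a locale: exact match, base language, default."""
    variants = CONTENT[key]
    base = locale.split("-")[0]
    for candidate in (locale, base, DEFAULT_LOCALE):
        if candidate in variants:
            return variants[candidate]
    raise KeyError(f"no content for {key}")

assert localized("checkout_cta", "en-GB") == "Add to basket"
assert localized("checkout_cta", "es-MX") == "Añadir al carrito"  # falls back to "es"
assert localized("checkout_cta", "de-DE") == "Add to cart"        # falls back to default
```

The fallback chain is why "what happens if a translation is missing?" has an answer other than a blank page.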
Fibonacci Scoring & Story Points
Fibonacci scoring uses 1, 2, 3, 5, 8, 13, 21 to estimate the relative size or complexity of work.
Critical: points are not hours. A 5-point ticket isn't 5 hours. It's bigger or riskier than a 3, smaller than an 8.
Story points consider
- Amount of work
- Technical complexity
- Uncertainty
- Risk
- Dependencies
- Testing effort
- Cross-team coordination
Rough sense of scale
Why Fibonacci? Gaps widen as work gets more uncertain. The difference between 1 and 2 is small. The difference between 8 and 13 is much bigger. That forces the team to admit when work isn't just "a little bigger" but meaningfully more complex.
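Where points become concrete is forecasting. A rough sketch, with invented numbers, of how recent velocity turns into a sprints-remaining estimate:

```python
# Sketch of turning story points into a forecast: average recent
# velocity, then estimate sprints remaining. All numbers invented.
import math

FIBONACCI = [1, 2, 3, 5, 8, 13, 21]   # the allowed estimate values

recent_sprints = [23, 19, 21]          # points completed, last three sprints
velocity = sum(recent_sprints) / len(recent_sprints)   # 21 points/sprint

backlog_points = [5, 8, 3, 13, 2, 8, 5, 1, 8, 5]       # estimated remaining work
assert all(p in FIBONACCI for p in backlog_points)

sprints_needed = math.ceil(sum(backlog_points) / velocity)
# 58 points at ~21 points/sprint: about 3 sprints (a forecast, not a promise)
assert sprints_needed == 3
```

This is all velocity is good for: forecasting. The moment it becomes a performance metric, teams inflate estimates and the forecast dies.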
Estimation Exercises
Estimation is when the team reviews upcoming work and sizes each item. Engineering must be there — they understand the build effort.
You facilitate, not dominate.
Listen for
- Big estimates
- Split estimates (one engineer says 3, another says 8)
- Confusion
- Missing designs
- Unknown API behavior
- Missing CMS model
- Unclear acceptance criteria
- QA complexity
- Dependency on another team
If two engineers disagree — a 3 versus an 8 — that's not a problem. That's useful. It means the team doesn't yet share an understanding of the work. Surfacing that gap during estimation, before the sprint starts, is what saves projects.
MVP & Fast Follow
MVP — Minimum Viable Product
MVP = Minimum Viable Product. The smallest version of a feature that's actually useful to a user. Not "the cheapest thing we can ship." Not "a broken prototype." Useful.
- MVP
- The smallest version that delivers real user value and can be shipped, used, and measured.
When the room says "we need everything in V1," push for the MVP cut. What's the smallest thing that lets a real user do the thing? Everything else becomes fast follow.
Fast Follow
A fast follow is work the team does shortly after the first release. Not random leftover work. Intentionally deferred scope.
MVP + Fast Follow, in sequence
Fast follows are useful when the deadline is fixed and the scope is too big. Use the phrase carefully. Every fast follow needs a ticket, an owner, a priority, a target release, a reason it was deferred, and a clear connection to the first release.
Talking About Deadlines
Don't ask: "Can you just get it done?" That's how teams produce bad code, burned-out engineers, and fake commitments.
Ask instead
- Given the deadline, what's the safest version we can ship?
- What scope can we confidently deliver?
- What would you recommend for MVP?
- What has to be in for launch?
- What can move to fast follow?
- What are the risks if we keep this in scope?
- What would we cut to hit the date?
- Are we blocked by design, CMS, API, QA, or approvals?
- Do we need more people, less scope, or more time?
The real levers
- Reduce scope
- Add time
- Add resources
- Reduce quality (usually dangerous)
- Change the release strategy
Don't let teams pretend scope, time, resources, and quality can all stay fixed at once. If deadline and resources are fixed, scope has to flex.
"When deadlines are tight, I wouldn't pressure engineering into unrealistic commitments. I'd ask what version can ship safely, what risks need escalating, and what should be backlogged as a fast follow."
Backlog vs Roadmap vs Sprint
Three different things. Don't confuse them.
- Roadmap
- The bigger plan. Where the product is going over time.
- Backlog
- The ordered list of work that supports the roadmap.
- Sprint
- The short period where the team commits to a specific slice of backlog work.
Worked Example
What a Healthy Backlog Looks Like
- Clear priorities
- Updated statuses
- Owners
- Estimates
- Acceptance criteria
- Dependencies
- Design links
- Release labels
- Blocked items flagged clearly
- Old or irrelevant items removed
A messy backlog is where projects go to die quietly.
Jira & the Kanban Board
Jira tracks every ticket through its lifecycle. The team's Kanban board is the visual version — tickets shown as cards, moving across columns by status.
This is what you stare at every day.
An Actual Board
Below is a snapshot of what the homepage epic looks like mid-sprint. Each card is one ticket. Each column is one status. Cards move left to right as work progresses.
Sprint 4 · Homepage Redesign Epic
- To Do (3)
- In Progress (1)
- In Review (1)
- In QA (1)
- Done (2)
Read it like a story. Three tickets are ready to start. One engineer is actively building the featured products grid. One ticket is being code-reviewed. One is in QA. Two finished earlier this sprint.
Typical Ticket Lifecycle
The path a ticket walks
To Do → In Progress → In Review (code review) → In QA → Done
A strong Jira ticket has
- Clear title
- User story or task description
- Acceptance criteria
- Design link
- Priority and estimate
- Owner
- Dependencies and out-of-scope notes
- QA notes · CMS notes
- Release label · Status
One Feature, Many Tickets
Say the design shows a homepage promo banner. Don't create one ticket called "Build promo banner." You probably need:
- A task to define the CMS content type and fields for the banner
- A story for the frontend banner component, covering responsive states
- A task for analytics tracking on the CTA
- QA notes covering empty, fallback, and localized content states
That's how a PM thinks through the delivery chain.
Definition of Ready & Definition of Done
Teams that ship well have two written checklists: one for when a ticket is ready to enter a sprint, one for when a ticket is complete. Called Definition of Ready (DoR) and Definition of Done (DoD).
Definition of Ready
A ticket is ready when the team has everything they need to start — no surprises mid-sprint.
- Description is clear
- Acceptance criteria are written
- Designs are approved and linked
- Dependencies are identified (API, CMS, content)
- Out-of-scope is written
- Edge cases are noted
- The ticket is small enough to estimate confidently
- Engineering has had a chance to ask questions
If a ticket isn't ready, it doesn't enter the sprint. "This isn't ready yet — we need design sign-off and an API spec before we commit."
Definition of Done
"Done" does not mean "an engineer pushed code." Done usually means:
- Work matches approved designs
- Acceptance criteria are met
- Code is reviewed (PR approved)
- QA has tested it
- Bugs are resolved or accepted
- CMS content works (if needed)
- It's ready for release or already released
Ask every team: "What's our Definition of Done?" Simple question. Prevents chaos.
"I'd push every team to write down their Definition of Ready and Definition of Done. It's the cheapest insurance against half-finished work, surprise dependencies, and shipping things that don't actually meet the spec."
Risk Management
The job description mentions proactively identifying risks and issues. That's a major PM skill.
- Risk
- Something that might become a problem.
- Issue
- Something that already is a problem.
Common risks
- Designs not approved
- API not ready
- CMS model unclear
- Feedback arriving late
- Scope expanding mid-sprint (this is called scope creep)
- QA environment unstable
- Dependencies on another team
- Translations delayed
- Legal approval needed
The PM move: write the risk clearly, name the impact, assign an owner, create a mitigation plan.
Worked Risk Entry
Risk: the inventory API may not be ready when the store-pickup story enters the sprint.
Impact: the pickup module can't be built or tested; launch scope is at risk.
Owner: backend lead.
Mitigation: confirm the API timeline this week; if it slips, build against mock data and move the pickup module to a fast follow.
The Rest of the Vocabulary
Terms you'll hear in standups, Slack threads, and interview rooms. Each one is a sentence so it doesn't catch you off guard. Skim once. Re-skim the morning of the interview.
Team & Process
- Ticket
- One unit of work in Jira. Story, task, bug, or sub-task. People say "ticket" and "issue" interchangeably.
- Kanban Board
- The visual layout of tickets as cards moving across columns by status. The team's daily view of work.
- Velocity
- Average story points a team completes per sprint. Used for forecasting. Not a productivity metric — don't weaponize it.
- Capacity
- How much work the team can realistically take in a sprint, given vacations, holidays, on-call, and meetings. Velocity adjusted for availability.
- Blocker
- Anything stopping a ticket from moving forward. Also called an "impediment." Needs an owner and an ETA — or it gets escalated.
- Scope Creep
- New requirements quietly added to an in-flight sprint or release without trading something out. Catch it and force the trade-off conversation.
- Tech Debt
- Shortcuts engineers took to ship fast. Slows down future work. Treat it as legitimate backlog work.
- Parking Lot
- A topic that comes up in standup that needs a real conversation. "Parking-lot" it — discuss after, with only the people involved.
- Burndown Chart
- A graph showing how much sprint work is left over time. Going down = good. Flat = trouble. Spike up = scope was added mid-sprint.
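The burndown reading above ("down = good, flat = trouble, spike = scope added") can be sketched as a simple check over daily remaining points. The numbers are invented:

```python
# Sketch of reading a burndown: remaining points per day of a sprint.
# Numbers are invented for illustration.

remaining = [40, 38, 38, 35, 30, 30, 30, 33, 25, 18]  # day 0..9

def burndown_signals(points):
    """Flag flat days (possible blocker) and upward jumps (scope added)."""
    signals = []
    for day in range(1, len(points)):
        delta = points[day] - points[day - 1]
        if delta > 0:
            signals.append((day, "scope added mid-sprint"))
        elif delta == 0:
            signals.append((day, "flat - possible blocker"))
    return signals

# Day 7 jumps from 30 to 33: scope crept in without a trade-off.
assert (7, "scope added mid-sprint") in burndown_signals(remaining)
```

The chart doesn't tell you why the line went flat or jumped; that's what standup and the parking lot are for.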
Code & Release
- PR / Pull Request
- An engineer's proposed code change, opened for review before being merged. You don't review PRs. You just need to know they're a step toward "done."
- Code Review
- The process where other engineers read and approve a PR. Adds time to "done" — factor it into estimates.
- Merge
- When approved code gets combined into the main branch. After merge, work moves toward QA / staging.
- Staging
- A copy of the live website where the team tests features before releasing them to real users. Looks like production. Isn't.
- Production
- The live website. Real users. Real consequences. "Shipping to production" = going live.
- Hotfix
- An emergency fix pushed to production outside the normal release cycle, usually for a critical bug or outage.
- Rollback
- Undoing a release because something broke. The faster the team can roll back, the safer they can ship.
- Feature Flag
- A toggle in the code that turns a feature on or off without re-releasing. Useful for shipping safely and rolling out to a subset of users.
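A feature flag with a percentage rollout can be sketched in a few lines. This is a toy version, with an invented flag name; real teams use a flag service, but the mechanics are the same: hash the user into a stable bucket, compare to the rollout percentage:

```python
# Minimal sketch of a feature flag with percentage rollout. The flag
# name and rollout values are invented; a stable hash keeps each user
# in the same variant across visits.
import hashlib

FLAGS = {"new_homepage_hero": 20}   # rolled out to 20% of users

def is_enabled(flag, user_id, flags=FLAGS):
    """Deterministically bucket a user into 0-99 and compare to rollout %."""
    rollout = flags.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout

# Turning the feature off for everyone is a config change, not a release:
FLAGS["new_homepage_hero"] = 0
assert not any(is_enabled("new_homepage_hero", f"user-{i}") for i in range(50))
```

This is why flags matter to a PM: shipping the code and launching the feature become two separate decisions.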
Testing & Quality
- QA
- Quality Assurance. The team or process that tests the work against acceptance criteria before it ships.
- Regression
- When a new change accidentally breaks something that used to work. QA runs "regression testing" to catch these.
- Smoke Test
- A quick check that the most critical paths work after a release. "Is the site up? Can people log in? Can people check out?"
- UAT
- User Acceptance Testing. The final check — usually by the client or business stakeholder — that confirms work meets requirements before launch.
- A / B Test
- Shipping two versions of something to different user groups to see which performs better.
Engineering Mindset
- Component
- A reusable piece of UI — a button, card, banner — defined once, used in many places.
- Design System
- The library of reusable components, tokens, and patterns. Changes to a design-system component ripple into every place it's used, so "small" changes can have wide impact.
- Token
- A named value used across the design system — a color, spacing unit, font size. "Update the primary color token" cascades everywhere.
- Spike
- A time-boxed research ticket. "Spend two days figuring out if this API can support our use case." Not building — investigating.
- Refactor
- Rewriting existing code to make it cleaner without changing what it does. Engineers often want to refactor. Sometimes it's worth it; sometimes it's procrastination dressed as work.
- Edge Case
- An unusual scenario the design or code might not handle — empty states, very long text, slow connections, errors.
- Happy Path
- The expected, everything-works scenario. Designs always show the happy path. Engineering has to handle everything else too.
If a term comes up you don't know — say so. "I haven't worked with that specifically — can you tell me how your team uses it?" Stronger than bluffing. Strong PMs ask. Weak PMs nod.
The Golden Sentence
If you remember nothing else, remember this. Practice it out loud until it feels natural:
"Can we separate what is required for launch from what can move to fast follow?"
It protects the deadline. It protects the engineers. It protects the product. It protects the client relationship. And it makes you sound like someone who understands delivery.
Practice Exercise
Here's a fake feature. Practice breaking it down out loud, like you would in the interview.
"The client wants a new homepage hero that changes by country, pulls content from the CMS, supports English and Spanish, includes an image, headline, body copy, CTA, analytics — and launches in two weeks."
A Strong Answer Sounds Like
"I'd first confirm what's truly needed for Release 1. I'd create stories for the CMS content model, frontend hero component, localization support, analytics tracking, and QA. I'd ask engineering whether the CMS and localization setup already exists or needs new work. I'd ask design for all responsive states and content fallbacks. If the two-week deadline is fixed, I'd ask what can safely ship first and what should be backlogged as a fast follow."
That answer hits scope, dependencies, risk, deadline, and trade-offs — without pretending to know engineering.
One-Page Cheat Sheet
The whole thing, distilled. Bookmark this. Re-read the night before the interview.
- Scrum
- Not an acronym. A framework for delivering complex work in short cycles.
- Agile
- The mindset behind Scrum. Smaller releases, faster learning, adapt to change — but still disciplined.
- Sprint
- A fixed work period where the team commits to a realistic goal.
- Sprint Planning
- Start-of-sprint meeting. Decides what the team will work on.
- Standup
- 15-minute daily check-in. Three questions: done since last, doing today, blocked by what.
- Backlog Refinement
- Cleans up future work so it's clear, small, and estimable. Also called grooming.
- Sprint Review / Demo
- End-of-sprint meeting. Team shows completed work, gets feedback, captures follow-ups.
- Retrospective
- End-of-sprint meeting. What went well, what didn't, what to change. Produces action items.
- Ticket
- One unit of work in Jira. Every story, task, bug, or sub-task is a ticket.
- Kanban Board
- Visual layout of tickets as cards moving across status columns.
- Epic
- Big initiative. Multi-sprint. Groups related stories.
- Story
- One unit of user value. Fits in one sprint. Written from user POV.
- Task
- Team work. Fits in one sprint. Not user-facing.
- Bug
- Something broken or behaving incorrectly.
- Sub-task
- A piece of a story, task, or bug.
- Acceptance Criteria
- Rules that define when a ticket is done. Tells engineering what to build, QA what to test.
- API
- How software systems talk to each other.
- CMS
- Lets non-engineers manage and publish content.
- Headless CMS
- CMS that stores content; the website, app, or email decides how to display it.
- Localization (l10n)
- Adapts content and experiences for specific languages, regions, markets.
- Internationalization (i18n)
- The technical setup that makes localization possible.
- Story Points (Fibonacci)
- 1, 2, 3, 5, 8, 13, 21 — relative complexity, not exact time.
- Velocity
- Average points per sprint. Used for forecasting, not judging people.
- Capacity
- What the team can realistically take this sprint, given vacations and meetings.
- MVP
- Minimum Viable Product. Smallest version that delivers real user value.
- Fast Follow
- Planned work that comes shortly after launch. Not forgotten work.
- Definition of Ready
- Checklist for whether a ticket is ready to enter a sprint.
- Definition of Done
- Checklist for whether a ticket is truly complete — not just "code pushed."
- Blocker
- Anything stopping work. Needs an owner and an ETA, or it gets escalated.
- Scope Creep
- New requirements quietly added mid-sprint without trading something out.
- Tech Debt
- Shortcuts engineers took to ship fast. Real backlog work.
- Pull Request
- Proposed change → reviewed by peers → combined into main branch.
- Staging / Production
- Staging = test environment. Production = live, real users.
- Hotfix / Rollback
- Hotfix = emergency fix to production. Rollback = undo a release because something broke.
- Testing
- QA tests against AC. Regression catches what broke. UAT is final business sign-off. Smoke test is a quick "basics still work" check after release.
- Design System
- Components = reusable UI parts. Tokens = named values. Design system = the library that holds both.
- Engineering Mindset
- Spike = research. Refactor = clean code. Edge case = unusual. Happy path = expected.
- Deadlines
- Strong PMs ask what can ship safely, what risks exist, what moves to next release. They don't ask engineers to magically hit a date.
- The Golden Sentence
- "Can we separate what is required for launch from what can move to fast follow?"
- When You Don't Know a Term
- "I haven't worked with that — can you tell me how your team uses it?" Strong PMs ask. Weak PMs nod.
Go get the job.