Step-by-Step Guides

How to Run a Win/Loss Analysis Program

A step-by-step guide to building a win/loss analysis program that produces actionable intelligence for product, sales, and marketing -- not just data.

11 min read · For PMM · Updated Apr 19, 2026

Win/loss analysis is one of the most consistently underused programs in B2B marketing. Most companies have some version of it: a CRM field where reps log a loss reason, a quarterly review where someone presents a pie chart, a vague awareness that "we lose on price" or "the product is missing Feature X."

That is not a win/loss program. A real program produces answers to specific strategic questions -- which segments we win consistently, which we lose predictably, what changed our close rate in Q2, and whether the messaging change we shipped in March is showing up in the field.

76% of PMMs say the win/loss data they receive from sales is not reliable enough to base positioning decisions on. (Stratridge PMM survey, 2026)

Step 1: Define the strategic questions you need answered

A win/loss program built without specific questions produces generic data. Before you design the interview or set up the CRM fields, write down the three to five questions the business needs answered.

Examples of strategic questions:

  • Are we winning or losing against [Competitor X] and why?
  • Is our new ICP segment (Series B SaaS) converting at the same rate as our legacy ICP (enterprise)?
  • Is the pricing change we made in Q1 showing up as a reason for loss?
  • Are deals where a specific product feature is the hook closing faster or slower?

The questions determine the interview structure, the sample criteria, and how you analyze and present results.


Step 2: Design the interview structure

Win/loss analysis done well requires interviews, not surveys. Surveys produce sanitized data -- the reason a prospect puts in a form is rarely the real reason they chose or did not choose you. Interviews surface the real reason.

Interview design principles:

  • Interview within 30 days of the decision: Memory fades and the deal's emotional residue shifts. Within 30 days, the prospect can reconstruct the decision process. Beyond 90 days, you are reconstructing a sanitized story.
  • Use a neutral interviewer: The sales rep should not run win/loss interviews on their own deals. The prospect will not tell the rep what they really thought of the rep's performance. Use PMM or an external researcher.
  • Ask about the decision process, not the product: "Walk me through how you made this decision" produces far richer data than "what did you think of our product?"
  • Interview won deals too: Most programs only interview losses. Won deals reveal what you are doing right -- and more importantly, whether you are winning for the reasons you think you are.

Step 3: Build the sample criteria and pipeline

A win/loss program is only as good as the deals it analyzes. Random sampling produces random insights. Define the criteria for which deals enter the program.

Sampling criteria to define:

  • Minimum deal size: Below a certain ACV, the investment of a 45-minute interview does not justify the return. Set a floor.
  • Deal type: Which segments, verticals, or deal types are you trying to understand? Sample strategically.
  • Win/loss ratio: Aim for roughly 60% losses, 40% wins. More wins than that and you are congratulating yourself; more losses than that and you slip into pure problem detection rather than pattern finding.
  • Competitive deals only: If you are primarily trying to understand competitive dynamics, filter to deals where a named competitor was evaluated.

Step 4: Analyze for patterns, not anecdotes

The interview output is raw material. The analysis is where the program produces strategic value. A single interview is an anecdote. Fifteen interviews in the same segment over two quarters is a pattern.

How to analyze win/loss data:

  1. Code the responses: After each interview, tag the key themes (pricing, feature gap, competitive positioning, sales process, timing, economic conditions). Use consistent tags so patterns accumulate across interviews.
  2. Track by cohort: Analyze by segment, by competitor, by quarter, by deal size. Patterns that appear in every cohort are structural. Patterns that appear in one cohort are specific.
  3. Separate stated vs. inferred reasons: What the prospect says and what the data implies are often different. "We went with the cheaper option" -- stated. "We lost every deal over $50K against [Competitor] in Q2" -- inferred. Both matter, but the inferred pattern is the one you can act on.
  4. Track trends over time: A single quarter of data tells you what happened. Three quarters of data tells you whether the trend is improving or deteriorating.

Step 5: Distribute findings that change decisions

A win/loss program that produces a slide deck no one acts on is a waste of everyone's time -- including the prospects who gave you 45 minutes. The distribution format must match the consumer.

Distribution by audience:

