Win/Loss Analysis Without a Dedicated Program

A full win/loss program costs $80–150K a year. Here's the lightweight version that a team without that budget can run in four hours a month — with the three questions that do most of the work.

9 min read·For all readers·Updated Apr 19, 2026

A dedicated win/loss program — a contractor conducting 40-minute structured interviews, a quarterly synthesis report, an analytics platform for tagging — costs a B2B SaaS company somewhere between $80,000 and $150,000 per year. Most companies below Series B cannot justify that line item, and most at Series B are spending it on something else. The result is that win/loss analysis at these companies is either non-existent or is run by the CRO as a Friday-afternoon exercise that produces nothing actionable.

The lightweight alternative is better than nothing, and nothing is the real alternative at this stage. It takes four hours a month, produces roughly 60% of the signal of a dedicated program, and requires no incremental vendor. The structure below is what we've seen work at roughly a dozen sub-$30M SaaS companies.

The scope this replaces

A full program does three things: (1) conducts structured interviews with closed-won, closed-lost, and churned customers; (2) produces a synthesis report that ties findings to product, pricing, and positioning; (3) routes findings to the functional teams that can act on them. The lightweight version does all three at reduced depth. It will miss some signal. It will still produce more insight than the median company has.

What it cannot replace: longitudinal analysis (cohort tracking over multiple quarters), deep qualitative exploration of a specific hypothesis, or independent-third-party legitimacy for customer conversations (some buyers speak more freely to a contractor than to a vendor employee). If any of these is critical, the lightweight version is insufficient.

The three questions

The lightweight program uses three questions, verbatim, on every call. The questions were chosen to produce the largest answer-space per minute of call time, and to resist the rep's natural instinct to ask leading questions.

Question 1: "Walk me through the moment you realized you were going to pick us (or not pick us)." Open, narrative, asks for a specific moment rather than a general retrospective. The answer reveals what actually tipped the decision — which is rarely what the buyer answered on the demo call.

Question 2: "What did you think was true about us that wasn't, or wasn't true that was?" Produces perception gaps. This is the question that routes most reliably to positioning and messaging changes.

Question 3: "If you had to brief a peer at another company about us in 30 seconds, what would you say?" Reveals how the buyer actually describes the product to someone else — which is the test of whether the positioning stuck.

These three questions, verbatim, produce roughly 25 minutes of conversation. With five minutes of rapport-building at the start and five minutes of wrap-up, the call fits inside 35 minutes. Schedule it for 45 to allow for drift.

The monthly rhythm

Four calls at 45 minutes each = 3 hours. Scheduling and synthesis add roughly 1 hour. Total: 4 hours a month, or one afternoon. The discipline is that the afternoon is on the calendar every month, not "when we get to it."
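The monthly budget above is simple arithmetic; a minimal sketch (the variable names are mine, for illustration only):

```python
# Monthly time budget for the lightweight win/loss program.
# Constants mirror the numbers in the text; names are illustrative.
CALLS_PER_MONTH = 4
MINUTES_PER_CALL = 45   # scheduled length, including drift allowance
OVERHEAD_MINUTES = 60   # scheduling plus synthesis

total_minutes = CALLS_PER_MONTH * MINUTES_PER_CALL + OVERHEAD_MINUTES
print(total_minutes / 60)  # 4.0 hours, i.e. one afternoon a month
```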

Who runs it

The lightweight program has exactly one owner, and the choice of owner matters more than the program design. The right owner is either a senior PMM (best case) or the head of sales enablement. The wrong owners are: the CRO (too busy, too politically complex), the CS team (too close to the customer relationship to ask disqualifying questions), and the founder (the customer will tell the founder what the founder wants to hear).

The owner needs two characteristics: enough organizational credibility that product and sales will read their monthly note, and enough operational proximity to the product that the findings are interpretable. A PMM who understands the roadmap and the sales process is the best single choice.

The three things you skip

The lightweight version deliberately skips three things a full program would do. Each is a real trade-off.

No independent-third-party interviews. The buyer is talking to a company employee, not a neutral contractor. Some signal will be softened. The workaround: use the three-question script verbatim, resist the instinct to defend during the call, and transcribe rather than paraphrase. Most of the signal is recoverable with discipline.

No churn-cohort analysis. The monthly report doesn't track how findings evolve over multiple quarters. The workaround: the quarterly review (see below) looks at the trailing three monthly reports together. It's rougher than a dedicated platform's longitudinal analysis, but it catches shifts that single-month data misses.

No competitive-win-rate modeling. A full program can attribute win rate against specific competitors based on interview and CRM data. The lightweight version tags the competitor mentioned in each interview and produces a directional count, not a statistical model. For companies below $30M ARR, the directional count is usually sufficient.
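The directional count is just a tally of competitor mentions split by outcome. A minimal sketch using the standard library — the interview rows here are invented for illustration, not real data:

```python
from collections import Counter

# Hypothetical interview log: (outcome, competitor mentioned), one row per call.
interviews = [
    ("lost", "CompetitorA"),
    ("won",  "CompetitorB"),
    ("lost", "CompetitorA"),
    ("won",  None),          # no competitor came up on this call
    ("lost", "CompetitorC"),
]

# Directional count: how often each competitor appears in losses vs. wins.
losses = Counter(c for outcome, c in interviews if outcome == "lost" and c)
wins   = Counter(c for outcome, c in interviews if outcome == "won" and c)

print(losses.most_common())  # [('CompetitorA', 2), ('CompetitorC', 1)]
```

This is deliberately not a win-rate model: with four calls a month the counts only support "CompetitorA keeps showing up in losses," which is exactly the directional claim the text describes.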

The monthly one-page note

The output of the program is a one-page note. Not a deck. Not a report. A page, delivered by email, with three sections.

The monthly note structure

The one-page format is the discipline. A three-page memo or twenty-slide deck signals that the program is performative; a one-page note signals that the findings are actionable. Teams read single pages; they skim decks.

Quarterly review

Once a quarter, the owner spends an additional two hours on a longitudinal review. Read the three monthly notes together. Look for findings that repeated, findings that shifted, and findings that disappeared. Write a four-paragraph quarterly summary — what changed, what didn't, what the org should do differently next quarter.

This is the analytical step that turns the monthly notes into strategic signal. Without it, the program produces a stream of tactical observations without a cumulative story. With it, the program starts to function like a compressed version of what a dedicated program's synthesis report would produce.
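The repeated/shifted/disappeared comparison can be sketched as set operations over tagged findings. The finding tags below are invented for illustration; the mechanics are the point:

```python
# Findings tagged in each monthly note over one quarter (hypothetical tags).
month1 = {"pricing-confusion", "integration-gap", "slow-onboarding"}
month2 = {"pricing-confusion", "integration-gap"}
month3 = {"integration-gap", "security-questionnaire"}

repeated    = month1 & month2 & month3    # showed up every month
disappeared = (month1 | month2) - month3  # present earlier, gone by month 3
new_signal  = month3 - (month1 | month2)  # first appeared in month 3

print(repeated)  # {'integration-gap'}
```

A finding that repeats all three months (here, "integration-gap") is the kind of cumulative signal the quarterly summary should lead with.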

When to graduate

The lightweight program works well at a scale of 10–30 closed-won customers per quarter. Below that, four interviews a month drains the customer pool too fast; the program is better served by a quarterly rather than monthly rhythm. Above that, the four-interview sample misses too much signal; the program needs to scale to either a dedicated internal role or a contractor.

The graduation signal is usually a specific finding that the team agrees needs deeper exploration than the lightweight program can support. "We keep hearing about the integration gap in these interviews, but we can't tell how widespread it is or how it affects win rate" — that is the moment to hire a contractor for a scoped deeper analysis on that finding, or to hire a full-time win/loss lead.

Most sub-Series B companies will not hit the graduation threshold during their first three years of running the lightweight program. The program's value compounds slowly, and the monthly rhythm — even at reduced depth — produces roughly 60% of what a dedicated program would deliver, at 5% of the cost. The teams that run it rarely regret the budget allocation. The teams that don't are often the ones who end up losing deals on patterns their win/loss data would have caught if it existed.
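The scale thresholds above reduce to a simple decision rule. The cutoffs come from the text; the function name and return strings are mine:

```python
def winloss_rhythm(closed_won_per_quarter: int) -> str:
    """Suggest a win/loss cadence from quarterly closed-won volume.

    Thresholds follow the guide: below 10, monthly calls drain the
    customer pool; 10-30 fits the lightweight monthly program; above
    30, four interviews a month under-samples.
    """
    if closed_won_per_quarter < 10:
        return "quarterly lightweight program"
    if closed_won_per_quarter <= 30:
        return "monthly lightweight program"
    return "graduate: dedicated internal role or contractor"

print(winloss_rhythm(18))  # monthly lightweight program
```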

Related capability

Win/Loss Review

Turns lost-deal notes into objection patterns and rebuttals, then writes the learning to Strategic Context.
