
Win/Loss Review for Product-Led Growth Companies

The classic win/loss interview misses 80% of the signal in a PLG motion. Here's why, and the four-source method that replaces it.

5 min read · For CMOs · Updated Apr 19, 2026

A classical win/loss program interviews the buyer after the deal closes or dies. In a sales-led motion, that's the right instrument — the buyer is a specific, named person who made a specific, named decision, and they can describe it coherently. In a PLG motion, the classical interview misses most of the decision because the decision wasn't made by one person at one time. It was made by three users at three different moments, most of whom never talked to anyone at your company.

68%
of PLG signups that converted to paid customers had decision-relevant interactions with at least three people inside the buying company, only one of whom engaged with a sales-led touchpoint.
Stratridge PLG funnel analysis, 2025, n=27 companies

The traditional win/loss interview speaks to that one person. It misses the other two entirely. A PLG-specific method has to reach the full set.

Why classical win/loss underperforms in PLG

Three specific gaps show up when classical win/loss is applied to PLG.

First, the signer is usually not the champion. In PLG, the signer is a finance or procurement person who inherited the decision after the champion (a practitioner-user inside the company) built enough internal momentum to justify the purchase. Interviewing the signer captures the finance dimension of the decision — pricing, vendor risk, renewal terms — but misses the practitioner dimension, which is where the product differentiation actually plays out.

Second, most "losses" don't close. In a sales-led motion, a loss has a closing event: the buyer picks someone else, signs the other contract, and your CRM records a closed-lost. In PLG, a loss is often a trial that went dormant, a team workspace that stopped being used, or an upgrade conversation that got tabled for a quarter. There's no closing event to trigger a win/loss interview, and the signal goes unrecorded.

Third, the competitive set is invisible. The PLG user often evaluates two or three tools in parallel by trialing them all. The sales-led buyer typically evaluates through vendor presentations and discovery calls, which leave a paper trail. The PLG user leaves a browsing history you can't see. Asking "what else did you consider" produces incomplete answers because the user often tried four tools and only remembers two.

The four-source method

A win/loss program designed for PLG has to triangulate from four sources, each filling a gap the others leave.

Source 1 · In-product dormancy interviews

Any trial or free-tier workspace that goes dormant for 14 days triggers a lightweight in-product survey: three questions, no more. "What were you hoping to do when you signed up?" "What stopped you from continuing?" "Did you end up using something else — and if so, what?"

This captures losses that don't close. Response rates are low (typically 8–15%) but the volume is high enough that even a 10% response rate produces more signal per quarter than a classical win/loss program can generate. And the responses are fresh — the user is still close enough to the experience to remember it specifically.
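As a sketch, the dormancy trigger described above could be implemented as a nightly job that flags trial and free-tier workspaces with no activity for 14 days and queues the three-question survey. The field names (`plan`, `last_active_at`) and data shape are illustrative assumptions, not a real schema:

```python
from datetime import datetime, timedelta

DORMANCY_THRESHOLD = timedelta(days=14)

# The three survey questions from the method, quoted verbatim.
SURVEY_QUESTIONS = [
    "What were you hoping to do when you signed up?",
    "What stopped you from continuing?",
    "Did you end up using something else — and if so, what?",
]

def find_dormant_workspaces(workspaces, now=None):
    """Return trial/free-tier workspaces with no activity for 14+ days.

    Each workspace is assumed to be a dict with a 'plan' string and a
    'last_active_at' datetime; adapt to your own usage store.
    """
    now = now or datetime.utcnow()
    return [
        w for w in workspaces
        if w["plan"] in ("trial", "free")
        and now - w["last_active_at"] >= DORMANCY_THRESHOLD
    ]
```

Paid workspaces are excluded deliberately: a dormant paid account is a churn-risk signal for CS, not a win/loss loss, and routing it through the same survey would muddy both datasets.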

Source 2 · Champion interviews at 60-day mark

The champion — the internal practitioner who originally brought the product into the company — is the most important interview subject in PLG. A 60-day-post-conversion interview with the champion captures the product-level signal (what features sold them, what friction almost killed the adoption, what they tell peers about the product) that the signer cannot articulate.

The logistics: the CS team or a dedicated win/loss lead schedules a 25-minute call with the champion, not the signer, 60 days after paid conversion. Five questions. Transcribe the call. Do not skip the champion in favor of the signer; the signer's interview is worth 20% of what the champion's is worth.

Our classical win/loss program was interviewing the VP of Engineering who signed the ELA. We thought we had a pricing problem. When we started interviewing the actual developers who drove adoption, we discovered we had a trial-experience problem — the developers almost didn't get their team to buy because our onboarding for the third user was broken. Different problem, different fix.

Aisha Kumar, Head of Growth at a developer-tools PLG company

Source 3 · Signer interviews, with different questions

The signer still matters — they control the check — but the questions for them are different from the classical set. Not "why did you pick us." That's a question the signer cannot answer specifically, because the decision was driven by the champion. The right questions are: "What made this an easy approval?" "What would have made this a harder approval?" "What vendor-risk questions did you have to answer internally?" "If the champion left tomorrow, what would happen to this contract?"

These questions produce answers the signer actually knows. They surface the organizational friction — renewal risk, vendor-consolidation pressure, compliance requirements — that the champion cannot see.

Source 4 · Usage-pattern forensics

The fourth source is data, not interviews. For every won deal, the win/loss lead pulls the usage data from the trial period and the first 60 days post-conversion. Which features were used first? Which were adopted in week 2? Which never got touched? The pattern across 20 won deals reveals the product's actual activation path — which is often different from the one the onboarding flow is designed around.

For lost deals — dormant trials — the same analysis reveals the friction point. A consistent drop-off between "created account" and "invited second user" means the team-invite flow is blocking conversions. A drop-off between "ran first query" and "ran second query" means the query experience doesn't encourage return. This is signal that no interview will surface.
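The drop-off analysis above reduces to a step-conversion computation over per-account event sets: for each consecutive pair of funnel steps, what share of accounts that reached the first step also reached the second, and which pair shows the worst rate. A minimal sketch, assuming each account's usage is summarized as a set of event names (the step names are the article's examples, not a real event taxonomy):

```python
from collections import Counter

# Hypothetical activation funnel, using the steps named in the text.
FUNNEL = [
    "created_account",
    "ran_first_query",
    "invited_second_user",
    "ran_second_query",
]

def step_conversion(accounts, funnel=FUNNEL):
    """For each consecutive funnel step pair, compute the share of
    accounts that reached the earlier step and also the later one."""
    reached = Counter()
    for events in accounts:
        for step in funnel:
            if step in events:
                reached[step] += 1
    return {
        (a, b): (reached[b] / reached[a] if reached[a] else 0.0)
        for a, b in zip(funnel, funnel[1:])
    }

def biggest_dropoff(rates):
    """Return the step pair with the lowest conversion rate —
    the friction point the interviews won't surface."""
    return min(rates, key=rates.get)
```

Run over 20 won deals, the same computation reveals the actual activation path; run over dormant trials, the minimum-rate pair points at the flow to fix first.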

Synthesis and routing

The four sources produce qualitatively different data. The synthesis is not a single report — it's a layered one, with champion interviews anchoring the product narrative, signer interviews anchoring the organizational narrative, in-product dormancy surveys anchoring the friction narrative, and usage data anchoring the pattern narrative.

A good PLG win/loss report runs 8–12 pages, not 3. It has sections owned by different functions — product reads the usage-pattern section, CS reads the signer-risk section, marketing reads the dormancy-survey section, growth reads the champion section. This is more work than the classical single-interview-per-deal program, but it's the only version that produces insight that actually shapes the PLG motion. The alternative — applying sales-led win/loss to a PLG funnel — will keep producing reports that the organization reads politely and ignores.
