
Win/Loss Analysis for Self-Serve (PLG) Funnels

Self-serve conversions don't have a sales conversation to analyze. The four data sources and the two interview types that replace the traditional win/loss interview in PLG funnels.

9 min read · For CMOs · Updated Apr 19, 2026

Traditional win/loss analysis depends on sales conversations. A prospect evaluates, talks to a rep, maybe gets a demo, makes a decision. The rep's notes, the prospect's questions, and the post-decision interview together produce the data. In a self-serve PLG funnel, none of that exists. The prospect signs up, uses the product for some number of days, and either converts to paid or doesn't. There's no rep, no demo, no evaluation conversation. The traditional interview has no analog.

The four data sources below — combined with two specific interview types — replace the traditional win/loss conversation for self-serve funnels. The resulting analysis is different from sales-led win/loss, and in some dimensions better: the behavior data is unambiguous, the sample sizes are large, the signal is continuous. In others it's worse: you can't ask why someone did what they did at the moment of decision.

We stopped calling our signup-to-paid analysis 'win/loss' because it was misleading people into expecting sales-led methodology. We call it 'funnel outcome analysis.' The naming change helped the team treat it as its own craft instead of a watered-down version of traditional win/loss.

Head of Growth, PLG developer-tools SaaS

The four data sources

Source 1 · In-product behavior data

The richest PLG data source: what users do inside the product. Which features they use, which onboarding steps they complete, which moments they drop off at, how long they spend in specific workflows.

This data is unambiguous and continuous. It captures actual behavior, not self-reported behavior. For PLG funnel analysis, it's the closest thing to ground truth available.

What it reveals: Where users get stuck. Which features paying users adopted that non-paying users didn't. Which onboarding steps predict conversion. Which usage patterns predict churn.

What it misses: Why users did what they did. Behavior data tells you what happened; it doesn't tell you the reasoning. A user who drops off at step 4 might be bored, confused, or called to a meeting. The data can't distinguish.
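Where to start with the behavior data: compare step completion between users who converted and users who didn't. A minimal sketch in Python, assuming a hypothetical per-user export with boolean step columns and a converted flag (all column names here are assumptions):

    import pandas as pd

    # Hypothetical export: one row per user, one boolean column per
    # onboarding step, plus a boolean `converted` flag. Names are assumptions.
    users = pd.read_csv("onboarding_events.csv")
    steps = ["step_1", "step_2", "step_3", "step_4", "step_5"]

    # Completion rate for each step, split by outcome. Large gaps mark
    # the steps most associated with conversion.
    summary = pd.DataFrame({
        "converted": users[users["converted"]][steps].mean(),
        "not_converted": users[~users["converted"]][steps].mean(),
    })
    summary["gap"] = summary["converted"] - summary["not_converted"]
    print(summary.sort_values("gap", ascending=False))

A gap at a step says the step matters; it doesn't say why. That's what the qualitative sources below are for.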

Source 2 · Signup-survey data

A short survey at signup or during early onboarding. Usually 3–5 questions covering how the user heard about you, what they're hoping to do, and what they're comparing you against.

The useful signup survey questions

• How did you hear about us?
• What are you hoping to accomplish?
• What are you comparing us against?

Keep the survey short. Five questions is the ceiling; three is better. Surveys that ask more than this have lower completion rates and worse data.
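If it helps to make the shape concrete, here's a sketch of the three-question survey as plain data, ready to feed whatever in-app survey tool you use. Field names and option lists are illustrative assumptions, not any tool's real API:

    # Illustrative schema for the three-question signup survey.
    SIGNUP_SURVEY = [
        {
            "key": "source",
            "prompt": "How did you hear about us?",
            "options": ["Search", "A colleague", "Social", "Other"],
        },
        {
            "key": "goal",
            "prompt": "What are you hoping to accomplish?",
            "options": None,  # free text: richest answers, lowest completion
        },
        {
            "key": "alternatives",
            "prompt": "What are you comparing us against?",
            "options": None,  # free text
        },
    ]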

Source 3 · Dormancy outreach responses

Users who sign up and then go inactive are the PLG equivalent of lost deals. Reaching out to them 14 days after their last activity with a brief survey produces data that the behavior data can't provide.

The outreach is low-touch: a single email with 2–3 specific questions. "What were you hoping to accomplish when you signed up?" "What stopped you from continuing?" "Did you end up using something else — and if so, what?"

Response rates are low (typically 5–12%, depending on the email's framing), but the sample volume is high enough (most PLG funnels have hundreds of dormant users per month) that even 5% produces meaningful data.
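The arithmetic holds up even at the low end. A minimal sketch, with the monthly dormant-user count as an illustrative assumption:

    # Back-of-envelope: expected qualitative sample from dormancy outreach.
    # The 400 dormant users/month figure is an illustrative assumption.
    dormant_per_month = 400
    for rate in (0.05, 0.08, 0.12):
        responses = dormant_per_month * rate
        print(f"{rate:.0%} response rate -> {responses:.0f}/month, "
              f"{responses * 3:.0f}/quarter")

At 5%, that's 20 responses a month, enough to see recurring answers within a quarter.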

Source 4 · Conversion-event qualitative notes

At the moment of conversion (paid signup, upgrade, or annual commitment), a one-field survey: "What made you decide to do this now?"

This question is uniquely valuable. Users are converting for specific reasons, and the reasons are fresh. Over months, patterns emerge — which features triggered upgrades, which usage moments preceded conversion, which external events (end of month, end of quarter) prompted commitment.

The conversion-moment capture has the highest signal-to-noise of any qualitative source in PLG. Invest in making the prompt clean and the field prominent.
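Free-text answers only become patterns if you tag them. A first-pass sketch, with keyword buckets as illustrative assumptions you'd refine as the first months of answers come in:

    from collections import Counter

    # Illustrative keyword buckets for "What made you decide to do this now?"
    TAGS = {
        "feature_trigger": ["export", "api", "integration", "sso"],
        "usage_limit": ["limit", "quota", "ran out", "cap"],
        "external_event": ["quarter", "budget", "end of month", "renewal"],
        "team_adoption": ["team", "colleague", "invite", "share"],
    }

    def tag_response(text: str) -> list[str]:
        text = text.lower()
        matched = [tag for tag, words in TAGS.items()
                   if any(w in text for w in words)]
        return matched or ["untagged"]

    counts = Counter()
    for note in ["Hit our seat limit and the team wanted SSO anyway"]:
        counts.update(tag_response(note))
    print(counts.most_common())

Keyword matching is crude; treat it as a starting point for monthly review, not a replacement for reading the answers.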

The two interview types

Behavior data and surveys tell you what happened. Two specific interview types fill in the why.

Interview type 1 · The dormancy interview

Users who went inactive 30–60 days ago, who didn't respond to the dormancy email, and who fit your ICP. The interview is 20 minutes, scheduled with a small incentive ($50 gift card is standard). The questions mirror the churn interview in some ways: what were you hoping for, what stopped you, what did you try instead.

The dormancy interview surfaces friction that the behavior data hints at but can't explain. Users who dropped off at onboarding step 4 — dormancy interviews reveal why. Maybe the step was unclear; maybe the user realized the product wasn't for them; maybe they ran out of time that day and never came back.
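Pulling the candidate list is a simple filter. A sketch assuming a hypothetical users table with a last-activity timestamp, outreach flags, and an ICP score (all column names and thresholds are assumptions):

    import pandas as pd

    users = pd.read_csv("users.csv", parse_dates=["last_active"])
    days_inactive = (pd.Timestamp.now() - users["last_active"]).dt.days

    candidates = users[
        days_inactive.between(30, 60)           # inactive 30-60 days ago
        & users["dormancy_email_sent"]          # got the outreach email
        & ~users["dormancy_email_responded"]    # ...and didn't respond
        & (users["icp_score"] >= 0.7)           # fits the ICP
    ]
    print(candidates[["user_id", "last_active", "icp_score"]].head(10))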

Interview type 2 · The paying-customer activation interview

The opposite of dormancy: users who did convert, interviewed 30–60 days after conversion. What made them convert? At what moment did the product click for them? What did they tell colleagues?

This interview type is underused. Most PLG companies interview non-converters; few interview converters to understand the conversion pattern. The paying-customer interviews reveal which product moments actually drove conversion — data that's invaluable for optimizing the onboarding flow.

The synthesis approach

Combining the four data sources and the two interview types produces a richer analysis than any single source provides. The synthesis structure:

Quantitative findings from behavior data + surveys: Which steps predict conversion, which channels produce the best ICP fit, which features correlate with paid signup.

Qualitative findings from interviews: Why users get stuck at specific steps, why users convert at specific moments, what users don't say in surveys but do say in conversations.

The pattern across both: Where the quantitative and qualitative agree, you have high-confidence findings. Where they disagree, you have questions worth investigating further.
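The agree/disagree rule is mechanical enough to sketch. The topic keys here are illustrative assumptions:

    # Findings present in both piles are high-confidence; findings in
    # only one pile become open questions for next month.
    quant = {"step_4_dropoff", "export_predicts_upgrade", "search_best_channel"}
    qual = {"step_4_dropoff", "pricing_confusion"}

    high_confidence = quant & qual    # both sources agree
    open_questions = quant ^ qual     # only one source saw it

    print("High-confidence:", sorted(high_confidence))
    print("Worth investigating:", sorted(open_questions))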

A typical monthly PLG funnel-outcome analysis produces 3–5 high-confidence findings, 2–3 patterns worth monitoring, and 1–2 open questions. This is roughly the same output as a traditional win/loss analysis, delivered through different methodology.

The operational cadence

The analysis is monthly, not quarterly. PLG funnels have enough continuous data flow that monthly synthesis captures fresh patterns; quarterly is too slow.

Monthly cadence, 4 hours of analyst time:

• Week 1: Pull behavior data, signup surveys, and conversion-event notes for the prior 30 days. 60 minutes.
• Week 2: Conduct 3 dormancy interviews and 1 paying-customer interview. 90 minutes of interview time plus scheduling.
• Week 3: Synthesize findings across sources. 60 minutes.
• Week 4: Write the one-page monthly note and route it to the relevant teams. 30 minutes.

Total: 4 hours per month. That's less time than most traditional win/loss programs require, with arguably richer data.

The biggest mistake PLG teams make

Using only behavior data. Behavior data is rich and unambiguous, so it's tempting to skip the qualitative sources. This produces analyses that tell you what is happening but not why — which means the remediation work is guessing at causes.

A specific example: behavior data shows a drop-off at onboarding step 4. The PLG team responds by A/B testing new copy for step 4. The copy doesn't move the drop-off. More tests; still no movement. Eventually someone runs a dormancy interview and discovers the real issue: step 4 requires data the user doesn't have yet, and they leave to find the data and forget to come back. The fix isn't new copy; it's moving step 4 later in the onboarding. Behavior data alone would never have surfaced that diagnosis.

The discipline: behavior data tells you where; qualitative data tells you why. Both are needed. PLG teams that run both produce remediation work that actually moves metrics; teams that skip qualitative run A/B tests that don't.
