
The Positioning Audit Score Distribution (Benchmark Data)

Audit scores from 420 B2B SaaS companies, broken out by stage and category: where the healthy scores live, where the long tail hides, and the three specific patterns that correlate with win-rate outcomes 18 months later.

6 min read · For CMOs · Updated Apr 19, 2026

A CMO running a positioning audit for the first time usually asks two questions. The first is "where do we stand." The second is "is that good." The first has a specific answer from the audit itself. The second, until recently, has been a shrug. The distribution below — drawn from 420 B2B SaaS companies across five stages and eight category clusters — is an attempt to make the second question answerable.

Caveats first: the sample is self-selected (companies that commissioned audits), skewed toward companies experiencing positioning pressure, and concentrated in North America. These caveats matter but do not invalidate the distribution. The rank-ordering and the relative positions of the benchmarks are reliable; the absolute scores may be slightly higher for a random-sampled population.

420 B2B SaaS audit scores collected 2023–2026, covering seed through Series D+, across 8 category clusters and 42 sub-categories. Source: Stratridge audit-score archive, 2023–2026.

The overall distribution

Audit scores run 0–100 across five equal-weighted layers (category, audience, problem, alternative, claim). The overall distribution:

The median score is 64. The mean is 62. The interquartile range is 55–74. Scores above 85 are rare and usually concentrated in specific company profiles (below).
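
To make the rollup arithmetic explicit, here is a minimal Python sketch of the equal-weighted five-layer score. The layer names come from the audit structure above; the example scores are hypothetical.

```python
# Minimal sketch of the scoring arithmetic: five equal-weighted layers,
# each scored 0-100. The example scores below are hypothetical.
LAYERS = ["category", "audience", "problem", "alternative", "claim"]

def overall_score(layer_scores: dict[str, float]) -> float:
    """Equal-weighted mean of the five layer scores."""
    missing = set(LAYERS) - set(layer_scores)
    if missing:
        raise ValueError(f"missing layer scores: {missing}")
    return sum(layer_scores[layer] for layer in LAYERS) / len(LAYERS)

# Hypothetical brief: strong on category, collapsed on alternative.
example = {"category": 78, "audience": 66, "problem": 70,
           "alternative": 35, "claim": 62}
print(overall_score(example))  # 62.2, near the sample median of 64
```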

The score range many CMOs treat as alarming, under 60, sits squarely inside the interquartile range, only a few points below the median of 64. This is not reassuring; it means the typical B2B SaaS company is operating with meaningful positioning drift. But it does mean that an audit result of 58 is not an outlier, and treating it as a company-specific emergency is often a misreading of a category-wide condition.

By company stage

The breakdown by stage reveals a specific pattern: scores are not monotonic with stage. They peak twice.

  • Seed (n=64): Median 58. Most scores fall between 48 and 68. Briefs are new, often aspirational, and short on evidence for their claims. The dominant failure is an adjective-heavy Layer 5 (claim).

  • Series A (n=94): Median 61. Slightly better than seed as the company has collected more evidence. Still weak on Layer 4 (alternative).

  • Series B (n=121): Median 68. The first peak. The company has enough data to ground its claims, and the PMM function is usually in place. Layer 1 (category) is often at its sharpest here.

  • Series C (n=87): Median 61. A notable decline. The company has expanded into new segments, and the brief that worked at Series B is starting to feel too broad. Layer 2 (audience) drift is common.

  • Series D+ (n=54): Median 67. The second peak. These are the companies that survived the Series C transition and explicitly repositioned. Those that didn't transition cleanly fall into the sub-50 range.

The Series C dip is the most reliable pattern in the data. Companies pass through a window where the brief they used to get to $30M is actively holding them back from reaching $100M, and the audit score reflects it.
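
The twin-peak shape is easy to verify mechanically. A small sketch using the stage medians reported above; a stage counts as a peak when its median exceeds its neighbors' (endpoints are compared against their single neighbor, which is why Series D+ qualifies):

```python
# Stage medians as reported in the list above.
stage_medians = [("Seed", 58), ("Series A", 61), ("Series B", 68),
                 ("Series C", 61), ("Series D+", 67)]

for i, (stage, m) in enumerate(stage_medians):
    left = stage_medians[i - 1][1] if i > 0 else float("-inf")
    right = stage_medians[i + 1][1] if i < len(stage_medians) - 1 else float("-inf")
    if m > left and m > right:
        print(f"peak: {stage} (median {m})")
# peak: Series B (median 68)
# peak: Series D+ (median 67)
```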


The three layers where drift concentrates

Scores across layers are not evenly distributed. Aggregating median scores layer by layer across the sample, one layer stands out:

Layer 4 is the weakest layer across the sample, by a substantial margin. The median brief either fails to name the real alternatives (building internally, doing nothing, adjacent competitors) or names them without offering honest responses to each. This single weakness accounts for a disproportionate share of below-median overall scores.

The implication for a CMO running an audit for the first time: if Layer 4 is your weakest layer, you are normal. Fixing Layer 4 also has the highest ROI of any layer-specific remediation, because most companies never do it, and completing Layer 4 is easy relative to layers like Layer 1 (category), where repositioning is structurally harder.
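
For readers replicating this view on their own audit archive, a minimal sketch of the aggregation: per-layer medians across a sample, with the weakest layer flagged. The three sample audits below are invented for illustration.

```python
from statistics import median

# Hypothetical sample of audits, each mapping layer -> score (0-100).
audits = [
    {"category": 72, "audience": 65, "problem": 68, "alternative": 41, "claim": 60},
    {"category": 80, "audience": 58, "problem": 71, "alternative": 38, "claim": 66},
    {"category": 69, "audience": 70, "problem": 64, "alternative": 52, "claim": 57},
]

# Median score per layer across the sample, then flag the weakest.
layer_medians = {layer: median(a[layer] for a in audits) for layer in audits[0]}
weakest = min(layer_medians, key=layer_medians.get)
print(layer_medians)
print(f"weakest layer: {weakest}")  # alternative, mirroring the sample-wide finding
```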

The 18-month outcome correlation

For a sub-sample of 180 companies where audit scores from 2023 could be cross-referenced with win-rate outcomes 18 months later, three specific patterns correlated with outcomes:

Companies scoring 70+ overall AND 60+ on every individual layer: Mean win-rate change over 18 months was +3 percentage points. No company in this group saw win rate decline more than 2 points.

Companies scoring 55–70 overall: Mean win-rate change over 18 months was +0.5 percentage points. High variance; outcome depended heavily on whether the audit was acted on.

Companies scoring under 55 overall OR under 40 on any single layer: Mean win-rate change over 18 months was -4 percentage points. The specific mechanism: companies with a collapsed single layer (often Layer 4) lost ground faster than companies with evenly moderate drift.

The key pattern: a single weak layer is more predictive of outcomes than an overall score. A company scoring 68 overall with a 35 on Layer 4 fared worse than a company scoring 62 overall with 55+ on every layer. The minimum layer score matters more than the average.
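
To make the comparison concrete, here are those two profiles as hypothetical layer scores; the individual layer values are invented to produce the stated overall and minimum.

```python
# Profile A: 68 overall but a collapsed Layer 4 (alternative).
company_a = {"category": 75, "audience": 72, "problem": 78,
             "alternative": 35, "claim": 80}   # mean 68, min 35
# Profile B: lower overall, but no layer below 55.
company_b = {"category": 65, "audience": 58, "problem": 62,
             "alternative": 55, "claim": 70}   # mean 62, min 55

for name, scores in [("A (68 overall, Layer 4 at 35)", company_a),
                     ("B (62 overall, 55+ everywhere)", company_b)]:
    avg = sum(scores.values()) / len(scores)
    print(name, "-> mean", avg, "min", min(scores.values()))
# Per the outcome data, B fared better: the minimum drives the prediction.
```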

How to read your own score

Given the distribution, four practical rules for interpreting an audit result; a sketch of the combined decision logic follows the rules.

If the overall score is in the 60–75 range, you're in the healthy middle. Normal for a scaling SaaS company. Remediation should focus on specific weak layers, not a full rebuild.

If a single layer scores under 40, fix that layer first. Regardless of overall score. A collapsed single layer predicts the future outcome more reliably than the average. The fix is usually 2–4 weeks of focused work on that layer, not a full brief rewrite.

If the overall score is under 50, schedule the full refresh. The brief is not operating. Incremental fixes will not recover the situation within the timeframe that matters. The full refresh costs $50–150K; the cost of not running it is usually higher.

If the overall score is above 85, audit more frequently. Companies at this level have earned positioning discipline, and the next drift — when it comes — will be subtle. Semi-annual audits catch it; annual audits let it build to the point where recovery is harder.
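
A minimal sketch of the combined decision logic, assuming the thresholds above. The ordering puts the collapsed-layer check first, since that rule applies regardless of overall score; the 50–60 and 75–85 bands have no explicit rule above, so the fallback is an assumption.

```python
def read_audit_score(overall: float, layer_scores: dict[str, float]) -> str:
    """Apply the four reading rules, in priority order."""
    if min(layer_scores.values()) < 40:
        return "fix the collapsed layer first (usually 2-4 weeks of focused work)"
    if overall < 50:
        return "schedule the full refresh; incremental fixes will not recover it"
    if overall > 85:
        return "strong; move to semi-annual audits to catch subtle drift"
    if 60 <= overall <= 75:
        return "healthy middle; remediate specific weak layers, not a full rebuild"
    # No explicit rule covers 50-60 or 75-85; starting from the weakest
    # layer in those bands is our assumption.
    return "between bands; start from the weakest layer"

# The Layer-4-collapsed profile from the outcome section:
print(read_audit_score(68, {"category": 75, "audience": 72, "problem": 78,
                            "alternative": 35, "claim": 80}))
# -> fix the collapsed layer first (usually 2-4 weeks of focused work)
```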

What the distribution does not capture

The score is a quantification, and quantifications miss texture. Two specific things the score cannot capture:

First, the quality of the positioning. Two companies can both score 75 — one with a mediocre brief that hits all the layer requirements, and one with a sharp, memorable brief that hits the same requirements. The score treats them as equivalent. A human reviewer doesn't. The score is a floor, not a ceiling, and a high-scoring brief can still be unmemorable.

Second, the operational gap between brief and reality. A company with a perfect brief nobody operates from can score 85 and still have every problem a 40-scoring company has. The audit measures the brief; the audit does not measure whether the brief is operating. Running both measurements — brief-score plus brief-operating-score — is a different audit and produces different insight.

The distribution above is a starting point for interpreting an audit score, not a scorecard. The useful output of an audit is the specific findings and the remediation plan, not the number. The number helps calibrate — am I in the healthy middle or outside it — and the calibration is worth having, especially for CMOs who have never had a reference point before.

Related Stratridge Tool

Positioning Audit

Find out exactly where your positioning is losing buyers.

Run an eight-lens diagnostic of your site against your own strategic intent. Stratridge reads your pages, compares them to your positioning goals, and surfaces the specific gaps costing you deals, with a prioritized action plan.

  • Eight-lens diagnostic in under two minutes
  • Evidence pulled directly from your own site
  • Prioritized action plan, not a generic checklist
Run a free Positioning Audit →
