The Complete Positioning Audit Framework (2026 Edition)

A repeatable audit of how clearly your positioning lands — the eight lenses, the scoring rubric, and the reason most internal audits confirm what leadership already wanted to hear.

13 min read · For CMOs · Updated Apr 19, 2026

Most positioning audits fail for the same reason most annual budgets fail — nobody runs them the same way twice. The board asks "is our positioning working," the CMO assembles a deck, the deck is reviewed, the deck is filed. Six months later the question comes up again and nobody remembers what was actually concluded, whether the next audit scored better or worse, or whether any of the recommendations shipped. The exercise performs rigor without producing it.

A working positioning audit is a repeatable diagnostic, not a one-off presentation. It scores the same eight things every time, produces the same artifact every time, and answers one question a quarter: is our positioning clearer, sharper, and more defensible than it was ninety days ago, or has it drifted?

71%
of positioning audits we've reviewed produce no measurable change in the positioning brief within ninety days of being delivered. (Stratridge review of 48 internal audits, 2026)

Why most audits don't move the needle

Before the framework, the failure modes. In our client work these are the patterns that burn hours without producing change:

  • The audit is scoped as a deliverable, not a diagnostic. The team treats "the audit" as a slide deck due in three weeks. The pressure is to ship the deck. Scoring, comparison, and root cause are skipped in favor of narrative polish.
  • Scoring is narrative, not numeric. "Our positioning is strong in X and weak in Y" is a description. "Our category-noun clarity scores a 4 of 10, down from a 6 last quarter" is a measurement. Without numbers, the next audit can't compare.
  • The audit is run by the team that wrote the positioning. Self-audits inherit the original biases. The PMM team grades its own homework and finds it passing; a twenty-minute call with one lost-deal AE would contradict it.
  • Recommendations aren't time-boxed. "We should sharpen our differentiation statement" is a wish. "Ship a revised differentiation statement on the pricing page by May 15" is a commitment. The former produces meetings; the latter produces shipped changes.
  • The loop is open. No one owns the follow-up audit. The recommendations don't have a due date, and the next review is triggered by a new crisis, not a calendar.

Fix those five and the audit becomes an operating discipline. Keep them and it stays a ritual.

A positioning audit is a diagnostic, not a deliverable.

The eight lenses

Every piece of positioning work passes or fails on eight dimensions. The lenses below are the ones we use in Stratridge's own audit product — not because they're canonical, but because they're the ones that correlate with shipped changes when we track what teams actually fix.

Each lens gets scored 1–10 by the auditor, with a sentence of evidence attached to the score. No lens is scored without evidence; a 7 without a quote, a number, or a specific artifact reference isn't a 7, it's a feeling.
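
One way to make the evidence rule mechanical rather than aspirational is to refuse to construct a score without it. A minimal sketch in Python; `LensScore` and its field names are illustrative, not Stratridge's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LensScore:
    """One lens, one score, one line of evidence (illustrative shape)."""
    lens: str       # e.g. "category-noun clarity"
    score: int      # 1-10, per the rubric below
    evidence: str   # a quote, a number, or a specific artifact reference

    def __post_init__(self):
        if not 1 <= self.score <= 10:
            raise ValueError(f"{self.lens}: score must be 1-10, got {self.score}")
        if not self.evidence.strip():
            # A 7 without evidence isn't a 7, it's a feeling.
            raise ValueError(f"{self.lens}: no score without evidence")
```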

The scoring rubric

One rubric, eight lenses, same values every audit:

  • 1–3 — Failing. The lens is either absent or actively working against you. Example: no named category noun anywhere in the top-of-funnel surfaces.
  • 4–6 — Present but soft. The lens exists but is inconsistent, underproven, or contradicted somewhere in the stack. Example: a category noun on the homepage that's absent from the pitch deck.
  • 7–8 — Working. The lens is consistent, evidenced, and defensible against a sharp competitor. Example: category noun is consistent across homepage, pricing, deck, and CEO podcast; supported by two named customer outcomes.
  • 9–10 — Canonical. The lens is not just working — it's shaping how the market talks about you. Example: prospects use your category noun in first-call discovery, unprompted.

Most early-stage teams score in the 3–5 range across the board. Mature teams score 6–8 on most lenses and 3–4 on one or two where attention has lapsed. A team scoring 9+ on more than two lenses is either being audited by a friend or genuinely exceptional — check the evidence.
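
The bands are simple enough to encode directly, which keeps every audit's scoring consistent. A sketch of the mapping, using the band names above:

```python
def rubric_band(score: int) -> str:
    """Map a 1-10 lens score to its rubric band."""
    if not 1 <= score <= 10:
        raise ValueError(f"score must be 1-10, got {score}")
    if score <= 3:
        return "Failing"
    if score <= 6:
        return "Present but soft"
    if score <= 8:
        return "Working"
    return "Canonical"

rubric_band(4)  # "Present but soft"
```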

Running the audit: a two-week sequence

The audit is not a one-session exercise. It spans two weeks of elapsed time, most of which is waiting on inputs. The calendar matters because it forces the auditor to actually pull the artifacts rather than describe them from memory.

The sequence is deliberate. Days 1–7 are pulling and scoring from artifacts — what the company says. Days 8–10 are triangulation — what reps, customers, and prospects actually hear. The delta between the two is the audit.
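
If both phases are scored on the same 1–10 scale, the delta is literal arithmetic. A minimal sketch under that assumption (the framework describes the triangulation phase qualitatively, so the shared scale is an interpretation, not a prescription):

```python
def drift_delta(said: dict[str, int], heard: dict[str, int]) -> dict[str, int]:
    """Per-lens gap between what the artifacts claim (days 1-7) and what
    reps, customers, and prospects report hearing (days 8-10).
    Positive = the company claims more than the market hears.
    Assumes both phases are scored on the same 1-10 scale."""
    return {lens: said[lens] - heard[lens] for lens in said if lens in heard}
```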

The prerequisites checklist

Before running the audit, confirm the inputs exist. Most audits stall on day three because a team realizes their positioning brief is either missing, three versions deep in Drive, or out of sync with what PMM has been presenting. Fix that first or the audit scores are noise.

What to do with a low score

Low scores are the point. An audit that returns all sevens and eights is either flattering or inaccurate — probably both. The mechanism is simple: a low score on a lens triggers one specific change to one specific surface, shipped in the next thirty days (see the sketch below).

  • Low category noun clarity — rewrite the homepage hero and the pricing page's top headline to use the same single noun. Ship within fourteen days.
  • Low proof density — commission one new case-study interview in the next thirty days, add two quantified outcomes to the homepage within forty-five days.
  • Low message consistency — run a single-afternoon rewrite pass on the pitch deck and the sales-onboarding deck. The drift is usually two or three slides.
  • Low pricing-signal match — audit the pricing page against the positioning brief with the PMM and head of sales in the same room for one hour. Most discrepancies surface in that hour.
  • Low update cadence — put a quarterly calendar entry on the PMM lead's calendar for "positioning brief review" and on the head of sales's calendar for "battle card refresh." That single discipline fixes the structural problem.

The recommendations are boring on purpose. Positioning work doesn't require new frameworks — it requires a working loop between the scored artifact and the next shipped surface.
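
Because each remediation pairs one lens with one surface and one deadline, the list above collapses into a lookup table. A sketch; the lens names paraphrase the list, the default thirty-day window comes from the mechanism above, and the trigger threshold is an assumption:

```python
from datetime import date, timedelta

# Hypothetical remediation table: lens -> (surface to change, days to ship).
# The 14- and 45-day windows come from the list above; the rest default to 30.
REMEDIATIONS = {
    "category noun clarity": ("homepage hero + pricing-page headline", 14),
    "proof density":         ("case-study interview + homepage outcomes", 45),
    "message consistency":   ("pitch deck + sales-onboarding deck", 30),
    "pricing-signal match":  ("pricing page vs. positioning brief", 30),
    "update cadence":        ("quarterly review calendar entries", 30),
}

def triggered_actions(scores: dict[str, int], threshold: int = 4):
    """Yield (lens, surface, due date) for every lens scoring below the
    threshold. Treating the rubric's 'failing' band (below 4) as the
    trigger is an assumption, not the framework's stated cutoff."""
    today = date.today()
    for lens, score in scores.items():
        if score < threshold and lens in REMEDIATIONS:
            surface, days = REMEDIATIONS[lens]
            yield lens, surface, today + timedelta(days=days)
```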

The artifact the audit produces

Every run ships one document. Same structure, same length, same sections. It's not a slide deck.

  • Page 1 — the eight scores, a one-sentence evidence line per score, and a single-sentence summary.
  • Page 2 — the three ranked recommendations, each with surface, owner, and due date.
  • Page 3 — the three triangulation interview notes, quoted, with the drift delta from the written positioning called out.
  • Page 4 — comparison to the previous audit, same eight scores. If this is the first audit, the page is blank and labeled as such.

Four pages. The PMM, the CMO, the head of sales, and the CEO can each read it in under seven minutes. The board sees the page-1 summary and the page-4 comparison and no more. Density and brevity are the point — an audit that requires a thirty-slide deck to communicate will get filed before it gets read.
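
Because the structure never changes, the artifact is easy to type and to diff against the prior run. A hedged sketch of the four-page shape; all field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """Page-2 entry: one surface, one owner, one due date."""
    surface: str
    owner: str
    due: str  # ISO date, e.g. "2026-05-15"

@dataclass
class AuditArtifact:
    # Page 1: eight scores, one evidence line each, one-sentence summary.
    scores: dict[str, int]
    evidence: dict[str, str]
    summary: str
    # Page 2: three ranked recommendations.
    recommendations: list[Recommendation]
    # Page 3: triangulation interview notes, with drift called out.
    interview_notes: list[str]
    # Page 4: the previous audit's scores; empty (and labeled) on run one.
    previous_scores: dict[str, int] = field(default_factory=dict)
```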

What changes after three audits

The first audit produces baseline scores, a short list of recommendations, and a realization that the team has been operating on looser positioning than it thought. The second audit, ninety days later, tests whether the recommendations shipped. The third audit starts to surface second-order drift — the category noun held, but the proof density slipped because the case-study pipeline dried up; the pricing page scores held, but the CEO's latest conference talk introduced a new frame that hasn't made it back to the website.

By the third audit, the CMO has a dashboard. Eight scores, four quarters of trajectory, three open recommendations at any given time. The positioning function becomes legible to the board — not because the deck improved, but because there's a number to point at.
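
The dashboard itself is just those scores stacked by quarter. A minimal sketch:

```python
def dashboard(audits: list[dict[str, int]]) -> dict[str, list[int | None]]:
    """Eight lenses by N quarters: the trajectory view the CMO reads.
    A lens missing from a given quarter shows as None."""
    lenses = sorted({lens for audit in audits for lens in audit})
    return {lens: [audit.get(lens) for audit in audits] for lens in lenses}
```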

That legibility is the real output. Positioning work is notoriously hard to defend inside companies that run on metrics; the audit is the instrument that turns it into a metric without turning it into a vanity one.

What this looks like in Stratridge

Stratridge's Positioning Audit runs the eight-lens scoring automatically against your own site, pricing page, and most recent public materials, and produces the four-page artifact described above — scored, evidenced, and compared against the prior run. The triangulation interviews are still conducted by humans; the audit stack does the scoring, the comparison, and the recommendation ranking. The question to ask is not whether you need an audit; every positioning function needs one. The question is whether you want to run it four times a year by hand.
