Pricing Positioning · Listicle

7 Pricing Page A/B Tests That Failed (And What Worked)

Seven failed pricing-page A/B tests from a single mid-market SaaS company's year-long optimization program, the one test that worked, and the pattern that explains both.

5 min read·For Founders·Updated Apr 19, 2026

A founder-led SaaS company we worked with in 2025 ran twelve pricing-page A/B tests across a year. Seven of them lost money. Four were statistical noise. One — a single test — produced a 14% lift in close rate that held across two subsequent cohorts. The reason seven lost and one won isn't in the designs; it's in what the team was optimizing for. The tests that failed optimized the pricing page in isolation. The test that worked optimized the buyer's conviction, which is a different variable.

1 of 12
is the hit rate on pricing-page A/B tests run by the median Series-B SaaS company — well below the hit rate conventional CRO programs achieve in other contexts.
Stratridge analysis of three mid-market client pricing-optimization programs, 2024–2026

Below, the seven that lost, the one that worked, and the pattern.

1. Adding a fourth tier "for enterprise"

The test: add an "Enterprise" tier above three existing tiers, with "Contact us" as the CTA. Hypothesis: larger deals would self-identify. Result: close rate on the mid tier dropped 8%. Mid-market buyers interpreted the Enterprise tier as a signal that they were in the wrong category of buyer, and stalled. The deals didn't move up — they moved sideways, into "let me re-evaluate whether we're buying this right now."

The lesson: tiers communicate social identity. Adding a tier above your ICP tells your ICP they're in the small-kid section. If the Enterprise tier exists, hide it behind a "larger teams" link below the grid.

2. Raising the annual-prepay discount from 15% to 20%

The test: increase the annual-prepay discount to 20% to improve cash collection. Result: the annual-to-monthly split shifted by 4 points (good), but overall close rate dropped 6% because monthly-first buyers read the larger gap as "annual is the real price and monthly is a penalty." Effective ARR per new deal fell rather than rose.

The lesson: discount gaps above 17% communicate that monthly pricing is punitive. Buyers who intended to evaluate monthly first resented the frame and opted out.
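A back-of-envelope sketch shows how that nets out. Everything below is an illustrative assumption (the $100/user/mo list price, ten seats, the 50% baseline annual share, the 10% baseline close rate) except the two shifts taken from the test itself: the 4-point annual-share gain and the 6% relative close-rate drop.

```python
# Back-of-envelope: why a deeper annual discount can lower revenue per visitor.
# All inputs are illustrative assumptions, not the client's actual figures.

LIST_PRICE_MO = 100   # assumed $/user/mo list price
USERS = 10            # assumed seats per deal

def first_year_revenue(annual_share: float, discount: float) -> float:
    """Blended first-year revenue per closed deal."""
    annual = 12 * LIST_PRICE_MO * USERS * (1 - discount)   # prepaid annual
    monthly = 12 * LIST_PRICE_MO * USERS                   # monthly, full price
    return annual_share * annual + (1 - annual_share) * monthly

# Baseline: 15% discount, assumed 50% annual share, assumed 10% close rate.
base = 0.10 * first_year_revenue(annual_share=0.50, discount=0.15)

# Variant: 20% discount; +4-pt annual share and a 6% relative close-rate
# drop, per the test result above.
var = (0.10 * 0.94) * first_year_revenue(annual_share=0.54, discount=0.20)

print(f"baseline revenue/visitor: ${base:.2f}")   # ~$1110.00
print(f"variant revenue/visitor:  ${var:.2f}")    # ~$1006.18
print(f"change: {var / base - 1:+.1%}")           # ~-9.4%
```

Under those assumptions, the mix gain is worth far less than the close-rate loss plus the deeper discount paid on every annual deal.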

3. Moving the "most popular" badge to the higher tier

The test: promote the higher tier to "most popular." Hypothesis: the social proof would pull buyers up. Result: the middle tier's share dropped 11 points; the high tier's share rose 6 points; but overall close rate dropped 4%. The badge made the page feel less trustworthy — buyers noticed the change was optimizing for the vendor, not for them.

The lesson: buyers can tell when a badge is a nudge versus a signal. "Most popular" only works when it's true and when the page doesn't look like it's been rearranged to manufacture the badge.
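The same arithmetic is worth running on the badge test: could the richer tier mix have paid for the close-rate drop? The tier prices and baseline mix below are illustrative assumptions, as is landing the leftover share on the low tier; only the share shifts and the 4% relative close-rate drop come from the test.

```python
# Back-of-envelope: does a richer tier mix offset a 4% relative close-rate drop?
# Tier prices and the baseline mix are illustrative assumptions; the share
# shifts (-11 pts mid, +6 pts high, remainder assumed to go low) and the
# close-rate drop are from the test result above.

PRICES = {"low": 1_200, "mid": 3_000, "high": 6_000}   # assumed annual $/deal

def revenue_per_visitor(close_rate: float, mix: dict) -> float:
    blended = sum(mix[t] * PRICES[t] for t in PRICES)   # $ per closed deal
    return close_rate * blended

base = revenue_per_visitor(0.10, {"low": 0.15, "mid": 0.60, "high": 0.25})
var  = revenue_per_visitor(0.10 * 0.96, {"low": 0.20, "mid": 0.49, "high": 0.31})

print(f"baseline: ${base:.2f}/visitor")    # ~$348.00
print(f"variant:  ${var:.2f}/visitor")     # ~$342.72
print(f"change: {var / base - 1:+.1%}")    # ~-1.5%
```

Even with a favorable price spread, the mix improvement roughly washes out, and the close-rate drop decides the outcome.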

4. Showing the monthly price in both per-user and total form

The test: display "$29/user/mo · $290/mo for 10" instead of just "$29/user/mo." Hypothesis: the total would feel concrete and make budget conversations easier. Result: close rate dropped 7%. Buyers saw the total and reacted to it as a budget item rather than a per-head cost. Per-user pricing that requires a little mental math is not a bug — it's a feature.

5. Removing the contact form in favor of self-serve checkout for the mid tier

The test: enable credit-card self-serve for the mid tier (previously sales-assisted). Hypothesis: friction was killing deals. Result: checkout volume rose, but the cohort showed 62% higher 90-day churn than the sales-assisted cohort. The friction wasn't killing deals — it was filtering deals. The sales conversation produced the conviction that carried the customer through onboarding.
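A quick way to size the filtering effect: how much extra checkout volume would the self-serve cohort need just to match the sales-assisted cohort on 90-day retained customers? The 62% relative churn increase comes from the cohort data; the baseline churn rates below are assumptions.

```python
# Break-even volume lift for self-serve, measured in 90-day retained customers.
# The 62% relative churn increase is from the test; baseline churn is assumed.

CHURN_LIFT = 1.62   # self-serve 90-day churn = 1.62x sales-assisted

def breakeven_volume_lift(base_churn: float) -> float:
    """Extra checkout volume needed so retained customers match baseline:
    (1 + v) * (1 - CHURN_LIFT * base_churn) = 1 - base_churn."""
    return (1 - base_churn) / (1 - CHURN_LIFT * base_churn) - 1

for c in (0.10, 0.15, 0.20):
    print(f"baseline 90-day churn {c:.0%}: "
          f"need {breakeven_volume_lift(c):+.1%} more checkouts")
# 10% -> +7.4%, 15% -> +12.3%, 20% -> +18.3% more checkouts to tread water
```

And that break-even only counts heads; it ignores the conviction (and the downstream expansion behavior) the sales conversation was producing.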

6. Highlighting "Save $X" on the annual toggle

The test: replace the annual-discount percentage with an absolute dollar figure ("Save $480/year"). Hypothesis: dollar amounts feel more tangible than percentages. Result: close rate dropped 5%. The dollar figure anchored the buyer on the discount rather than the value, and made the annual commitment feel like a coupon rather than a partnership. Some of the tested buyers volunteered "this feels like a retail pricing page."

7. Adding customer logos to the pricing page

The test: stack eight customer logos directly under the pricing grid. Hypothesis: social proof improves conversion. Result: close rate unchanged; qualitative feedback was neutral-to-mildly-negative. Multiple mid-market buyers mentioned that a wall of enterprise logos on the pricing page made them feel their company was too small. The logos belonged on the homepage, not the pricing page, where buyers are doing math and looking for reasons to disqualify themselves.

The one that worked · Adding a decision-path above the grid

The test that produced the 14% lift was not a pricing change. It was a two-sentence decision-path module above the tier grid, explaining which tier was right for which company size. Specifically: "If you're under 20 users, start with Team. If you're 20–100 users, Business is the right fit. Over 100, Enterprise is a conversation — here's what that looks like." Then the tiers, unchanged.

The seven failed tests all moved the buyer's attention toward the pricing page's mechanics — which tier, which price, which discount. The winning test moved the buyer's attention toward their own situation. The buyer left the page with a specific tier in mind and a reason for it, rather than a calculation still in progress.

The pattern

A/B tests on pricing pages that change the mechanics — price, tier count, discount framing — almost always lose. A/B tests on pricing pages that reduce the buyer's uncertainty about which tier is right for them almost always win. The framing is not "how do we optimize the page"; it's "how do we reduce the number of open questions the buyer has when they leave." Teams running pricing tests as conversion optimization miss this because conversion optimization was built for transactional e-commerce, where the buyer already knows what they want. In B2B SaaS, the buyer often doesn't know which tier is right until someone tells them, and the highest-value test is the one that tells them without requiring a sales call.

The pricing page is not an SKU grid. It's a disambiguation surface. Tests that treat it as an SKU grid will fail seven times out of twelve; tests that treat it as a disambiguation surface have better odds. This is the only pattern worth running a year of tests to learn.

Related capability

Positioning Audit

An eight-area diagnostic of your positioning, with evidence quotes, RAG synthesis, strengths, and a prioritized action plan.

See how it works