AI Strategist · Guide

AI Strategist Prompt Library for Positioning Work

Twenty prompts that turn an AI strategist into a useful positioning tool — organized by positioning layer, each with the specific context the AI needs to produce non-generic output, and the reviewer discipline that keeps the output honest.

8 min read·For PMM·Updated Apr 19, 2026

The difference between an AI strategist that produces useful positioning work and one that produces generic output is almost entirely in the prompts. Vague prompts produce vague outputs; specific prompts with the right context produce outputs a senior PMM would recognize as thoughtful. Most teams never get past the vague stage — they ask "help me improve our positioning" and conclude that AI isn't useful for strategic work. The conclusion is wrong; the prompts were wrong.

The twenty prompts below are organized by the five positioning layers plus general audit and synthesis work. Each prompt includes the specific context the AI needs to produce non-generic output. Copy them, replace the bracketed context with your company's specifics, and expect substantially better output than a vague ask produces.
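If you reuse these prompts every quarter, it can help to keep them as templates and fill the bracketed context programmatically rather than pasting by hand. A minimal sketch using Python's standard-library `string.Template` — all company and competitor details below are hypothetical placeholders, not part of any Stratridge tooling:

```python
from string import Template

# Prompt 1 (category-noun testing) as a reusable template; the
# $-placeholders stand in for the bracketed context you would
# normally fill in by hand.
CATEGORY_NOUN_TEST = Template(
    "We are a $product_description targeting $icp. "
    'Our current category noun is "$current_noun." '
    'Competitor A uses "$competitor_a_noun"; competitor B uses "$competitor_b_noun." '
    "Our customers, when asked to describe us to peers, have said: $customer_quotes. "
    "Given this context, what are three alternative category nouns we should "
    "consider, and what are the specific tradeoffs of each relative to our "
    'current "$current_noun"? Don\'t give me generic options; evaluate '
    "specifically against our customer language."
)

# Fill in one company's specifics (illustrative values only).
prompt = CATEGORY_NOUN_TEST.substitute(
    product_description="workflow automation platform",
    icp="mid-market RevOps teams",
    current_noun="sales engagement platform",
    competitor_a_noun="revenue orchestration",
    competitor_b_noun="sales execution",
    customer_quotes='"it keeps our reps honest", "it runs our playbooks for us"',
)
print(prompt)
```

`substitute` raises a `KeyError` if any placeholder is left unfilled, which is a useful guard against shipping a prompt with a stray "[X]" still in it.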

Layer 1 · Category prompts

Prompt 1 · Category-noun testing

We are a [brief product description] targeting [ICP]. Our current category noun is "[X]." Competitor A uses "[Y]"; competitor B uses "[Z]." Our customers, when asked to describe us to peers, have said: "[quote 1]", "[quote 2]", "[quote 3]." Given this context, what are three alternative category nouns we should consider, and what are the specific tradeoffs of each relative to our current "[X]"? Don't give me generic options; evaluate specifically against our customer language.

Prompt 2 · Category-creation viability

We are considering positioning as a new category called "[proposed name]." The existing category in this space is "[incumbent category]." List the four things that usually have to be true for a category-creation move to work (market readiness, analyst interest, customer vocabulary, competitive whitespace), evaluate our situation against each, and give an honest go/no-go recommendation.

Prompt 3 · Category reclassification risk

Here are three recent analyst reports about our category: [paste or describe]. Here is our current positioning: [paste brief]. Are we at risk of being reclassified by analysts into a different category in the next 12 months? What specific signals would confirm or disconfirm this?

Prompt 4 · Category confusion in buyer conversations

Here are five snippets from recent sales calls where buyers seemed confused about what category we're in: [paste]. Identify the specific confusion pattern, propose the two or three positioning adjustments that would address it, and predict how each adjustment would affect our current messaging.

Layer 2 · Audience prompts

Prompt 5 · ICP narrowing

Our stated ICP is "[broad description]." Here is data from our last 50 closed-won deals: [company size, role, industry, typical use case]. Based on this evidence, what is the narrower ICP that our data actually supports? How does the narrower ICP differ from our stated ICP, specifically, and what are the implications for our homepage and pricing-page positioning?

Prompt 6 · Persona-specific messaging

Our primary ICP is [specific description]. Within that ICP, there are two buyer personas we sell to: [persona 1] and [persona 2]. Write three messaging variants for each persona — one emphasizing [angle 1], one [angle 2], one [angle 3]. For each variant, explain why it would resonate with that specific persona and not the other.

Prompt 7 · ICP expansion candidates

Our current ICP is [description]. Based on our product capabilities described here [paste], which adjacent ICPs could we credibly serve with minimal product work? Name three, evaluate each for market size and competitive density, and recommend which (if any) is worth pursuing next.

Layer 3 · Problem prompts

Prompt 8 · Problem-statement sharpness

Here is our current problem statement: [paste]. Rewrite it three ways — once more specific to a moment the buyer notices the problem, once more specific to a cost the buyer incurs from the problem, once more specific to a trigger event that forces the buyer to act. Evaluate which version would land best with our ICP.

Prompt 9 · Problem validation from customer language

Here are ten customer quotes about why they started looking for a solution: [paste]. Our stated problem is "[X]." What specific gap exists between our stated problem and the problem language our customers use? Is the gap cosmetic or material?

Prompt 10 · Problem urgency audit

Our category's problem is "[X]." Based on the current market context and any relevant trends, is the buyer's urgency around this problem increasing, stable, or declining? What would make urgency shift, and how would that affect our positioning?

Layer 4 · Alternative prompts

Prompt 11 · Competitive-set audit

Our positioning brief names [Competitor A, Competitor B, Competitor C] as our primary competitors. Based on these actual closed-lost reasons from our last 30 losses: [paste data], is this competitive set correct? Who should be added, who should be removed, and what's the evidence?

Prompt 12 · Build-internally response

Some of our losses are to buyers who decided to build this capability internally rather than buy. Our current response to the build-it-yourself alternative is "[paste]." Given the actual cost and complexity of building [this capability] (you can reason about this), what would a more honest response be? What's the tradeoff we should concede?

Prompt 13 · Emerging-competitor assessment

[Competitor X] raised a $[amount] Series B led by [investor]. Their narrative in the announcement was "[quote]." Their careers page is hiring [pattern]. What is this competitor likely to do in the next 9 months, and how should our positioning prepare?

Layer 5 · Claim prompts

Prompt 14 · Claim falsifiability

Our current Layer 5 claim is "[paste]." Is this claim falsifiable? If yes, what specific evidence would prove or disprove it? If no, how would we rewrite it to be falsifiable without making it less ambitious?

Prompt 15 · Claim-evidence audit

Here is our current claim: "[paste]." Here is the customer-outcome data we have: [paste metrics, named customers, time ranges]. Does our claim match our evidence? Is the evidence stronger than the claim (under-claiming) or weaker than the claim (over-claiming)?

Prompt 16 · Claim ownability

Here is our claim: "[paste]." If I substituted [Competitor A]'s name for ours in this claim, would it still be plausible? If yes, the claim isn't specific enough. Rewrite the claim so a competitor substitution would break it.

General audit and synthesis prompts

Prompt 17 · Brief-to-surface audit

Here is our positioning brief: [paste]. Here is our homepage hero copy: [paste]. Here is the first slide of our sales deck: [paste]. Here is our pricing-page framing: [paste]. Score each surface against the brief on a 0–10 scale for category-noun match, ICP match, claim match. Explain each score with specific evidence.

Prompt 18 · Win/loss pattern synthesis

Here are 10 win/loss interview summaries: [paste]. What patterns emerge across these interviews that aren't visible in any single one? What do the patterns suggest about our positioning?

Prompt 19 · Competitive-response drafting

Our competitor [X] just announced [specific event]. Our positioning brief is [paste]. Draft a 200-word internal memo predicting the operational changes this will produce at the competitor over the next 9 months, and recommend our positioning response.

Prompt 20 · Quarterly strategic-context synthesis

Here are the key strategic events from this quarter: [decisions made, competitive moves, customer patterns]. Write a 2-page strategic-context memo from the CMO to the board, structured as: what's shifting, what we're carrying as open questions, what decisions we've made.

The reviewer discipline

The prompts above produce drafts, not finished work. Run a five-point review before using any AI-produced positioning output: the prompts are the tool; the reviewer discipline is what makes the tool useful rather than harmful. Teams using the prompts without the review process produce positioning work that reads as competent but generic; teams with the review process produce work that's specific and sharp. The difference isn't in the AI; it's in the human in the loop that calibrates the output.
