AI Strategist · Guide

The Strategist's Prompt Book: 50 Questions for Your AI Strategist

Fifty specific, context-loaded questions organized by strategic problem. Each prompt is designed to produce output a senior PMM would find useful, not generic text. Use as a reference; replace bracketed elements with your specifics.

13 min read·For PMM·Updated Apr 19, 2026

The 24-prompt library on the Stratridge hub covers the core positioning workflows. This reference extends it to 50 prompts organized by a broader set of strategic problems. Each prompt is designed to produce specific, non-generic output. The pattern across all of them: load the prompt with your actual specifics — current language, customer quotes, competitor context, named alternatives — and the AI produces work worth using. Load the prompt with vague requests and the AI produces average output.
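The fill-the-brackets pattern can also be scripted so a half-filled prompt never reaches the model. A minimal sketch, assuming Python and using prompt 21 from this library with hypothetical fill values (the claim and competitor name below are illustrative, not recommendations):

```python
# Illustrative sketch: filling a bracketed prompt template with real
# specifics before sending it to an AI model. The prompt text is from
# prompt 21 in this library; the fill values are hypothetical.
from string import Template

# The library's [brackets] rewritten as $placeholders so string.Template
# can substitute them and fail loudly on anything left unfilled.
PROMPT_21 = Template(
    'Our claim: "$claim." If I substituted $competitor\'s name for ours, '
    "would it still work? If yes, the claim isn't specific enough. "
    "Rewrite to make it ownable."
)

filled = PROMPT_21.substitute(
    claim="Cut onboarding time from 14 days to 3",  # hypothetical specific
    competitor="Competitor A",
)
print(filled)

# substitute() raises KeyError if any placeholder is left unfilled --
# a cheap guard against pasting a vague, half-templated prompt.
```

The design point is the loud failure: `substitute()` refuses to produce output with an empty bracket, which enforces the "load it with specifics" discipline mechanically.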

Category and market prompts (prompts 1–8)

1 · Category-fit validation

We operate in the "[X]" category. Here are 10 recent customer conversations: [paste quotes]. Do the conversations confirm we're in "[X]" or suggest we're being categorized differently? Specifically, which noun do customers use most often, and does it differ from our canonical category noun?

2 · Category-creation stress test

We want to position as a new category called "[Y]." Here's our evidence for this category's existence: [paste]. Stress-test the evidence. What's missing for this category claim to be credible to analysts, journalists, and sophisticated buyers?

3 · Adjacent-category opportunity scan

We're in category "[X]." Based on our product capabilities described here [paste], name three adjacent categories we could credibly serve, evaluate the competitive density in each, and recommend the most promising entry.

4 · Category evolution prediction

Our category has seen these specific shifts in the last 24 months: [paste]. Based on these patterns, what's the most likely next structural shift in this category, and when? What positioning would best anticipate it?

5 · Category-consolidation threat

Our category has [X] competitors. The top three have [Y%] market share combined. Based on historical patterns in similar categories, at what point does consolidation typically happen, and what positioning helps a company survive it?

6 · Market-size defensibility

We claim TAM of $[X]. Here's the calculation: [paste]. Stress-test this TAM claim. Where are the weakest assumptions, and what would a sophisticated investor push back on?

7 · Sub-category focus

Our broader category is "[X]." We serve a specific sub-category: "[Y]." Should we position primarily on the broader category or the narrower sub-category? What are the tradeoffs of each choice over a 24-month horizon?

8 · Category-noun variant testing

Our current category noun is "[current]." Test three alternatives: "[option 1]," "[option 2]," "[option 3]." For each, evaluate against these criteria: buyer recognition, competitive density, analyst coverage, search volume. Recommend the best choice with rationale.

ICP and audience prompts (prompts 9–15)

9 · ICP narrowing from evidence

Our stated ICP: "[broad]." Here's data from our last 50 closed-won deals: [metrics]. Based on the actual winning pattern, what is the narrower ICP our evidence supports? What are the tradeoffs of narrowing?

10 · Multi-persona deal analysis

Our typical enterprise deal involves these stakeholder types: [paste]. For each stakeholder, what concern dominates their evaluation, and how does our positioning address it today? Which stakeholder's concerns are we weakest on?

11 · ICP-language audit

Here's how our homepage describes our customer: "[paste]." Here's how our customers describe themselves: [paste 5 quotes]. Where do the two descriptions diverge, and does the divergence matter?

12 · Adjacent-ICP expansion

Our core ICP is [specific]. Based on our product capabilities [paste], which adjacent ICP could we credibly serve next? What positioning adjustment would the expansion require?

13 · Anti-persona identification

For our product, who is the anti-persona — the buyer who looks similar to our ICP but shouldn't buy from us? What signals would help us disqualify them early?

14 · Buyer-maturity segmentation

Our ICP includes buyers at different maturity levels with this problem: [paste problem]. Segment our ICP into mature, mid-maturity, and early-stage buyers. How should our positioning differ across the three?

15 · ICP-journey mapping

For a typical ICP buyer: what is the specific trigger event that causes them to start looking for our solution? What do they evaluate first? When does price become the discussion? Map the journey to our content and sales-enablement gaps.

Problem and claim prompts (prompts 16–24)

16 · Problem-statement sharpness

Our current problem statement: "[paste]." Rewrite three ways: more specific to a moment, more specific to a cost, more specific to a trigger. Evaluate each.

17 · Problem-evidence audit

We claim customers face this problem: "[paste]." Here's customer conversation data: [paste 10 quotes]. Does the data validate our problem framing, or does it suggest a different problem is the real one?

18 · Urgency amplification

Our category's problem is urgent when [specific conditions]. How do we make the urgency legible to buyers who haven't yet felt it? What content formats work best for this?

19 · Claim falsifiability test

Our Layer 5 claim: "[paste]." What specific evidence would prove or disprove this claim? If we have the evidence, cite it; if not, suggest what we'd need to collect.

20 · Claim-vs-evidence audit

Our claim: "[paste]." Our evidence: [paste — customer outcomes, benchmarks, named customers]. Are we under-claiming or over-claiming? How should the claim be adjusted to match what the evidence supports?

21 · Claim-ownership test

Our claim: "[paste]." If I substituted [Competitor A]'s name for ours, would it still work? If yes, the claim isn't specific enough. Rewrite to make it ownable.

22 · Claim-specificity upgrade

Our claim uses adjectives like "[paste]." Replace each adjective with a specific number, time, or falsifiable fact we could support. Produce the upgraded version.

23 · Multi-layer claim decomposition

Our top claim "[paste]" is actually composed of which specific sub-claims? Decompose them, then identify which sub-claims we can support with evidence today and which are aspirational.

24 · Counter-claim preparation

What's the strongest counter-argument a skeptical buyer could make to our claim "[paste]"? How should we respond?

Competitive prompts (prompts 25–32)

25 · Competitive-set audit

Our positioning brief names [Competitor A, B, C]. Our actual closed-lost data shows [paste]. Is the competitive set correct? Who should be added or removed?

26 · Competitor-move prediction

[Competitor X] recently [specific move]. What are they likely to do in the next 9 months? What signals should we watch to confirm the prediction?

27 · Build-vs-buy response

Some of our losses are to customers building internally. Our current response: "[paste]." Given the realistic cost of building [this capability], what would a more credible response be?

28 · Competitive-positioning gap

We position against Competitor A on [axis 1]. They position against us on [axis 2]. Is there an axis neither of us has claimed that we could credibly own?

29 · Category-leader displacement

[Competitor X] is the category leader. What specific positioning move would give us a credible displacement challenge? What would we need to invest to execute it?

30 · Incumbent defense

We're the incumbent in [specific segment]. A well-funded challenger is emerging. What positioning move defends our incumbent position, and how do we avoid the arrogance trap that usually sinks incumbents?

31 · Parity reframe

We're at feature parity with Competitor A. The buyer is evaluating us side-by-side. What axis moves the comparison off features and onto ground where we win?

32 · Competitive narrative mapping

Here are the three positioning narratives used by [Competitor A, B, C]: [paste]. Is there a fourth narrative available in our category that none of them is occupying? Could we credibly claim it?

Message consistency prompts (prompts 33–38)

33 · Cross-surface audit

Our homepage: [paste]. Our pricing page: [paste]. Our sales deck slide 3: [paste]. Score consistency across these three on category noun, ICP, claim, and voice. Name specific inconsistencies.

34 · Drift detection

Our positioning brief says: "[paste]." Here's a recent customer email: [paste]. Here's a recent blog post: [paste]. Do the operational surfaces match the brief? What's drifting?

35 · Messaging-variant generation

Our core claim: "[paste]." Generate three variants: one optimized for homepage hero, one for email subject line, one for LinkedIn post. Each should carry the same claim but match the surface's register.

36 · Voice-consistency check

Here are five pieces of content from different authors at our company: [paste]. Is the voice consistent? Where does it drift? What voice rules would bring them into consistency without flattening them?

37 · Platform-specific adaptation

Our homepage says: "[paste]." Produce LinkedIn, X, and email versions that preserve the core claim but match each platform's norms.

38 · Global-language consistency

Our English positioning: "[paste]." How should this translate to [specific market]? Are there cultural assumptions in our English version that won't land in the other market?

Launch and campaign prompts (prompts 39–43)

39 · Launch brief drafting

We're launching [feature] to [ICP]. The problem it solves: [paste]. The Layer 5 claim it supports: [paste]. Draft a one-page launch brief including positioning connection, target audience, primary message, proof, and CTA.

40 · Pre-mortem generation

Our launch of [feature] is in 3 weeks. Imagine it has failed at T+3 months. Walk backwards: what most likely went wrong? Produce the list of risks we should mitigate now.

41 · Launch-retrospective synthesis

Here are the post-launch metrics: [paste]. Here are 8 customer reactions: [paste]. What worked, what didn't, what would we do differently for the next launch?

42 · Campaign messaging generation

We're running a campaign to [specific goal]. ICP: [paste]. Generate three campaign-message variations, each emphasizing a different angle of our value proposition. Name the tradeoff between them.

43 · Launch-announcement drafting

Draft a launch announcement for [feature]. Five sections: one-sentence claim, why-now frame, substance, proof, action. No hype phrases; no "we're excited." Specific and outcome-focused.

Win/loss and customer-research prompts (prompts 44–47)

44 · Win/loss pattern synthesis

Here are 12 win/loss interview summaries: [paste]. What patterns emerge across these that aren't visible in any single interview? What strategic implications follow?

45 · Churn-risk detection

Here's a 30-minute customer conversation transcript: [paste]. Is this customer showing signs of pre-churn risk? What specific language or concerns would trigger retention intervention?

46 · Customer-voice extraction

Here are 20 customer quotes about our product: [paste]. Identify the 3 most common framings they use. Are any of these framings significantly different from how our marketing describes the product? Should we adapt?

47 · Interview-question design

We want to run win/loss interviews on [specific competitive situation]. Design the five-question interview script calibrated to this situation, and explain why each question is useful.

Strategic and board prompts (prompts 48–50)

48 · Board-update context

Summarize the quarter's strategic-relevant events: [paste bullet list]. Draft a 2-page board strategic-context memo structured as: market shift, company response, open questions, next-quarter focus.

49 · Strategic-narrative test

Here's our current corporate narrative: "[paste]." Is it differentiated from our two closest competitors' narratives: "[paste A]" and "[paste B]"? Where does it blur? How should we sharpen it?

50 · Five-year-view drafting

Based on where our category is heading and our current position, draft a 1-page view of what our company should look like in 5 years. Specifically: what category, what scale, what positioning, what competitive set. Flag the assumptions that would have to hold.

The reviewer discipline

Every prompt in the library produces draft work, not final work. Before using AI output in any external-facing artifact:

  • Verify every specific claim (numbers, named customers, citations). Hallucinations have decreased but persist.
  • Read the output through a "would a senior PMM produce this" filter. If it's plausible but bland, the prompt needed more specific context.
  • Check against your positioning brief for drift. AI tends toward category-average; your positioning should be specifically distinctive.
  • Rewrite for voice. AI defaults to generic professional tone. Your brand voice requires editing.

The prompts are tools. The reviewer discipline is what makes the tools useful. Teams that use the prompts without the review produce work that reads as competent but generic. Teams with the review produce work that carries specific insight. The difference is entirely in the human-in-the-loop, not in the AI.

Related Stratridge Tool

Analyst

AI strategy advice grounded in your own context — not generic playbooks.

The Analyst is a chat-based AI strategist that reads your Strategic Context, past audits, and competitive signals before answering. Ask it anything from "why are we losing to Competitor X" to "how should we reframe our pricing page" — and get answers that are actually about you.

  • Reads your own positioning data before responding
  • Grounded in audit findings and competitor signals
  • No hallucinated advice — evidence cited inline