Synthetic data and synthetic personas for market research

Does synthetic research actually work?

You've heard about synthetic data and synthetic personas for market research. The promise sounds compelling — faster answers, lower costs, intelligence in areas where traditional research can't reach.

But is it real? Or is it AI theatre dressed up as methodology?

Here's what the evidence shows — and what synthetic research does well for professional services firms.

The short answer: synthetic data works for the right questions

Peer-reviewed research from Stanford University showed that synthetic personas match human reasoning patterns with 85% consistency. That's not a limitation — that's the ceiling of human consistency itself. (Ask the same people the same question two weeks later, and they'll give you the same answer about 85% of the time.)

Research published in the Journal of Marketing (2025) showed 95% alignment between synthetic persona responses and real human data across multiple research contexts.

Who's using it: Microsoft, Nvidia, Anthropic, BP, J.P. Morgan, UK Foreign Office. These organisations make high-stakes strategic decisions. They're not experimenting with unproven methodology.

The pattern: Studies from Stanford, Harvard, NYU, and published research by Brand et al., Li et al., Dillion et al., and Sarstedt et al. consistently show 75–95% consistency between synthetic persona reasoning and real human reasoning.

The methodology is validated. The question is what it's good for — and what it isn't.

What synthetic research is (and isn't)

Here's what synthetic research does well

Synthetic personas surface how professional decisions work. Who's involved. What criteria actually matter. What gets you rejected before you're seriously considered. What's just expected versus what's genuinely differentiating.

Synthetic data reveals the parts clients can't or won't tell you. The criteria they apply without consciously knowing it. The reasons they'll never put in feedback forms. What happens in committee rooms you're not in.

Synthetic research lets you ask questions that would be career suicide with real clients. "What would make you fire us?" "When do you think we're overpriced?" "What do our competitors do better?"

Here's what synthetic research doesn't do

Synthetic personas can't predict what any specific client will do. Synthetic research shows you how decisions in your category typically work — not how your next pitch will go.

Synthetic research doesn't count buyers or estimate market size. The outputs of synthetic research are about understanding, not statistics. Percentages in findings are convergence indicators (how consistently something showed up), not population estimates.
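To make "convergence indicator" concrete: it's simply the share of persona runs in which a given theme surfaced. A minimal Python sketch (the themes and counts below are invented for illustration; this is not real study data):

```python
# Illustrative only: computing convergence indicators across persona runs.
# All themes and run contents below are invented for this example.
from collections import Counter

# Each entry is the set of themes one synthetic persona run surfaced.
runs = [
    {"responsiveness", "sector expertise", "fee predictability"},
    {"responsiveness", "fee predictability"},
    {"responsiveness", "sector expertise"},
    {"responsiveness", "fee predictability", "partner access"},
]

# Count how many runs each theme appeared in.
counts = Counter(theme for run in runs for theme in run)

for theme, n in counts.most_common():
    pct = 100 * n / len(runs)
    print(f"{theme}: surfaced in {pct:.0f}% of runs")
```

Here "responsiveness: surfaced in 100% of runs" is a convergence indicator, not a claim that 100% of real buyers care about responsiveness.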

Synthetic data doesn't replace client relationships. Your conversations with clients tell you about your relationship. Synthetic research tells you about your category.

The honest frame: Synthetic research is exploratory intelligence. It surfaces what you need to know before you talk to clients — and what clients can't (or won't) tell you even when you do.

The terminology of synthetic data

Synthetic data — intelligence generated from synthetic personas

Synthetic personas — AI-generated representations of professional roles and buyer types

Client Proxies — our term for the detailed models we build to represent specific decision-makers in your category. Not generic "CFO" personas, but models that reflect the constraints, accountabilities, and reasoning patterns of CFOs evaluating firms like yours for matters like yours.

The quality of synthetic research depends entirely on how well these models are built. Generic prompts produce generic outputs. Rigorous modelling produces actionable intelligence.

What synthetic research can't tell you

Being clear about limitations is how you distinguish rigorous methodology from AI theatre.

How your specific firm is perceived. Unless you're Deloitte or McKinsey, your firm's reputation isn't in any mainstream LLM's training data. Beware, though: that won't stop ChatGPT from confidently fabricating what it thinks clients believe about you.

Brand-specific sentiment. Again, unless you're BMW or Audi, you'll get confident hallucinations in response to questions like "Do clients prefer us or Competitor X?" That's a question for actual clients who know both firms.

Quantitative market sizing. "What percentage of GCs would buy this?" requires representative sampling of real humans.

Highly contextual decisions. If the answer depends on personal relationships, organisational politics, or unique circumstances, you need real conversations. A synthetic persona hasn't actually met the head of your M&A practice.

What synthetic research can tell you instead

We can't tell you whether clients see your firm as "innovative". However, we can tell you whether "innovation" matters in your category, what evidence buyers trust when evaluating innovation claims, and whether your competitors already own that territory.

That's often more useful than knowing your current perception — because it tells you where to go.

Why synthetic research works for professional services

Synthetic research works best when:

Buyers reason predictably based on role. A GC evaluating litigation lawyers faces the same accountability pressures as other GCs — regardless of personality, gender, age or race. A CFO approving a major engagement applies the same scrutiny as other CFOs. Professional roles, rather than personal characteristics, create consistent reasoning patterns.

Decisions involve multiple people you can't all access. In a client listening exercise, you might get the relationship partner's view. You won't get the risk committee's internal discussion. Synthetic research surfaces what's likely happening in rooms you're not in.

The questions that matter most can't be asked directly. "What would make you question our competence?" "What do you assume about firms our size?" These are the questions that reveal positioning opportunities — and the questions no client will answer honestly.

You need to iterate before you commit. Test your new positioning before the rebrand. Explore a service line before the investment. Understand a new market before you enter.

Why you can't just "ask ChatGPT" for synthetic data

Prompting ChatGPT with "You are a GC evaluating law firms — what matters to you?" isn't synthetic research.

The instant response tells you it's averaging patterns from training data, not modelling the reasoning of a specific type of decision-maker facing a specific type of decision.

What rigorous synthetic research in professional services requires

Knowing which dimensions matter. Not all GCs think alike. Industry, company size, risk appetite, prior firm experiences, internal politics — these shape reasoning. You need to know which dimensions are relevant and how they interact.

Knowing where LLMs are reliable — and where they fabricate. LLMs are good at replicating professional reasoning patterns. They're bad at knowing your firm's reputation and admitting uncertainty. If you don't know the boundaries, you'll get confident-sounding fiction.

Expertise that connects findings to action. Raw intelligence isn't strategy. Knowing what's differentiating versus what's just expected — and how to position against it — requires expertise in both the category and in persuasion.

How we apply proven synthetic research methodology

Asymmetric Strategic Intelligence (ASI) is synthetic research built specifically for professional services firms.

We build detailed Client Proxies representing the decision-makers in your category — distributed by role, sector, geography, firm size, and whatever dimensions define your buyers. Then we explore how they reason when evaluating firms like yours.
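One way to picture the difference between a Client Proxy and a one-line prompt is as a structured profile. The sketch below is purely illustrative; every field name and value is invented here and does not describe our actual models:

```python
# Illustrative only: a Client Proxy as a structured profile rather than
# a one-line "You are a GC" prompt. All fields and values are invented.
from dataclasses import dataclass, field

@dataclass
class ClientProxy:
    role: str                    # e.g. "General Counsel"
    sector: str                  # e.g. "financial services"
    company_size: str            # e.g. "FTSE 250"
    risk_appetite: str           # shapes how claims get scrutinised
    accountabilities: list[str]  # what this buyer answers for internally
    prior_firm_experiences: list[str] = field(default_factory=list)

gc = ClientProxy(
    role="General Counsel",
    sector="financial services",
    company_size="FTSE 250",
    risk_appetite="low",
    accountabilities=["board reporting", "regulatory exposure"],
)
print(gc.role, "with", gc.risk_appetite, "risk appetite")
```

The point of the structure is that reasoning is conditioned on interacting dimensions (role, sector, size, risk appetite) rather than averaged across everyone who has ever written about GCs online.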

What you get:

  • What actually drives decisions in your category — not assumptions, evidence
  • Who matters in buying decisions and what each role cares about
  • What's genuinely differentiating versus what everyone claims
  • What gets you rejected before you compete
  • How buyers interpret your current positioning versus how you think it lands

What you don't get: Predictions about specific clients. Statistical estimates of market size. Replacement for client relationships.

The difference: We don't just surface intelligence. Thirty years of direct response experience means we translate findings into positioning that lands — messaging that survives committee scrutiny and moves decisions.

Is synthetic research right for your situation?

If you're evaluating whether this methodology fits, start with a specific question. What do you need to know about how decisions work in your category? Then ask us.

We'll tell you honestly whether synthetic research can answer it — and where its limitations apply.

See if ASI fits your situation

Book a 30-minute discovery call. We'll explore your situation, identify your most critical intelligence gaps, and confirm whether ASI is the right approach for your specific question.

If it is, you'll know exactly what you'd receive and when. If it isn't, we'll tell you that directly.

Prefer email? Write to us at asi@taleist.agency.