
How I Use AI to Accelerate Research-Led UX Strategy

I use AI to save ~20% of the time spent on UX research synthesis without sacrificing rigor—freeing up capacity for strategic problem framing and stakeholder influence. Includes a case study of turning 400+ observations into quantitative prioritisation using a hybrid scoring mechanism.

Full read: 23 minutes | Quick scan: 5 minutes | TL;DR: 1 minute

AI as co-pilot, not autopilot—tools that amplify human judgment, not replace it. Photo by Eric Krull on Unsplash.

TL;DR

1-minute read

As a Lead UX Strategist, I use AI to accelerate research-led discovery—saving ~20% time on synthesis and documentation so I can spend more time on strategic problem framing, stakeholder influence, and product collaboration.

AI handles the time-consuming parts of research (sentiment analysis, scoring mechanisms, report drafting). I handle the strategic parts (problem reframing, opportunity identification, roadmap influence).

My approach: forward planning from the start, obsessive PII removal (critical in regulated industries like insurance), structured data for better AI inputs, and always validating against raw data.

AI is my research co-pilot, not my strategy co-pilot.


My Philosophy on AI in UX

2-minute read

I'm a Lead UX Strategist who specialises in discovery-led, research-driven design. My strategic work—problem reframing, opportunity identification, stakeholder influence, roadmap input—requires human judgment, business context, and years of domain expertise.

AI doesn't do my strategic thinking. It accelerates the time-consuming parts of research so I have more time for strategy.

When I kick off a research study, I meticulously map my end-to-end workflow and identify where AI can ethically and efficiently act as my research co-pilot—not replace my judgment.

Here's the trade-off: AI takes on the time-consuming mechanics; I keep the strategic thinking.

The result: 20% time savings on research synthesis, which means 20% more time for the strategic work that actually moves the needle—understanding users, reframing problems, and influencing product decisions.

But AI is only as good as the human judgment guiding it. I validate everything. I remove PII obsessively. And I always go back to the raw data to sense-check what AI tells me.

AI is a research co-pilot—not a replacement for the skills required to be a great UX strategist.

This isn't about cutting corners. It's about doing better work, faster.


The 400+ Observations Case Study

7-minute read

The Challenge

In October 2025, I led By Miles' first-ever remote, moderated member interview study. Three one-hour sessions exploring how members understand mileage usage and cost information.

The problem: Three participants. Eight themes. 400+ observations. A senior product manager who needed answers fast.

Here's how I used AI to turn qualitative research into quantitative prioritisation—without sacrificing depth or rigor.

Phase 1: Manual Foundation (6 Days)

What I did WITHOUT AI: watched every session video twice, manually captured the 400+ observations, and removed all PII before anything went near an AI tool.

Why this matters: AI can't watch videos. It can't read body language. It can't contextualise what a user does versus what they say. That human observation is irreplaceable.

This is where strategic judgment happens—identifying which observations matter and why.

Phase 2: AI-Assisted Analysis (3 Days)

1. Sentiment Analysis (Using ChatGPT)

I fed ChatGPT small chunks of the spreadsheet—one task/question at a time—and asked it to extract sentiment and up to 7 key insights.

Why 7? Not based on a framework—I found 5 wasn't enough, and 10 was too many to work with efficiently across multiple participants.

Critical lesson: I sense-checked every single insight against the raw transcripts. ChatGPT hallucinates. It makes up quotes. It confuses one participant's feedback with another's if you don't keep conversations short and focused.

Error rate: Roughly 15-25% of ChatGPT's output needs correction—either hallucinations, misattributions, or generic insights that aren't grounded in data.
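To make the chunking discipline concrete, here's a rough sketch of the same pattern using the OpenAI Python SDK. I actually worked in the ChatGPT interface, so the model choice and prompt wording below are illustrative stand-ins, not my real prompts:

```python
# A minimal sketch of one-task-per-fresh-conversation, assuming the
# OpenAI Python SDK. Model and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def analyse_task(task_name: str, observations: list[str]) -> str:
    """One task's observations per request: a fresh context every time,
    which is what prevents the cross-participant bleed described above."""
    prompt = (
        f"Observations for task '{task_name}' (PII already removed):\n"
        + "\n".join(f"- {o}" for o in observations)
        + "\n\nFor each observation, label the sentiment "
          "(Positive / Neutral / Negative / Idea) and extract up to "
          "7 key insights overall."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```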

2. Empathy Categorisation

I asked ChatGPT to analyse each observation and populate four empathy columns: Says, Thinks, Does, and Feels.

This is based on empathy mapping principles, adapted for spreadsheet format because mapping 400+ observations onto a traditional empathy map would be counterproductive.

Why "Does" matters most: What users do (or don't do) is more reliable than what they say—especially when you're offering a £50 Amazon voucher to participate. As a strategist focused on improving the app experience, I weight observed behaviour highest because it reflects actual member needs, not just stated preferences.

3. Creating a Hybrid Scoring Mechanism

This is where AI became genuinely transformative.

I struggle with writing very complex Google Sheets formulas. Without AI, I would have spent 2 days trying to build a basic scoring system—or more likely, given up and manually prioritised insights with the product team using a RICE matrix.

Instead, I worked with ChatGPT to create:

Metric | Calculation | Purpose
Evidence Weight | Does = 3, Says = 2, Feels = 2, Thinks = 1 | Observed behaviour is most reliable
Sentiment Weight | Negative = 3, Idea = 2, Neutral = 1, Positive = 0 | Prioritises actionable insights
Lens Weight | Pain = 3, Neutral = 2, Gain = 1 | Gives more weight to pain points
Theme Frequency | Count of observations sharing a theme | Identifies recurring patterns
Opportunity Score | Sentiment + Lens + log(1 + Frequency) | Highlights impactful, recurring issues
Pain Count | Observations where Lens = Pain for a theme | Identifies problem areas
Gain Count | Observations where Lens = Gain for a theme | Identifies positive experiences
Tension Index | Absolute difference between Pain and Gain counts | Spots conflicting feedback

But—and this is critical—I didn't blindly accept these formulas.

I manually stepped through the calculations, questioned ChatGPT's methodology, and refined the scoring until it felt authentic and defensible. Some metrics (like Tension Index) I ultimately decided not to use in my final recommendations because they didn't pass my gut-check test.

AI enabled sophistication I couldn't achieve alone—but human judgment validated every formula.
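For the curious, here's a small Python sketch that mirrors the scoring table above. The real mechanism lived in Google Sheets formulas; the rows, theme names, and column layout below are made up for illustration:

```python
# Mirrors the scoring table above in Python. Assumes each observation row
# carries Theme, Evidence (Does/Says/Thinks/Feels), Sentiment, and Lens.
import math
from collections import Counter

EVIDENCE_WEIGHT  = {"Does": 3, "Says": 2, "Feels": 2, "Thinks": 1}
SENTIMENT_WEIGHT = {"Negative": 3, "Idea": 2, "Neutral": 1, "Positive": 0}
LENS_WEIGHT      = {"Pain": 3, "Neutral": 2, "Gain": 1}

# Hypothetical rows: (theme, evidence, sentiment, lens)
observations = [
    ("Mileage anxiety", "Does",  "Negative", "Pain"),
    ("Mileage anxiety", "Says",  "Negative", "Pain"),
    ("Top-ups",         "Feels", "Positive", "Gain"),
]

theme_frequency = Counter(theme for theme, _, _, _ in observations)
pain_count = Counter(t for t, _, _, lens in observations if lens == "Pain")
gain_count = Counter(t for t, _, _, lens in observations if lens == "Gain")

def opportunity_score(theme: str, sentiment: str, lens: str) -> float:
    # Sentiment + Lens + log(1 + Frequency): the log damps frequency so
    # one noisy theme can't drown out everything else.
    return (SENTIMENT_WEIGHT[sentiment]
            + LENS_WEIGHT[lens]
            + math.log(1 + theme_frequency[theme]))

for theme, evidence, sentiment, lens in observations:
    print(f"{theme}: evidence={EVIDENCE_WEIGHT[evidence]}, "
          f"opportunity={opportunity_score(theme, sentiment, lens):.2f}")

for theme in theme_frequency:
    tension = abs(pain_count[theme] - gain_count[theme])  # Tension Index
    print(f"{theme}: pain={pain_count[theme]}, "
          f"gain={gain_count[theme]}, tension={tension}")
```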

The result: multiple analytical "views" I could filter and sort, including "Top Opportunities" and "Most Friction".

Phase 3: Report Writing (Hours, Not Days)

How AI helped:

I created two documentation worksheets in the Google Sheet: "Start Here" and "Key / Column Definitions".

These weren't originally intended for AI—they were for stakeholders. But they turned out to be a stroke of genius for re-educating ChatGPT in a fresh conversation.

My process:

  1. Started a new ChatGPT conversation (to avoid context bleed).
  2. Fed it the final spreadsheet.
  3. Had it read "Start Here" and "Key / Column Definitions" to understand the structure.
  4. Outlined my preferred report format (Executive Summary → What We Learned → What We Don't Know → Next Steps).
  5. Asked ChatGPT to draft sections using a top-down approach (TL;DR first, detailed sections later); a rough sketch of steps 1-5 follows this list.
  6. Heavily edited for tone, nuance, accuracy, scannability, and plain English.
  7. Ran everything through Grammarly for proofreading.
  8. Made final edits based on Grammarly suggestions and my own judgment.
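As a rough illustration of steps 1-5 compressed into one fresh conversation (again via the API rather than the ChatGPT interface I actually used; the worksheet snippets and wording are placeholders, not my real content):

```python
# Hypothetical sketch of the re-education pattern: structure first, data
# second, format last, all in a brand-new conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start_here = "This sheet holds 400+ coded observations from 3 interviews..."
column_definitions = "Theme = ..., Sentiment = ..., Lens = ..., Scores = ..."
sheet_export = "Participant,Task,Observation,Empathy,Sentiment,Lens,Theme\n..."

prompt = (
    "You are helping draft a UX research report.\n\n"
    f"How to read the data:\n{start_here}\n\n{column_definitions}\n\n"
    f"Data:\n{sheet_export}\n\n"
    "Format: Executive Summary -> What We Learned -> "
    "What We Don't Know -> Next Steps.\n"
    "Work top-down: draft the TL;DR first, then each section."
)
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # then: heavy human editing
```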

Strategic framing was mine. AI handled the grunt work of structuring and drafting.

Time saved: What would have taken several days took a few hours.

The Outcome

Stakeholder reception:

I presented the spreadsheet at our Product Team "Show & Tell."

"Speechless."
— Head of Product (and he's not one to offer much emotion)

"You've found a way of analysing qualitative research using quantitative analysis methods. That's impressive!"
— Product Designer I line manage

The entire team asked about the scoring methodology and agreed it's a fantastic reusable template for future studies.

Strategy impact:

The scoring mechanism didn't just organise insights—it gave the product team a defensible framework for prioritising initiatives. "Top Opportunities" became discussion points for roadmap planning. "Most Friction" drove immediate consideration for quick wins. The hybrid approach I built is now the team's standard for turning research into strategic decisions.

The study also proved to leadership that we could run member research efficiently and effectively—paving the way for more discovery work to inform product strategy.

Time saved:

Approach | Time
Without AI | ~11-12 days (9 days synthesis + 2-3 days report writing)
With AI | ~9.5 days (9 days synthesis + a few hours report writing)
Efficiency gain | ~20%

That's 2 days freed up for strategic work: stakeholder collaboration, problem reframing, roadmap input, and discovery planning.


How AI Supports Research-Led Strategy

4-minute read

1. Making Insights Memorable Through Storytelling

At By Miles, our member base is incredibly diverse. A 22-year-old city driver has completely different needs than a 65-year-old rural driver. Generic insights don't work—stakeholders need to understand specific journeys.

I use AI to help craft member journey narratives that bring insights to life. My prompt framework:

Tell this member's story like Pixar's John Lasseter would—focusing on their emotional journey, the friction they encounter, and what they need from us.

Why Pixar? Because Lasseter is one of the greatest storytellers of the modern era. Pixar stories work because they're emotionally resonant, universally relatable, and structured around clear challenges and resolutions.

Example:

Rather than presenting "37% of members feel anxious about running out of miles," I craft a narrative:

Meet Sarah, a 45-year-old teacher who bought By Miles to save money on her short commute. She loves seeing her savings grow—until she books a trip to Scotland. Suddenly, she's panicking: 'Will I run out of miles? Should I top up now or later? What happens if I go over?' Her anxiety isn't about understanding the cost—it's about not feeling in control.

This approach helped the product team reframe the problem from "cost comprehension" to "confidence and control"—which completely changed our solution approach.

Process:

  1. I identify the insight manually (from research).
  2. I outline the member's context, friction points, and emotional state.
  3. I ask Claude or ChatGPT to help craft the narrative using the Pixar framework.
  4. I refine for authenticity, tone, and strategic framing.

Time saved: Turns 1-hour stakeholder prep into 20 minutes, but the strategic framing is still mine.

2. Validating Discovery Exercises

Working at a fast-growing startup like By Miles means balancing speed with rigor. I use AI to validate and accelerate discovery exercises—but I always start with my own thinking first.

My approach: I run each exercise myself first, then use AI to stress-test and extend the output. I apply this across the discovery methods I lean on most: Abstraction Laddering, Positive/Negative Inversion, and Five Whys.

Critical principle: I always generate my own ideas first. I set a timer (usually 15 minutes) or a minimum number of ideas (usually 10) before using AI.

Why? To avoid AI anchoring my thinking. My strategic framing comes first. AI helps me see what I might be missing.

What AI hasn't done: reframe problems better than I would manually. It has simply sped up how I identify the right problems to solve, breaking larger problem areas down into hyper-focused ones.

3. Competitive Scanning

For competitive analysis, I've found AI helpful in pinpointing similar insurers outside the UK that aren't direct competitors but offer similar propositions—especially startup "disruptors" in other regions.

Example: I ask ChatGPT to identify pay-per-mile or usage-based insurance startups in Scandinavia, South America, or Australia that I can learn from.

Reality check: Without AI, I could still find these companies through extensive Googling, LinkedIn, or startup platforms like Wellfound. AI just helps me find them faster—most of the time, though not always.

Where AI falls short: There have been occasions where I've uncovered similar companies manually that ChatGPT didn't return. AI accelerates the grunt work of scanning, but it doesn't replace strategic judgment about which competitors matter and why.

4. Sacrificial Concepts (Image Generation)

I occasionally use ChatGPT's image generation (DALL-E) to create rough visual concepts for workshops or stakeholder discussions—deliberately low-fidelity to avoid fixation on polish.

Example: Generating 3-5 different visual approaches to explaining pay-per-mile insurance benefits before asking a designer to refine the winning concept.

Why "sacrificial"? These aren't meant to be polished. They're meant to spark discussion and test strategic directions quickly without committing design time.

5. Documentation Efficiency

Confluence planning documents: I outline the structure and key points → ChatGPT drafts sections → I refine for tone, accuracy, and context.

Time saved: Turns 2-hour documentation tasks into 30-minute tasks, freeing up time for strategic work.

6. Sense-Checking Wording (Using Claude + Grammarly)

Before sharing research reports or stakeholder presentations, I run the wording through Claude first, then Grammarly.

Why Claude over ChatGPT for this? I find Claude more nuanced with language, better at British English, and more thoughtful about tone—particularly important when communicating with senior stakeholders.


When AI Fails (And Why That Matters)

2-minute read

The Google Gemini Deep Research Disaster

For complex strategic questions—like member attitudes toward safer driving or the use of driving data for personalised insurance—I experimented with Google Gemini's "Deep Research" feature.

The promise: AI conducts hours of research in minutes, synthesising credible sources and delivering strategic insights.

The reality: Gemini made up insights, fabricated sources, and presented false information as fact. Sources that looked legitimate and credible—complete with author names, publication dates, and URLs—turned out to be completely invented.

What I learned: AI can't be trusted for strategic research that requires accuracy and depth.

Deep, complex topics still require human expertise, domain knowledge, and rigorous verification. Fact-checking AI output often takes longer than doing the research manually would have.

My Approach Now

For strategic discovery work, I rely on:

The lesson: AI accelerates the grunt work. It can't replace strategic judgment, deep subject-matter expertise, or the rigor required for business-critical research.

This experience reinforced my philosophy: AI is a research co-pilot, never autopilot.


Where AI Doesn't Belong

2-minute read

I explicitly DO NOT use AI for:

Conducting live user interviews — Human empathy, follow-up questions, and reading the room are irreplaceable. AI can't build rapport, adapt in real-time, or pick up on subtle cues that reveal deeper insights.

Watching research videos — Body language, tone, hesitation, facial expressions—these tell the story the transcript misses. At By Miles, I watch each video twice because what members do often contradicts what they say. AI can't see what I see.

Making strategic decisions — AI informs. Humans with business context decide. I use AI to surface insights and patterns, but I make the call on what matters, why it matters, and how it should influence product strategy.

Final recommendations without validation — I always verify AI-generated insights against raw data. Hallucinations are real. Made-up quotes happen. I never present AI output without rigorous sense-checking against original transcripts.

Stakeholder persuasion — Trust is built through human connection. AI can draft a presentation, but I deliver it, answer questions, handle objections, and build buy-in through relationship and credibility.

Design critique — Nuance, taste, and judgment can't be outsourced. AI can suggest improvements, but it can't evaluate quality, appropriateness, or brand alignment the way a seasoned strategist can.

Strategic research on complex topics — Google Gemini Deep Research fabricates sources and insights. For questions requiring accuracy and depth (like member attitudes toward driving data or safer driving incentives), I always do the research manually—even if it takes longer. Verification is non-negotiable.

Assumption mapping — I map assumptions myself based on years of domain expertise, business context, and customer understanding. AI doesn't have the strategic lens or organisational knowledge to identify the right assumptions to test.

Roadmap prioritisation — While AI helped create the scoring mechanism for research insights, product roadmap decisions require business context, risk assessment, technical feasibility, and stakeholder alignment that AI can't provide. I lead through evidence-based design, but the strategic judgment is human.


Principles I Follow

3-minute read

1. Remove PII Obsessively

Working in insurance (a heavily regulated industry) has made me paranoid about Personally Identifiable Information (PII)—in the best possible way.

Before sharing anything with AI, I scrub every detail that could identify a member from transcripts, observations, and notes.

I have a mental checklist. This protects members and ensures compliance with FCA regulations and By Miles' internal data governance policies.

Why this matters strategically: In regulated industries, one data breach can destroy trust and trigger regulatory action. My rigorous approach to PII protection isn't just ethical—it's a competitive advantage when working in fintech, healthtech, or any industry handling sensitive data.
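As an illustration of the kind of first-pass scrub I mean, here's a crude regex sketch. This is not a compliance tool; the patterns are illustrative, they won't catch names, and a human check remains the final gate:

```python
# A crude, assumption-laden first pass at PII scrubbing. Patterns are
# illustrative; names and free-text identifiers still need manual review.
import re

PII_PATTERNS = {
    "EMAIL":    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b"),
    "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious PII with typed placeholders before any AI step."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach Jo on jo@example.com or 07123 456789, postcode SW1A 1AA."))
# -> "Reach Jo on [EMAIL] or [UK_PHONE], postcode [POSTCODE]."
# Note: the name "Jo" survives this pass -- hence the human checklist.
```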

2. Keep Conversations Short and Focused

ChatGPT loses context in long threads. It confuses participants. It hallucinates.

My rule: One task or participant per conversation. When I need to analyse multiple participants, I start fresh each time.

Why: Context bleed is real. In the 400+ observations study, I learned the hard way that feeding multiple participants into one conversation caused ChatGPT to attribute Participant A's feedback to Participant B. Starting fresh prevents this.

3. Always Validate Against Raw Data

I never trust AI output blindly. Every insight, every quote, every theme gets cross-referenced with the original transcripts.

Error rate: Roughly 15-25% of ChatGPT's output needs correction—either hallucinations, misattributions, or generic insights that aren't grounded in data.

Most common errors: hallucinated quotes, cross-participant misattributions, and generic insights not grounded in the data.

The discipline of validation is where strategic judgment happens. AI surfaces patterns. I determine which patterns matter.
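A cheap automated first pass can shrink the validation workload before the manual sense-check (never instead of it). This hypothetical sketch flags any AI-attributed quote that doesn't appear verbatim in the transcript:

```python
# Flag AI-attributed "quotes" that don't appear word-for-word in the
# transcript. A first filter only; the manual read-through still happens.
def unverified_quotes(ai_quotes: list[str], transcript: str) -> list[str]:
    """Return quotes NOT found verbatim (whitespace/case-insensitive)."""
    haystack = " ".join(transcript.split()).lower()
    return [q for q in ai_quotes
            if " ".join(q.split()).lower() not in haystack]

# Hypothetical data for illustration:
transcript = "I never know when I'll run out of miles, so I top up early."
ai_quotes = [
    "I never know when I'll run out of miles",  # genuine -> passes
    "Topping up is confusing and stressful",    # fabricated -> flagged
]
print(unverified_quotes(ai_quotes, transcript))
```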

4. Structure Data for AI

AI can't read FigJam. It struggles with unstructured text dumps.

My workflow: capture observations in FigJam, then transfer them into a structured spreadsheet with clearly defined columns before any AI analysis.

Why this matters: The 400+ observations study worked because I structured data intentionally. Without that structure, AI would have been useless.

Strategic principle: Garbage in, garbage out. The quality of AI output depends entirely on the quality of human-prepared inputs.
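To show what "structured" means in practice, here's a hedged illustration of the row format. The column names are inferred from the scoring table earlier, not my exact sheet:

```python
# Illustrative row structure for AI-ready research data; column names are
# inferred from the scoring mechanism, not the author's actual sheet.
import csv, io

COLUMNS = ["Participant", "Task", "Observation",
           "Empathy", "Sentiment", "Lens", "Theme"]

rows = [
    ["P1", "Check mileage", "Re-opened the usage screen three times",
     "Does", "Negative", "Pain", "Mileage anxiety"],
    ["P2", "Top up miles", "Said the top-up flow 'just worked'",
     "Says", "Positive", "Gain", "Top-ups"],
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(COLUMNS)
writer.writerows(rows)
print(buffer.getvalue())  # one small, paste-ready chunk per task
```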

5. Use AI to Enable Work I Wouldn't Attempt Manually

The Google Sheets scoring mechanism is a perfect example. Without AI, I wouldn't have built it—not because I didn't want to, but because I lacked the advanced formula expertise and it would have taken 2+ days I didn't have.

AI didn't replace my judgment. It enabled sophistication I couldn't achieve alone.

Other examples run throughout this piece: member journey narratives, report drafting, Confluence documentation.

The key: I always provide the strategic thinking first. AI amplifies it.

6. Forward Planning from the Get-Go

The moment I start a research study, I'm already mapping the end-to-end workflow and asking where AI can ethically and efficiently help, and where it can't.

Planning beats improvisation every time.

The 400+ observations study worked because I planned the workflow before I started:

  1. Manual observation capture (AI can't do this)
  2. PII removal (non-negotiable)
  3. Structured data transfer (enables AI analysis)
  4. AI-assisted sentiment analysis (saves time)
  5. AI-powered scoring mechanism (enables sophistication)
  6. Human validation (catches errors)
  7. AI-assisted report drafting (accelerates documentation)
  8. Human editing and strategic framing (ensures quality)

Every step was intentional. That's why it worked.


The Most Important Lesson

1-minute read

Forward planning is everything.

When I kick off research, I meticulously map my end-to-end workflow and identify where AI can ethically and efficiently act as my research co-pilot—not replace my strategic judgment.

Context is key. Well-structured prompts make all the difference. And knowing what AI can't do is just as important as knowing what it can.

AI is a research co-pilot—not a replacement for the skills required to be a great UX strategist.

It can't watch videos. It can't read body language. It can't make strategic decisions. It can't build trust with stakeholders. It can't reframe problems better than I can.

But when used thoughtfully, it can help me analyse faster, synthesise deeper, and document more efficiently—freeing up 20% more time to do the work that actually moves the needle: understanding users, reframing problems, influencing product decisions, and driving strategic outcomes.

AI handles the grunt work. I handle the strategy.


What This Means for My Work

1-minute read

Efficiency: I can complete research studies ~20% faster without sacrificing depth or rigor—freeing up 2+ days per study for strategic work.

Sophistication: I can build hybrid quant/qual scoring systems I wouldn't realistically build by hand, giving product teams defensible frameworks for prioritisation.

Scalability: I've created a reusable template that the entire product team can leverage for future studies, raising the bar for how we turn research into strategy.

Ethics: Working in insurance has taught me to be rigorous about PII removal and regulatory compliance—a competitive advantage in any regulated industry (fintech, healthtech, legal, etc.).

Strategic focus: By offloading time-consuming synthesis and documentation to AI, I spend more time on high-value work: stakeholder influence, problem reframing, discovery planning, and product collaboration.


Let's Talk AI in UX

If you're hiring
I'm seeking fully remote Lead UX Strategist roles (£80k–£95k, core UK hours) where discovery and strategic thinking drive product decisions.
View my CV | See my case studies

If you're curious about this approach
I'm always happy to discuss AI in UX research or talk about how this methodology could work for your team.
Connect on LinkedIn | Email me
