Using NotebookLM to focus on the human element in UX research

James Mellor, Head of UX

06.03.26


UX teams are rarely short on research. Transcripts. Survey data. Analytics exports. Workshop notes. A Miro board full of good intentions. The real bottleneck is what happens next.


If insight doesn’t travel, it dies. It sits in a folder, gets “shared”, and quietly stops influencing decisions. Then the team defaults back to opinions, urgency, and whoever is loudest in the room.

NotebookLM has helped me with the unglamorous part of research: turning a pile of inputs into something teams can actually use. Not as a replacement for thinking. Not as a magic button. More like a research assistant that is only allowed to speak when it can point to the source.

The trust problem with AI in research

Most AI tools are good at producing confident output. That’s the problem.

In UX, you need to know where an answer came from. If a decision affects conversion, retention, operational cost, or brand trust, “the model said so” is not evidence.

What I like about NotebookLM is the constraint: you give it the source material, and it answers from that material alone, citing where each point came from. That changes the feel of the work.

Instead of “asking AI”, I’m interrogating a defined set of research inputs. I can check the citations. I can challenge the summary. I can track back to the original quote or note.

What I actually upload (and what I don’t)

I’m not doing anything exotic. The value comes from volume, consistency, and having everything in one place.

Typical inputs I’ll add:

  • Interview transcripts (or notes if transcripts are not available)
  • Stakeholder workshop notes
  • Usability study insights
  • Survey results (raw data plus a cleaned summary)
  • Analytics exports (eg GA4 snapshots)
  • Competitor reviews and desk research
  • The list goes on…

A small but important point: I treat this like a shared evidence base, not a dumping ground. If the inputs are messy, the outputs will be messy.

Cutting synthesis time without cutting corners

Before this, synthesis meant a lot of manual work:

  • cross-referencing spreadsheets with transcripts
  • copying quotes into themes
  • scanning workshop notes for patterns
  • trying to keep the narrative coherent across sources

That work still matters. It just doesn’t all need to be done by hand. NotebookLM can speed up the first pass:

  • surfacing repeated themes
  • pulling relevant quotes to support a point
  • answering “what did users say about X?” across multiple sources
  • summarising what’s in the data before I add judgement and context

In practice, that can turn days of wrangling into a few focused hours for certain types of projects. Not every project. Not every dataset. But enough to matter.

The gain is not “faster research”. The gain is more time on the parts only humans can do: interpretation, trade-offs, and turning insight into decisions.

A real example: accelerating a handover on a complex project

On a recent project for a large Premiership rugby club, the research volume was heavy.

Multiple rounds of interviews, analytics, internal stakeholder input, and competitor review. Midway through delivery, I needed to bring another team member up to speed quickly.

Normally, that’s slow and expensive:

  • a stack of docs to read
  • a few catch-up sessions
  • lots of “sorry, where did that come from?” questions
  • and a decent risk of missing nuance

Instead, I set up a NotebookLM workspace with the project research inputs and created a few starting artefacts:

  • a concise summary structure (useful as a base for a deck or walkthrough)
  • an audio overview of the key themes (useful for getting context quickly)
  • a place where they could ask questions and trace answers back to sources

The difference was obvious in the first working session.

They didn’t need a guided tour of the basics. They went straight to the tricky bits: tensions between stakeholder goals and user behaviour, where the analytics contradicted assumptions, and what we could actually change within delivery constraints.

That’s the point. Handover is not about “knowing everything”. It’s about getting to good decisions faster, without losing evidence or nuance.

Making research easier for clients to engage with

Clients are busy. Product teams are busy. Everyone is busy. Not everyone has time (or appetite) for a 60-page research report. And they shouldn’t have to.

One thing NotebookLM has helped with is packaging insight in formats that travel better. The Audio Overview feature is a good example. It turns the content into a conversational summary, which makes it easier for stakeholders to absorb:

  • on a commute
  • between meetings
  • away from a desk

It’s not a replacement for a proper read-out. It’s a bridge into the evidence so that the eventual discussion is higher quality.

I’ve also found it useful to tailor summaries by audience using the same underlying sources:

  • a senior stakeholder usually needs implications and risk
  • a product team needs friction points, blockers, and what to change first
  • delivery teams need clarity on scope and constraints

Same evidence. Different emphasis. Less rework.

Where AI stops and human judgement starts

This bit matters, so I’ll say it plainly. NotebookLM does not replace UX professionals.

It does not:

  • decide what research you should run
  • spot what’s missing from your evidence
  • replace interviews, observation, or lived experience
  • understand organisational politics and delivery constraints
  • take responsibility for a decision

It’s only as good as what you feed it. If the research design is weak, the output will be weak. If the inputs are biased, the summary will reflect that. If you ask a leading question, you’ll get a neat answer to the wrong problem.

My rule is simple: AI can help with scale and speed. Humans handle meaning, context, and responsibility.

How this changes the way I run research

The best outcome here is not “AI content”. It’s research that gets used.

In my work at Trunk, NotebookLM supports three practical shifts:

1) Faster access to evidence
Less time hunting, more time deciding.

2) Better alignment earlier
More stakeholders can engage with the actual research, not a second-hand summary.

3) Less wasted effort
When insights are easier to revisit, teams are less likely to cover the same ground again and again.

That is what makes research commercially useful: it reduces decision risk and keeps delivery moving.

A practical closing thought

AI in UX is not about replacing researchers. It’s about removing friction in the workflow so that more time goes into judgement, collaboration, and making the product better.

NotebookLM is not perfect. It still needs QA against sources. It still needs a responsible human in the loop. But as a way to make research easier to interrogate, easier to share, and harder to ignore, it’s been genuinely useful.

If your team is sitting on a pile of research that nobody is using, that’s fixable.

If you want help applying AI safely in research and product work, we can support you with a practical pilot, governance, and a workflow your team will actually stick to.

Next step: use the form below to book a short call and we’ll work out whether a pilot makes sense.



Get in touch to speak with our UX experts: call +44 (0) 161 711 1000 or email hello@trunk.agency