
Founder OS — NotebookLM + Claude Code

claude code series — day 07

you've been paying Claude
for research it shouldn't be doing

there is a free tool that handles all of it.
NotebookLM — powered by Google Gemini — processes your docs, APIs,
YouTube tutorials, entire codebases.
Claude just builds from the output.

notebooklm — free
infinite memory for Claude
zero token waste
no hallucinations on outdated APIs
no bloated CLAUDE.md

research and build — separated

two tools, two jobs, zero overlap. this is what eliminates hallucinations.

01 — research
drop your sources into NotebookLM
PDFs, API docs, YouTube tutorials, entire codebases. NotebookLM processes everything using Gemini and generates a structured implementation plan.
free — Google pays the tokens
02 — handoff
pass the grounded plan to Claude Code
NotebookLM outputs a structured, source-anchored implementation plan. Claude doesn't research. it receives the plan and executes from it.
the key handoff moment
03 — build
Claude Code just builds
no hallucinations on outdated APIs. no token burn on context gathering. no vague outputs from stale training data. pure execution from today's source.
token-efficient output
most people stuff their CLAUDE.md with docs and API references trying to give Claude "context."
that is the wrong job for CLAUDE.md.

NotebookLM handles all that for free. CLAUDE.md stays focused on rules and behavior only.

two ways to use it — you choose

either path gives Claude access to everything you've ever fed into NotebookLM. permanent. searchable. available in every future session — no re-explaining, no re-uploading.

install
$ uv tool install notebooklm-mcp-cli
or: pip install notebooklm-mcp-cli  |  pipx install notebooklm-mcp-cli
$ nlm login
what you can do
$ nlm research start "shadcn v4 components"
$ nlm notebook create "My Project"
$ nlm source add MyProject --url "https://ui.shadcn.com/docs"
$ nlm notebook query
$ nlm studio create MyProject
$ nlm share public MyProject
drop any URL, PDF, YouTube video, Google Drive file, or codebase as a source. Claude gets infinite, permanent memory from everything you feed it. you never explain the same thing twice.
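taken together, the commands above chain into the research-then-handoff flow. a sketch using only the subcommands listed; the notebook name and URL are the illustrative values from the examples, not required names:

```shell
# research side: ground NotebookLM in today's docs, then ask it for a plan
nlm notebook create "My Project"
nlm source add MyProject --url "https://ui.shadcn.com/docs"
nlm notebook query    # ask for a structured, source-anchored implementation plan

# handoff side: paste the resulting plan into Claude Code and let it build.
# Claude executes from the plan. it does not research.
```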
install — one command
$ claude mcp add notebooklm npx notebooklm-mcp@latest
Chrome opens → sign in with Google → restart claude code → /mcp to verify
inside claude code — just talk to it
"Add this URL to my library"
"How does the new Sheet component work?"
"Search my notebooks for authentication patterns"
"What did we decide about the database schema last week?"
Claude queries NotebookLM autonomously as a tool — no switching windows, no copy-paste. everything you've ever fed it is permanently available. your entire knowledge base, inside every session.

3 workflows that actually work

not theoretical. these are the exact use cases that convinced me to switch.

before
building with shadcn docs
Claude Code pulls from its training data — which was last updated months ago. it uses deprecated component APIs. you get build errors. you spend an hour debugging packages that no longer exist.
with notebooklm
drop the latest shadcn docs into NotebookLM
NotebookLM extracts the exact new components, props, and patterns from the current docs. Claude Code builds your dashboard using only those components.
first shot. no outdated package errors. no debugging session.
before
watching a YouTube tutorial
you watch an hour-long tutorial on Python async patterns. you take notes. you copy some code. Claude tries to replicate it from stale training data and gets the patterns slightly wrong.
with notebooklm
drop the tutorial URL into NotebookLM
NotebookLM transcribes and summarizes the exact patterns from the video. Claude Code generates working code from that summary — not from training data from 2024. from today's source.
working implementation from the current tutorial. zero drift.
before
onboarding a new engineer
new teammate joins. you spend 3 hours on calls walking them through the codebase. they still ask you questions for the next 2 weeks because context doesn't transfer cleanly.
with notebooklm
drop your codebase into NotebookLM
NotebookLM generates a structured onboarding document — architecture overview, key patterns, gotchas, decision log. it can also generate an audio walkthrough your new teammate can listen to.
your new teammate gets context without asking you anything.

CLAUDE.md has a 200-line auto-load limit

Claude only reads the first 200 lines of your CLAUDE.md automatically. everything after that requires manual loading. this changes how you should structure it.
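you can check where you stand with a one-liner. a minimal sketch, assuming your CLAUDE.md sits at the project root:

```shell
# count CLAUDE.md lines and flag anything past the 200-line auto-load window
lines=$(wc -l < CLAUDE.md)
if [ "$lines" -gt 200 ]; then
  echo "CLAUDE.md: $lines lines. everything past line 200 will not auto-load."
else
  echo "CLAUDE.md: $lines lines. fully inside the auto-load window."
fi
```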

wrong approach
CLAUDE.md as a knowledge dump
  • × full API documentation pasted in
  • × tutorial walkthroughs and how-tos
  • × framework setup instructions
  • × tool comparisons and research notes
  • × reference tables and version histories
right approach
CLAUDE.md as a behavior layer
  • rules for how Claude should respond
  • project structure and naming conventions
  • build command + test command (validation loop)
  • what it can and cannot do autonomously
  • pointers to external context (NotebookLM)
CLAUDE.md = rules and behavior. NotebookLM = research and context.

keep them separate and both get better.
your CLAUDE.md stays under 200 lines. NotebookLM absorbs the research load for free.
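a behavior-layer CLAUDE.md in that shape might look like this. every project name, path, and command below is illustrative, not prescriptive:

```markdown
# CLAUDE.md (behavior layer only, keep under 200 lines)

## rules
- respond with diffs, not full file rewrites
- ask before touching anything in /migrations

## structure
- src/ uses feature folders, kebab-case filenames

## validation loop
- build: npm run build
- test: npm run test

## autonomy
- can: edit src/, run tests
- cannot: push, deploy, edit .env

## external context
- research and docs live in NotebookLM; query it via the MCP server
```

everything research-shaped lives in NotebookLM. the file only points at it.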
day 07 of 30 — claude code series

this is day 07.
there are 23 more guides like this.

every week i share what's actually working in Claude Code — setups, workflows, and the things nobody else is writing about yet.

follow on LinkedIn for the next ones
30-day claude code series  ·  ayautomate.com

don't miss what's next.

Playbooks, templates, and tools that actually save you hours. Straight to your inbox. No spam. Unsubscribe anytime.