Deep-Diving into Claude Code Source

The full conversation log from a weekend-long first-principles investigation of two leaked Claude Code source trees — v0.2.8 and v2.1.88

April 5–6, 2026 · 14 sessions · 540,000+ lines of TypeScript · Claude Opus 4.6

Read the Blog Post →

I have the extracted source of Claude Code v0.2.8. Walk me through the end-to-end architecture, from when a user types a command to when they see the response. Start from the top.

Mapping the full execution flow from CLI entry to API response

Claude Code is a TypeScript terminal application built with Ink (React for the terminal), the Anthropic SDK, Commander.js for CLI parsing, and Zod for input validation. Internal codename: "Tengu" — visible in all analytics events like tengu_init, tengu_api_query.

The end-to-end flow:

```
User types `claude "fix the bug"`
        |
[1] cli.tsx   -- Entry point. Commander.js parses args.
        |        Two modes: --print (non-interactive) or REPL
        v
[2] REPL.tsx  -- React component. Renders prompt, messages,
        |        permission dialogs. On input: calls query()
        v
[3] query.ts  -- THE AGENTIC LOOP. Async generator.
        |        Call API -> yield response -> if tool_use,
        |        execute tools -> RECURSE -> until end_turn
        v
[4] claude.ts -- API client. Prompt caching, 3 providers
        |        (Direct, Bedrock, Vertex), retry logic,
        |        cost tracking. Temperature = 1.
        v
[5] tools/*   -- Each tool: { name, inputSchema, call() }.
                 call() is an async generator yielding
                 'progress' or 'result'.
```

15 tools total. Read-only tools run concurrently (up to 10). Write tools run serially. Everything gates through the permission system: always-allowed (reads), session-scoped (file writes), persistent (bash commands).
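The tool contract and the read/write scheduling split can be sketched in a few lines. This is a hedged sketch, not the source's actual types: `ToolEvent` and `runTools` are illustrative names, the real code validates input with Zod, and the concurrent-read pool is bounded at 10.

```typescript
// Minimal sketch of the tool shape: { name, inputSchema, call() },
// where call() is an async generator yielding progress or result.
type ToolEvent =
  | { type: "progress"; message: string }
  | { type: "result"; data: unknown };

interface Tool {
  name: string;
  inputSchema: { parse: (input: unknown) => unknown }; // a Zod schema in the source
  isReadOnly: () => boolean;
  call: (input: unknown) => AsyncGenerator<ToolEvent>;
}

// Read-only tools fan out together; write tools run one at a time.
async function runTools(
  calls: { tool: Tool; input: unknown }[],
): Promise<ToolEvent[]> {
  const results: ToolEvent[] = [];
  const drain = async ({ tool, input }: { tool: Tool; input: unknown }) => {
    for await (const ev of tool.call(tool.inputSchema.parse(input))) {
      if (ev.type === "result") results.push(ev); // progress handling elided
    }
  };
  // Concurrent reads (the source bounds this with a pool of 10).
  await Promise.all(calls.filter(c => c.tool.isReadOnly()).map(drain));
  // Serial writes.
  for (const c of calls.filter(c => !c.tool.isReadOnly())) await drain(c);
  return results;
}
```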

How is the system prompt assembled? Is prompts.ts appended to some original Claude system prompt, or is it the whole thing?

Tracing system prompt assembly from getSystemPrompt() to the API call

It's the whole thing. The system field in the Messages API is the entire system-level instruction. There is no hidden base prompt server-side. What prompts.ts produces is all Claude sees.

| Block | Source |
|---|---|
| "You are Claude Code..." | getCLISyspromptPrefix() |
| Main instructions (tool policy, tone, tasks) | getSystemPrompt()[0] |
| Environment info | getSystemPrompt()[1] |
| Safety guardrails (repeated) | getSystemPrompt()[2] |
| `<context name="gitStatus">` | getContext() |
| `<context name="claudeFiles">` | same |
| `<context name="readme">` | same |

Why are the safety guardrails repeated at the beginning AND end of the prompt?

Two reasons:

  1. Primacy-recency attention bias — LLMs attend more strongly to content at the start and end of long prompts. Placing guardrails at both positions maximizes their influence.
  2. Sandwich defense — the <context> blocks that follow contain user-controlled content (README, CLAUDE.md, git status). The second guardrail is the last thing before untrusted content enters the prompt. Defense against prompt injection in repo files.

This is best-effort. The real defense-in-depth is: prompt guardrails + permission system + Zod validation + command injection detection + file path restrictions + Docker-only bypass mode.
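The sandwich structure can be sketched as a plain assembly function. A minimal sketch under simplified assumptions: `assembleSystemPrompt` and its parameter names are mine, not the source's; only the block order follows the table above.

```typescript
// Guardrails appear twice: once near the top (primacy) and once as the
// last trusted text before untrusted repo content enters (recency).
function assembleSystemPrompt(opts: {
  instructions: string;
  envInfo: string;
  guardrails: string;
  contexts: Record<string, string>; // user-controlled: README, CLAUDE.md, git status
}): string {
  const contextBlocks = Object.entries(opts.contexts)
    .map(([name, body]) => `<context name="${name}">\n${body}\n</context>`)
    .join("\n");
  return [
    "You are Claude Code...", // static prefix
    opts.guardrails,          // first copy: start of the prompt
    opts.instructions,
    opts.envInfo,
    opts.guardrails,          // second copy: sandwich defense boundary
    contextBlocks,            // untrusted content comes last
  ].join("\n\n");
}
```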

There's a comment in query.ts that reads like a wizard's rules, something about thinking constraints. What does it actually mean? And why are there two separate mechanisms for thinking?

Decoding the wizard comment, mapping ThinkTool vs extended thinking

The "wizard comment" documents three API constraints the developer learned the hard way:

  1. Thinking requires thinking — if history contains thinking blocks, the next call must have max_thinking_length > 0.
  2. Thinking can't be terminal — a thinking block cannot be the last content block.
  3. Thinking must be preserved — if an assistant turn has thinking + tool_use, those blocks must survive through tool_result and the next response.
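The first two rules are checkable before a call goes out. A hedged sketch: block shapes are simplified from the SDK's, and `validateThinking` is my name, not the source's.

```typescript
// Pre-flight check for the wizard-comment constraints.
type Block =
  | { type: "thinking"; thinking: string }
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown };

interface AssistantTurn { role: "assistant"; content: Block[] }

function validateThinking(history: AssistantTurn[], thinkingBudget: number): string[] {
  const errors: string[] = [];
  // Rule 1: thinking requires thinking.
  const hasThinking = history.some(t => t.content.some(b => b.type === "thinking"));
  if (hasThinking && thinkingBudget <= 0) {
    errors.push("history has thinking blocks but the next call's thinking budget is 0");
  }
  // Rule 2: thinking can't be terminal.
  for (const turn of history) {
    if (turn.content[turn.content.length - 1]?.type === "thinking") {
      errors.push("a thinking block is the last content block of a turn");
    }
  }
  // Rule 3 (preservation) constrains any code that truncates or
  // rewrites history; it can't be checked statically here.
  return errors;
}
```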

And the two mechanisms are mutually exclusive by design:

| | Extended Thinking | ThinkTool |
|---|---|---|
| Where | Server-side, before response | Client-side, mid-response tool call |
| Gating | USER_TYPE === 'ant' only | Statsig feature gate |
| Cost | Billed as output tokens | Zero — returns a fixed string |
| Trigger | "think" (4K), "think hard" (10K), "ultrathink" (32K) | Model decides to call it |

If ThinkTool is enabled, maxThinkingTokens returns 0. Extended thinking wasn't available to external users at launch. ThinkTool was the externally-shippable workaround — a feature flag migration pattern: ship the tool-based workaround, gate the native feature, deprecate the workaround once native goes GA.

So what actually happened? Did they deprecate ThinkTool?

Yes. ThinkTool removed entirely in January 2026. Extended thinking went GA, then adaptive (thinking: {type: "adaptive"}) in February 2026. "Ultrathink" was deprecated, then re-introduced in March after users complained about losing the ability to force deep reasoning. The team learned that user agency over reasoning depth matters, even with adaptive mode.

The full arc: tool workaround → API feature (gated) → GA → fixed budget → adaptive → user-controlled effort levels. Simultaneously more automatic and more user-controllable.

How does the AgentTool actually spawn sub-agents? Does it launch a separate process?

Reading AgentTool source — dispatch_agent implementation

No separate process. It calls the same query() function recursively within the same Node.js process:

  • Fresh message history: [UserMessage(prompt)]
  • Leaner system prompt (getAgentPrompt() — ~15 lines vs ~120)
  • Read-only tools only. No Bash, no FileEdit. And no AgentTool itself — "No recursive agents, yet.."

The parent sees only the final text. The entire agent trajectory is compressed into a single tool_result. And because isReadOnly() = true, multiple agents run concurrently via the same pool used for read-only tools.
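The recursion can be shown in sketch form: the sub-agent is just another invocation of `query()` in the same process, its stream collapsed into one result. The `query()` below is a stub standing in for the real loop, and the tool names are illustrative.

```typescript
type Message = { role: "user" | "assistant"; content: string };
type AgentEvent =
  | { type: "progress"; message: string }
  | { type: "result"; data: string };

// Stub standing in for the real recursive agentic loop.
async function* query(
  messages: Message[],
  tools: string[],
  systemPrompt: string,
): AsyncGenerator<{ type: "text"; text: string }> {
  yield { type: "text", text: "exploring the repo..." };
  yield { type: "text", text: "final summary for the parent" };
}

async function* agentToolCall(prompt: string): AsyncGenerator<AgentEvent> {
  const history: Message[] = [{ role: "user", content: prompt }]; // fresh history
  const readOnlyTools = ["View", "GlobTool", "GrepTool"]; // no Bash, no edits, no AgentTool
  let finalText = "";
  for await (const ev of query(history, readOnlyTools, "leaner agent prompt")) {
    yield { type: "progress", message: ev.text }; // parent may surface progress
    finalText = ev.text;                          // only the last text survives
  }
  yield { type: "result", data: finalText };      // whole trajectory -> one tool_result
}
```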

That comment — "No recursive agents, yet.." — suggested they'd eventually allow nesting. Instead, in v2.1.88, they went a different direction entirely: flat teams over deep trees.

Now compare everything we found in v0.2.8 against the current Claude Code. What changed across the five areas we audited? What stayed the same? And what does each change reveal about what the team learned?

Comparing 5 subsystems across 13 months of evolution

Tools: 15 → 30+

ThinkTool removed. ArchitectTool replaced by EnterPlanMode/ExitPlanMode — planning promoted from a tool to a workflow. Memory tools replaced by auto memory. WebFetch and WebSearch added. Every addition reflects a bottleneck observed in real usage.

Agents: read-only singleton → specialized fleet

One dispatch_agent became five built-in types (Explore on Haiku, Plan on main model, general-purpose with full tool access), custom agents via .claude/agents/, and agent teams with inter-agent messaging. No recursion — flat teams over deep trees.

Permissions: 3-tier → 6-mode governance

Six modes: default, acceptEdits, plan, auto, dontAsk, bypassPermissions. Plus 28 hook events and auto mode — a second AI model reviewing each tool call. AI supervising AI.

Thinking: hidden keywords → adaptive reasoning

ThinkTool removed. Native thinking went GA, then adaptive. "Ultrathink" deprecated then un-deprecated. User agency over reasoning depth matters.

Context: static assembly → dynamic orchestration

3 prompt strings became 110+ assembled dynamically. Context 200K → 1M. CLAUDE.md survives compaction. Auto memory replaced explicit tools.

What's the deepest pattern across all of this?

Every change increases autonomy while simultaneously increasing controllability. Auto mode makes the agent more autonomous. Hooks, rules, managed settings make it more controllable. The team is solving for "maximum useful autonomy within user-defined boundaries" — the central problem of AI deployment.

I keep seeing references to VCR, binary feedback, Statsig, MCP. What are all these supporting systems around the core loop?

Reading services/vcr.ts, REPL.tsx, Statsig integration, MCP client

VCR — "Video Cassette Recorder." Deterministic test fixtures. SHA-1 hashes the dehydrated message history, replays cached responses. Solves the fundamental testing problem: API responses are non-deterministic.

Binary feedback — Ant-only RLHF. Two API calls in parallel, user picks the preferred response. This is why temperature = 1 — not for creativity, but to maximize variation between A/B calls.

MCP — Server configs loaded at startup, tools wrapped as standard Tool objects. MCP tools look identical to built-in tools from the model's perspective.

REPL — Not a readline loop. A React (Ink) component with a state machine managing messages, permissions, binary feedback, and input modes.

How was the v0.2.8 source actually leaked? Can we trace the provenance?

Tracing the leak from npm to inline source map

The bundled cli.mjs on npm contained an inline base64 source map. The full TypeScript source embedded as a data URL at the bottom of the compiled JavaScript. Discovered on February 24, 2025 — launch day.

A missing .npmignore or misconfigured files field in package.json. The extraction is trivial: decode the base64, parse the JSON, read the sourcesContent array. Every original TypeScript file, with comments intact.
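The extraction is small enough to show in full. A minimal sketch: `extractSources` is my name for it, and the bundle it runs against would be the published `cli.mjs`.

```typescript
// Locate the inline source-map data URL in a bundle, base64-decode it,
// and read back the original files from sourcesContent.
function extractSources(bundle: string): Map<string, string> {
  const m = bundle.match(
    /sourceMappingURL=data:application\/json;base64,([A-Za-z0-9+/=]+)/,
  );
  if (!m) throw new Error("no inline source map found");
  const map = JSON.parse(Buffer.from(m[1], "base64").toString("utf8"));
  const files = new Map<string, string>();
  (map.sources as string[]).forEach((src, i) => {
    const content = map.sourcesContent?.[i];
    if (content != null) files.set(src, content); // original TS, comments intact
  });
  return files;
}
```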

THIS IS the exact leakage I need to investigate. Search for the sources, build the plan, update the KNOWLEDGE.md file.

Searching GitHub for v2.1.88 mirrors, cloning, running structural analysis

Found and cloned Exhen/claude-code-2.1.88. The scale difference is staggering:

| | v0.2.8 | v2.1.88 |
|---|---|---|
| Files | 211 | 1,902 |
| Lines of TypeScript | 26,000 | 514,587 |
| Top-level src/ dirs | 9 | 35 |
| Tool directories | 16 | 40 |

Discoveries that stopped me cold:

  • KAIROS — autonomous daemon mode via cron scheduler, compile-time gated out of external builds
  • BUDDY — virtual pet with 18 species, gacha rarities, RPG stats, ASCII sprites. Species names hex-encoded because "capybara" collides with a model codename in the build scanner
  • Undercover mode — auto-activates for Anthropic employees on public repos, strips all internal info. "There is NO force-OFF."
  • Anti-distillation — `anti_distillation: ['fake_tools']` injected into the API body to prevent competitor model training on outputs
  • 4 sub-agent execution models — fork (cache-sharing), teammate (file mailbox), worktree (git isolation), remote (process isolation)
  • GrowthBook replacing Statsig — richer targeting with subscriptionType, rateLimitTier, organizationUUID

Push the cloned directory into my private GitHub repo. Then download claw-code and tell me, why does this exist?

Handling shallow clone push failure, then analyzing claw-code

Push failed initially — shallow clone had graft points where parent objects weren't downloaded. Fixed by reinitializing the git history from scratch.

claw-code is not a copy of Claude Code. It's a clean-room Python + Rust rewrite. Different language, different legal status, different purpose.

  • 48,599 lines of Rust across 9 workspace crates, built in 3 days
  • 292 commits, March 31 – April 3, 2026
  • Maintained entirely by autonomous AI agents coordinated through Discord
  • PHILOSOPHY.md: "humans set direction; claws perform the labor"

Why: DMCA-proof alternative. The TypeScript leaks are legally risky. A clean-room rewrite in a different language establishes independent creation. Also a demo of autonomous agent capability — 48K lines of systems Rust in 72 hours.

Design the curriculum to understand how Claude Code works, starting from first principles and into the deep dive of deliberate choices made by Anthropic.

Designing 7-module curriculum with reading lists mapping v0.2.8 to v2.1.88

Seven modules, each with ordered reading lists and the design belief it reveals:

  1. The Agentic Loop — query.ts → QueryEngine.ts. The recursive async generator.
  2. System Prompt Engineering — prompts.ts → buildEffectiveSystemPrompt(). 3 strings → 110+ dynamic.
  3. The Tool System — 15 inline → 40+ with deferred loading via ToolSearchTool.
  4. Permission & Safety — 3-tier → 6-mode + LLM classifier + 23-check bash scanner.
  5. Memory & Context — flat files → 4-type system + 5-stage compaction stack.
  6. Multi-Agent — read-only singleton → fork/teammate/worktree/remote + coordinator.
  7. Telemetry & Culture — Statsig → GrowthBook + BUDDY + hooks + undercover mode.

Explain the recursive async generator as a mechanism. Use a file system analogy. I want to build the intuition from first principles.

Building from file traversal to the agentic loop

Three building blocks:

  • yield — suspend and hand a value to the caller. The caller controls pacing (backpressure).
  • yield* — delegate to another generator. Pipes — does not collect.
  • async — each await pauses for I/O without blocking.
```typescript
// COLLECT-FIRST: waits for the entire tree
async function getAllFiles(dir): Promise<string[]> {
  const results = []
  for (const entry of await readdir(dir)) {
    if (isDir(entry)) results.push(...await getAllFiles(sub)) // blocks
    else results.push(path)
  }
  return results
}

// STREAM-AS-YOU-GO: each file immediately
async function* getAllFiles(dir): AsyncGenerator<string> {
  for (const entry of await readdir(dir)) {
    if (isDir(entry)) yield* getAllFiles(sub) // recurse AND pipe
    else yield path                           // caller gets this now
  }
}
```

Mapping to query.ts

| File traversal | Agentic loop |
|---|---|
| getAllFiles(dir) | query(messages, tools) |
| readdir(dir) | callClaudeAPI() |
| yield file | yield streamEvent |
| isDirectory? | stop_reason === 'tool_use'? |
| yield* getAllFiles(sub) | yield* query([...msgs, toolResults]) |
| function ends | stop_reason === 'end_turn' |

Three properties gained: streaming (first token in ~300ms), composability (AgentTool.call() does yield* query(agentMessages) — sub-agent events pipe into the parent with no special casing), and backpressure (Ctrl+C breaks the generator mid-execution, no wasted API calls).
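The backpressure property follows from plain generator mechanics. A self-contained toy, assuming an in-memory tree in place of a real directory: `walk` streams leaves the way the stream-as-you-go traversal does, and breaking out of `for await` tears down every nested generator.

```typescript
type FileNode = { name: string; children?: FileNode[] };

async function* walk(node: FileNode, prefix = ""): AsyncGenerator<string> {
  const path = prefix + node.name;
  if (!node.children) {
    yield path;                     // leaf: the caller gets this now
    return;
  }
  for (const child of node.children) {
    yield* walk(child, path + "/"); // recurse AND pipe, like yield* query(...)
  }
}

async function firstN(n: number, tree: FileNode): Promise<string[]> {
  const out: string[] = [];
  for await (const path of walk(tree)) {
    out.push(path);
    if (out.length === n) break;    // break stops the whole recursion:
  }                                 // no further work is done
  return out;
}
```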

How does prompt caching work differently before and after the DYNAMIC_BOUNDARY marker?

Reading splitSysPromptPrefix() and tracing cacheScope to wire format

`SYSTEM_PROMPT_DYNAMIC_BOUNDARY = '__SYSTEM_PROMPT_DYNAMIC_BOUNDARY__'`

Before every API call, splitSysPromptPrefix() splits the system prompt into 4 blocks:

```
Block 1: Attribution header               -> not cached
Block 2: "You are Claude Code..." prefix  -> not cached
Block 3: Content BEFORE boundary          -> scope: 'global'
Block 4: Content AFTER boundary           -> not cached
```

scope: 'global' = cross-org cache. The block is identical for every Claude Code user worldwide. Cached once, shared globally. Everything after — environment info, memory, CLAUDE.md, git status — changes per user, per session.

| Scenario | Static blocks | Dynamic blocks |
|---|---|---|
| Normal (first-party, no MCP) | global | uncached |
| MCP tools present | org | org |
| 3P providers (Bedrock/Vertex) | org | org |

MCP tools break global caching because schemas are per-user. Source comment: "IMPORTANT: Do not add any more blocks for caching or you will get a 400."

The boundary marker is load-bearing: moving content before it silently corrupts other users' caches.
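The split itself is a string operation around the marker. A sketch under simplified assumptions: block shapes here are reduced from the real wire format, and `splitOnBoundary` is an illustrative name.

```typescript
const SYSTEM_PROMPT_DYNAMIC_BOUNDARY = "__SYSTEM_PROMPT_DYNAMIC_BOUNDARY__";

type SystemBlock = {
  text: string;
  cache_control?: { type: "ephemeral"; scope: "global" | "org" };
};

function splitOnBoundary(systemPrompt: string): SystemBlock[] {
  const i = systemPrompt.indexOf(SYSTEM_PROMPT_DYNAMIC_BOUNDARY);
  if (i === -1) return [{ text: systemPrompt }]; // no boundary: nothing safely shareable
  const staticPart = systemPrompt.slice(0, i);
  const dynamicPart = systemPrompt.slice(i + SYSTEM_PROMPT_DYNAMIC_BOUNDARY.length);
  return [
    // Identical for every user worldwide -> cross-org cache.
    { text: staticPart, cache_control: { type: "ephemeral", scope: "global" } },
    // Per-user, per-session -> left uncached.
    { text: dynamicPart },
  ];
}
```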

What are all the categories of system prompts and how are they prioritized? How do they interact differently?

Reading buildEffectiveSystemPrompt() — full priority chain with 6 categories

Six categories, strict priority. First match wins:

| Priority | Category | Effect |
|---|---|---|
| 0 | Override | REPLACES ALL + drops append |
| 1 | Coordinator | REPLACES default, keeps append |
| 2a | Agent (normal) | REPLACES default, keeps append |
| 2b | Agent (proactive) | APPENDS to default |
| 3 | Custom | REPLACES default, keeps append |
| 4 | Default | Base prompt + Append, always last (except priority 0) |

The non-obvious case: proactive/KAIROS mode. Normal agents replace the default entirely. But when isProactiveActive() is true, agent instructions append to the default. The daemon already has the right base identity — the agent definition adds domain-specific behavior on top.

Two axes: identity (what kind of agent? winner-takes-all) and constraint (what additional rules? always composable via appendSystemPrompt). Override dropping append is intentional: loop mode needs total determinism.
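The first-match-wins chain can be sketched directly. The real buildEffectiveSystemPrompt() takes more inputs than this; the sketch keeps only the priority logic, with option names drawn from the categories above.

```typescript
interface PromptOptions {
  override?: string;                              // priority 0
  coordinator?: string;                           // priority 1
  agent?: { prompt: string; proactive: boolean }; // 2a / 2b
  custom?: string;                                // priority 3
  append?: string;                                // composable constraint axis
}

function buildEffectiveSystemPrompt(base: string, o: PromptOptions): string {
  const withAppend = (p: string) => (o.append ? p + "\n" + o.append : p);
  // Priority 0 drops append entirely: loop mode wants total determinism.
  if (o.override !== undefined) return o.override;
  if (o.coordinator !== undefined) return withAppend(o.coordinator);
  if (o.agent) {
    return o.agent.proactive
      ? withAppend(base + "\n" + o.agent.prompt) // proactive: appends to default
      : withAppend(o.agent.prompt);              // normal: replaces default
  }
  if (o.custom !== undefined) return withAppend(o.custom);
  return withAppend(base);                       // default, always last
}
```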

Reading cronScheduler.ts (531 lines), autoDream.ts (325 lines), ultraplan.tsx, ccrSession.ts, teleport/

KAIROS — The Daemon

A non-React cron scheduler core shared by the REPL and Agent SDK. Polls scheduled_tasks.json every second. Lock-based ownership — only one Claude session per project fires tasks. Non-owners probe every 5 seconds to take over if the owner crashes.

  • GrowthBook killswitch — stops already-running schedulers mid-session, not just new ones
  • Jitter config — ops can widen the window live during a :00 load spike without restarting clients
  • Missed tasks — wraps prompts in a code fence whose length is max(3, longestBacktickRun + 1). Prevents a malicious task prompt from escaping the fence. Requires user confirmation before execution.

autoDream — Memory Consolidation

Triple gate (cheapest first): time ≥24h → sessions ≥5 → lock. Fires a forked subagent with bash restricted to read-only commands. Reviews past transcripts, improves memory files. User can kill from the bg-tasks dialog — lock rollback handles cleanup.
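The cheapest-first ordering can be sketched as a short-circuit chain: a time comparison, then a counter, then the lock, which is the only check with side effects. Names and the lock shape are illustrative, not the source's.

```typescript
interface DreamGate {
  lastDreamAt: number;           // ms epoch of last consolidation
  sessionsSinceDream: number;
  tryAcquireLock: () => boolean; // the real code locks a file on disk
}

function shouldDream(now: number, g: DreamGate): boolean {
  const DAY = 24 * 60 * 60 * 1000;
  if (now - g.lastDreamAt < DAY) return false; // gate 1: time >= 24h (cheapest)
  if (g.sessionsSinceDream < 5) return false;  // gate 2: sessions >= 5
  return g.tryAcquireLock();                   // gate 3: lock, only if 1-2 pass
}
```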

ULTRAPLAN — Remote Opus Planning

/ultraplan creates a Cloud Code Runner session, runs Opus in plan mode with a browser UI. The local repo is teleported via a git bundle:

  1. git bundle create --all — full history (100MB max)
  2. git bundle create HEAD — current branch only
  3. git commit-tree HEAD^{tree} — single parentless commit, just the snapshot

The teleport sentinel — finally answered:

`ULTRAPLAN_TELEPORT_SENTINEL = '__ULTRAPLAN_TELEPORT_LOCAL__'`

When the user clicks "teleport back to terminal" in the browser, it sends an is_error: true tool_result with the sentinel on the first line, plan text on the next. Two exit paths: teleport (plan comes back to terminal, remote archived) or remote (CCR executes, delivers a PR). The ExitPlanModeScanner is a pure stateful classifier with no I/O — feed it recorded events for unit tests and offline replay.
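The sentinel check itself reduces to a first-line comparison. A hedged sketch of the classification described above: `classifyToolResult` and the result shape are mine, not the source's.

```typescript
const ULTRAPLAN_TELEPORT_SENTINEL = "__ULTRAPLAN_TELEPORT_LOCAL__";

type ToolResult = { is_error: boolean; content: string };

function classifyToolResult(
  r: ToolResult,
): { kind: "teleport"; plan: string } | { kind: "other" } {
  if (!r.is_error) return { kind: "other" };
  const nl = r.content.indexOf("\n");
  const firstLine = nl === -1 ? r.content : r.content.slice(0, nl);
  if (firstLine !== ULTRAPLAN_TELEPORT_SENTINEL) return { kind: "other" };
  // Sentinel on the first line, plan text on the lines after it.
  return { kind: "teleport", plan: nl === -1 ? "" : r.content.slice(nl + 1) };
}
```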

Log the entire conversation history to KNOWLEDGE.md and draft a blog article comparing the two versions. Trace the minds of the creators of Claude Code.

Updated KNOWLEDGE.md — now 1,800+ lines across 14 sessions. Seven design beliefs extracted from the code:

  1. The Loop Is the Product — recursive async generator, unchanged across versions
  2. Safety Is the Architecture — 3-tier → 6 modes + LLM classifier + invisible sandbox bypass
  3. Context Is the Scarce Resource — 1-stage compaction → 5-stage stack; tool deferral via ToolSearch
  4. One Model Is Not Enough — single agent → 4 execution models + coordinator
  5. The Boundary Is Architectural — compile-time dead-code elimination, undercover mode
  6. Shipping Is a System — feature flags + kill switches + GrowthBook
  7. Joy Is Not Optional — BUDDY, 187 spinner verbs, hex-encoded species names

Core thesis:

An agentic coding tool should be a reliable system, not a capable demo. v0.2.8 was the capable demo. v2.1.88 is the beginning of the system.

Publish it.

Published to twyoon.com.

14 sessions. Two source trees. 540,000 lines of TypeScript. Seven beliefs. One conclusion: the code is the artifact. The thinking is the product.