QConSF 2025 - Anthropic Develops Claude Code at AI-Powered Speed
AI at the Core of Development: Lessons from Claude Code
At QCon San Francisco 2025, Adam Wolff described how Anthropic built Claude Code with an AI coding assistant at the heart of its own development workflow.
Wolff revealed that roughly 90% of Claude Code's production code is written with, or directly by, the AI assistant, enabling continuous delivery for internal users and weekday releases for external users.

The Changing Role of Planning in AI-Assisted Development
With AI capable of fast code generation, refactoring, and test creation, traditional planning has taken a back seat.
Now, the key constraint is:
> “How fast teams can ship, observe behavior in production, and adapt to evolving requirements.”
Wolff summarized:
> "Implementation used to be the expensive part of the loop. With AI in the workflow, the limiting factor is how fast you can collect and respond to feedback."
---
Rethinking Terminal Input
Claude Code relies heavily on rich terminal input:
- Slash commands
- File mentions
- Keystroke-specific behaviors
Despite common guidance against rebuilding terminal text input, the team took control of every keystroke to achieve precise behavior. Wolff described this as a gambit that was validated only after shipping the first version.
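Owning every keystroke can be modeled as a pure reducer over input state. The sketch below is illustrative only; the names (`InputState`, `onKeystroke`) and the `/` and `@` detection rules are assumptions, not Claude Code's actual input layer:

```typescript
// Illustrative only: a pure keystroke reducer for a terminal input line.
interface InputState {
  buffer: string;
  mode: "text" | "slash-command" | "file-mention";
}

// Decide which input mode the current buffer implies (assumed rules:
// a leading "/" starts a slash command, a trailing "@token" is a mention).
function modeFor(buffer: string): InputState["mode"] {
  if (buffer.startsWith("/")) return "slash-command";
  if (/(^|\s)@\S*$/.test(buffer)) return "file-mention";
  return "text";
}

function onKeystroke(state: InputState, key: string): InputState {
  if (key === "\x7f") {
    // Backspace: drop the last code unit (a real editor would drop a grapheme)
    const buffer = state.buffer.slice(0, -1);
    return { buffer, mode: modeFor(buffer) };
  }
  const buffer = state.buffer + key;
  return { buffer, mode: modeFor(buffer) };
}
```

Because the reducer is pure, every keystroke-specific behavior can be unit-tested without a real terminal attached.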

This approach reflects a wider AI development trend in which:
- AI informs coding, workflow, release strategy, and even UI/UX decisions
- Platforms like AiToEarn follow similar iterative, feedback-driven patterns for content creation, publishing, and monetization
---
Development Stories from Claude Code
1️⃣ The Cursor Class & Unicode Challenges
- Virtual Cursor class: Immutable model of text buffer + cursor position
- Initial build: Few hundred lines of TypeScript + strong test coverage
- Vim mode: Added in a single PR — hundreds of lines of generated logic/tests by Claude Code
- Unicode issues → introduced grapheme clustering
- Refactor: Cut worst-case keystroke latency from seconds to milliseconds using deferred work & efficient searches
- Result: the experiment succeeded; complexity costs decreased over time, enabling fast feature evolution.
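A hypothetical sketch of the immutable-cursor idea (class and method names are assumptions, not Claude Code's API). For brevity it moves by Unicode code point; the grapheme clustering described in the talk would segment the buffer with `Intl.Segmenter` instead:

```typescript
// Illustrative sketch: an immutable model of a text buffer plus cursor
// position. Every movement returns a new Cursor, which keeps the model
// easy to test and to reason about.
class Cursor {
  constructor(readonly text: string, readonly offset: number) {}

  // Move right by one code point, so surrogate pairs (e.g. emoji)
  // are never split in half.
  right(): Cursor {
    if (this.offset >= this.text.length) return this;
    const cp = this.text.codePointAt(this.offset)!;
    return new Cursor(this.text, this.offset + String.fromCodePoint(cp).length);
  }

  // Move left by one code point, stepping back over a full surrogate
  // pair when one precedes the cursor.
  left(): Cursor {
    if (this.offset === 0) return this;
    const prev = this.text.codePointAt(this.offset - 2);
    const width = prev !== undefined && prev > 0xffff ? 2 : 1;
    return new Cursor(this.text, this.offset - width);
  }
}
```

Immutability is what makes the "few hundred lines plus strong test coverage" approach tractable: each keystroke maps an old cursor to a new one, with no hidden state.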
---
2️⃣ PersistentShell & Shell Snapshots
- PersistentShell class:
  - Managed a single long-running shell process
  - Preserved shell semantics for the working directory & environment variables
  - Grew into a large codebase covering queuing, recovery, and pseudo-terminal handling
- Problem: the batch tool combined with the serialized command queue created a performance bottleneck
- Attempted fix: spawn a fresh shell per command, which led to user complaints
- Final architecture:
  - Capture aliases/functions once into a snapshot script
  - Source the snapshot script for each transient command
Wolff’s takeaway:
> "You don’t plan this kind of design — you discover it through experimentation."
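The snapshot design can be sketched roughly as follows. This is an assumption-laden illustration, not Anthropic's code: the file path, `bash` invocations, and function names are invented, and a real implementation would capture the snapshot from the user's interactive shell rather than a non-interactive one:

```typescript
import { execFileSync } from "node:child_process";
import { writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Invented path; a real tool would scope this per session.
const snapshotPath = join(tmpdir(), "shell-snapshot.sh");

// Capture shell functions and aliases once into a snapshot script.
function captureSnapshot(): void {
  // `declare -f` dumps function definitions, `alias` dumps aliases.
  const dump = execFileSync("bash", ["-c", "declare -f; alias"], {
    encoding: "utf8",
  });
  writeFileSync(snapshotPath, dump);
}

// Run each command in a fresh, transient shell that sources the
// snapshot first: commands stay isolated from one another, yet still
// see the captured aliases and functions.
function run(command: string): string {
  return execFileSync("bash", ["-c", `source "${snapshotPath}"; ${command}`], {
    encoding: "utf8",
  });
}
```

The trade-off mirrors the story above: fresh shells give isolation and parallelism, while the sourced snapshot restores enough of the user's environment to avoid the complaints that bare per-command shells triggered.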
---
3️⃣ Conversation Persistence: JSONL vs SQLite
- Initial: Append-only JSONL files — no external dependencies, worked in production
- Goal: Add query power & migrations → moved to SQLite + type-safe ORM
- Problems:
- Native SQLite driver broke installs on some systems
- SQLite’s locking didn’t match developer expectations (vs. row-level locking)
- Outcome: Rolled back to JSONL within 15 days.
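Part of JSONL's appeal is how little code append-only persistence needs. A minimal sketch, assuming a simple message schema (Claude Code's real record shape is not described in the talk):

```typescript
import { appendFileSync, existsSync, readFileSync } from "node:fs";

// Assumed record shape, for illustration only.
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Append-only write: one JSON object per line, no external dependencies.
function appendMessage(path: string, msg: Message): void {
  appendFileSync(path, JSON.stringify(msg) + "\n");
}

// Reading back is a split-and-parse over the whole file.
function loadConversation(path: string): Message[] {
  if (!existsSync(path)) return [];
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as Message);
}
```

What this sketch lacks (ad-hoc queries, schema migrations) is exactly what motivated the SQLite experiment; what it avoids (native drivers, database locking) is what forced the rollback.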

---
Iteration, Feedback, and Decision-Making in AI Development
From Wolff’s three case studies:
- Cursor case → Stay the course if complexity costs shrink and bugs drop
- Shell snapshots → Productive failure leading to improved architecture
- SQLite experiment → Roll back when fragility rises without clear resolution path
For deeper insights:
- Related videos coming soon on InfoQ
- Presentation slides here
---
Broader Lessons for AI-Powered Teams & Creators
Platforms like AiToEarn:
- Provide open-source AI content monetization
- Integrate generation tools, analytics, and AI model ranking
- Enable publishing to multiple global platforms — Douyin, Bilibili, Instagram, LinkedIn, YouTube
- Follow the same build → ship → observe → adjust cycle that proved effective at Anthropic
---
Key Takeaway:
Whether building AI-assisted coding platforms or AI-driven creator tools, success depends on frequent small releases, an experimental mindset, and quick responses to feedback — not exhaustive upfront planning.