01 — Setup
First Launch
1. Navigate to your project
Open your terminal and cd into the root of the repo you're working in. Claude Code reads from your current directory.
2. Launch Claude Code
Type claude and authenticate if prompted. It will scan and index your project files automatically.
3. Run /init first — always
Generates your CLAUDE.md file. Review it and add engine-specific context it couldn't infer. Commit it to the repo.
4. Write your spec before you prompt
Never open Claude Code to figure out what you want to build. Plan first. Prompt second. (See Section 03.)
Terminal
$ cd ~/projects/engine-core
$ claude
✓ Authenticated · 847 files indexed
⚠ No CLAUDE.md — run /init
> /init
✓ CLAUDE.md created — review before committing
💡
CLAUDE.md = free context. Every convention you add is loaded at the start of every session — without burning a single token re-discovering it.
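For illustration, here are the kinds of lines an engine team's CLAUDE.md might contain. Every path, command, and convention below is hypothetical; write down whatever your team actually re-explains most often.

```markdown
# CLAUDE.md (illustrative example)
- Build: cmake --preset dev && cmake --build build/dev
- Tests: GoogleTest; run ctest from build/dev
- Errors: return VortexResult<T>; never throw across module boundaries
- Style: C++20; no new third-party dependencies without review
- Hot paths: physics/ and rendering/ are perf-critical; no per-frame allocations
```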
Essential commands
| Command | Action |
| --- | --- |
| /init | Generate CLAUDE.md |
| /plan | Plan before executing |
| /help | Show all commands |
| y / n | Approve / reject changes |
| Ctrl+C | Interrupt execution |
| exit | End session |
02 — Token Efficiency
Think Surgically, Not Conversationally
Every file Claude Code reads, every back-and-forth, every mid-session course correction costs tokens. Treat each session like a surgical procedure: go in with a plan, execute the specific task, close the session cleanly.
❌ Exploratory
"Look at the physics module and see if there's anything to improve"
→ Reads 8 files
→ 3 back-and-forths
→ Context fills with noise
✓ Spec-Driven
"In collision_resolver.cpp, rewrite broadphase_check() using a spatial hash. Do not modify other files."
→ Reads 1 file
→ Executes once
→ Clean result
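To make the contrast concrete, here is a minimal sketch of the kind of focused result that spec should produce. Everything in it (the AABB and BodyId types, the cell size) is an assumption for illustration, not the engine's real interface.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct AABB { float min_x, min_y, max_x, max_y; };  // assumed box type
using BodyId = std::uint32_t;                       // assumed handle type

// Spatial-hash broadphase: bucket each box into the grid cells it
// overlaps; only bodies sharing a cell become candidate pairs, which
// replaces the O(n^2) all-pairs check.
std::vector<std::pair<BodyId, BodyId>>
broadphase_check(const std::vector<AABB>& boxes, float cell_size = 4.0f) {
    std::unordered_map<std::int64_t, std::vector<BodyId>> grid;
    auto cell_key = [](std::int32_t x, std::int32_t y) {
        return (static_cast<std::int64_t>(x) << 32) |
               static_cast<std::uint32_t>(y);
    };
    for (BodyId id = 0; id < boxes.size(); ++id) {
        const AABB& b = boxes[id];
        for (auto x = std::int32_t(std::floor(b.min_x / cell_size));
             x <= std::int32_t(std::floor(b.max_x / cell_size)); ++x)
            for (auto y = std::int32_t(std::floor(b.min_y / cell_size));
                 y <= std::int32_t(std::floor(b.max_y / cell_size)); ++y)
                grid[cell_key(x, y)].push_back(id);
    }
    std::vector<std::pair<BodyId, BodyId>> pairs;
    for (const auto& [key, ids] : grid)
        for (std::size_t i = 0; i < ids.size(); ++i)
            for (std::size_t j = i + 1; j < ids.size(); ++j)
                pairs.emplace_back(ids[i], ids[j]);
    // A pair can appear once per shared cell; sort + unique before the
    // narrow phase.
    return pairs;
}
```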
The Three-Phase Discipline
Before
- ✎ Write spec first
- 📁 Load only needed files
- 📋 Verify CLAUDE.md
During
- 📐 Use /plan first
- 🚧 Set file boundaries
- ↩ Reset if off-track
After
- 👁 Review scope creep
- 📝 Update CLAUDE.md
- 📚 Log to prompt library
⚠️
Daily budget: ~20,000 tokens per developer. An exploratory 90-min session can burn 4× that. Spec first. Always.
When NOT to use Claude Code
For trivial, well-understood tasks where you already know exactly what to write, just write it. The overhead of a Claude Code session (spec, launch, review) isn't worth it for a 5-line change you could type in 2 minutes.
03 — Spec-Driven Workflow
The Spec Template
Copy this template. Fill every field before you open Claude Code. The Out of Scope section is the most important — undefined boundaries cause scope explosion.
## Goal
What this code needs to accomplish in plain language.
One task, not three.
## Context
- Relevant files: [file1, file2]
- Reference pattern in: [file with pattern]
- Framework / test suite: [e.g. GoogleTest]
## Constraints
- Language: [C++20] · No new dependencies
- Do NOT modify: [locked files]
- Error pattern: [VortexResult<T>]
## Acceptance Criteria
- [Condition 1 returns X]
- [Condition 2 returns Y]
- All new code has test coverage
## Out of Scope
- Do not refactor adjacent code
- Do not modify files not listed above
- Do not add logging
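For concreteness, here is the template filled in for the broadphase task from Section 02. The file paths, reference file, and criteria are illustrative stand-ins, not real repo contents.

```markdown
## Goal
Replace the O(n²) all-pairs loop in broadphase_check() with a spatial hash.

## Context
- Relevant files: physics/collision_resolver.cpp
- Reference pattern in: physics/spatial_partition.cpp
- Framework / test suite: GoogleTest

## Constraints
- Language: C++20 · No new dependencies
- Do NOT modify: physics/collision_resolver.h
- Error pattern: VortexResult<T>

## Acceptance Criteria
- Produces the same candidate pairs as the current implementation on existing test scenes
- All new code has test coverage

## Out of Scope
- Do not refactor adjacent code
- Do not modify files not listed above
- Do not add logging
```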
✅
Always run /plan before executing. It shows you exactly which files Claude Code will touch and what steps it will take — before a single line is written. Correct the plan, then proceed.
Scope creep rule
If Claude Code produces out-of-scope output that looks useful: accept only the scoped changes, capture the idea, spec it as a separate task. Never accept unsolicited changes to files not in your spec.
04 — Recovery
When Things Go Wrong
Every developer hits failures with Claude Code. Knowing how to recover — not avoiding failure — is what makes adoption stick.
Common failure modes
Vague prompt
Claude Code over-interprets scope and modifies files you didn't intend. Fix: tighter spec with explicit boundaries.
Skipped /plan
No checkpoint before execution. Changes arrive without approval. Fix: always /plan on complex tasks.
Missing context
CLAUDE.md is outdated or missing engine-specific conventions. Output doesn't match team standards. Fix: update CLAUDE.md.
Context overload
Long session accumulates noise; output quality degrades. Fix: reset. Start a new session.
The 4-step recovery protocol
01
Stop & Reject
Type n or close the session. Do not try to fix mid-stream. Reject all proposed changes.
02
Diagnose
Was the prompt vague? No /plan? Missing constraints? CLAUDE.md outdated? Identify the root cause.
03
Write the Spec
If you skipped it, write it now. 5 minutes of spec writing costs less than re-running a broken session.
04
New Session, /plan First
Start fresh. Paste the spec. Run /plan before executing. The new session will cost a fraction of the failed one.
🚫
Never try to correct a derailed session mid-stream. Correction prompts add noise to an already-cluttered context and compound the token waste. Reset is always cheaper.
05 — Prompt Library
Starter Prompts — Engine Team
Copy, swap the [PLACEHOLDERS], and paste into Claude Code after writing your spec. Run /plan first on any complex task. (A hypothetical sketch of the VortexResult<T> pattern referenced below follows the prompt list.)
"In [FILE], add input validation to [FUNCTION] using our VortexResult<T> pattern (reference: networking/lobby_manager.cpp). Validate: [CONDITIONS]. Write GoogleTest unit tests in [TEST_FILE]. Do NOT modify [FILE].h or any other files."
"Write GoogleTest unit tests for [FUNCTION] in [SOURCE_FILE]. Cover: [CASES]. Add tests to [TEST_FILE] only. Do not modify source files. Follow conventions in tests/README.md."
"Read [MODULE_DIRECTORY] and give me: (1) what this module does, (2) its main entry points and data flow, (3) any non-obvious patterns or dependencies I should know before modifying it. Do not suggest changes."
"Add documentation comments to all exported functions in [FILE]. Document: what each function does, parameters, return values, and edge cases. Do NOT change any logic or formatting. Modify [FILE] only."
"In [FILE], the function [FUNCTION] has [PROBLEM e.g. O(n²) complexity]. Refactor using [APPROACH]. Use our existing [UTILITIES]. Do not modify other files or change the function signature."
"Read the changes in [MODIFIED_FILES] and write a pull request description. Include: what changed and why, what was explicitly left out of scope, and how to test it. Keep it under 200 words."
📚
Contributing to the library: When a prompt works well, post it in #ai-prompts with the token count and what task it solved. Include a "when NOT to use this" note. Curated entries get added to the AI Hub wiki monthly. Your failure discoveries are just as valuable as your wins.