Meet Marcus — and recognize yourself.
Before we get into tools and workflows, let's set the stage. This module follows a realistic scenario modeled on engineers just like you.
Marcus Reid, Senior Software Engineer
8 years at Vortex Studios. Knows the engine inside out. Has opinions about tools. Skeptical that AI is anything more than autocomplete with better PR.
Monday Morning, 9:02 AM
Vortex Studios — Global HQ, Game Engine Team
Marcus opens Slack to find a message from his tech lead:
Marcus stares at the message. Great. Another tool nobody asked for.
He's seen this before. New platform, big announcement, three months later everyone goes back to doing it the old way. He's not opposed to the idea of AI assistance — he's just not convinced it's actually going to change how he works.
But the message says mandatory. So here we are.
Orientation & First Launch
Marcus opens his terminal for the first time with Claude Code installed. Here's what he needs to understand before he types a single prompt.
Marcus opens his terminal
First Claude Code session — engine-core project directory
Marcus navigates to the engine-core repository and types claude to launch his first session. Claude Code scans the directory.
Marcus types /init to generate a starter CLAUDE.md. Claude Code analyzes the codebase and produces a draft.
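What does that draft look like? The exact contents depend on the codebase, but a minimal sketch of the kind of CLAUDE.md a project like engine-core might end up with follows. The build commands and conventions here are hypothetical placeholders, not what /init would actually emit for this repository:

```markdown
# CLAUDE.md

## Project overview
engine-core: the shared runtime for Vortex Studios' game engine.

## Build & test (placeholder commands; replace with your project's own)
- Build: cmake --build build/
- Tests: ctest --test-dir build/

## Conventions (illustrative)
- C++17; no exceptions in hot paths
- Public functions validate their inputs
```

Marcus should review and trim the draft before committing it: CLAUDE.md is loaded into context at the start of every session, so every line in it costs tokens on every task.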
Context & Token Efficiency
The single biggest mistake new Claude Code users make — and how to avoid it. Marcus is about to learn the hard way.
Tuesday afternoon — Marcus hits a wall
He's been in a Claude Code session for 90 minutes. Something feels off.
Marcus has been exploring — asking Claude Code to explain modules, suggest improvements, try different approaches. It's been interesting. But his tech lead pings him:
Marcus hadn't thought about it. He was just… talking to it. Figuring things out.
The Mental Model: Sessions as Surgery, Not Conversation
Think of each Claude Code session as a surgical procedure — not an open-ended chat. You go in with a clear plan, execute the specific task, and close the session. The longer and more exploratory a session runs, the more tokens burn and the more noise accumulates in context.
The exploratory session:
→ Claude reads entire physics module (8 files)
→ Generates broad suggestions
→ You ask follow-ups
→ It tries 3 different approaches
→ Context fills with dead ends
~18,000 tokens • Low predictability

The surgical session:
→ Claude reads 1 file + Vec3 header
→ Executes exactly what's asked
→ Requests approval before saving
→ Session closes cleanly
~2,400 tokens • High predictability
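The gap between those two numbers is the whole argument. A quick sketch using the token counts from the comparison above:

```python
# Token counts from the two example sessions.
exploratory_tokens = 18_000  # broad, unplanned session
surgical_tokens = 2_400      # scoped, spec-driven session

# How much cheaper the surgical session is.
ratio = exploratory_tokens / surgical_tokens
savings = 1 - surgical_tokens / exploratory_tokens

print(f"{ratio:.1f}x fewer tokens")          # → 7.5x fewer tokens
print(f"{savings:.0%} of the budget saved")  # → 87% of the budget saved
```

Put differently: a week of exploratory sessions burns the budget of nearly two months of surgical ones.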
The Three-Phase Discipline: plan the task, execute exactly that task, close the session.
The Spec-Driven Workflow
Marcus writes his first real spec. This is the skill that separates developers who get results from developers who burn budget and give up.
Wednesday — Marcus has a real task
Tech lead assigned: add input validation to the player session initializer
Marcus has a concrete task: the PlayerSession::Initialize() function in the networking layer doesn't validate its inputs. He needs to add validation and unit tests.
He remembers Tuesday's lesson. Before he opens Claude Code, he writes a spec.
The Spec Template — What Every Prompt Needs
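The template itself isn't reproduced here, but a minimal sketch of the shape such a spec might take, using Marcus's Wednesday task as the worked example. The section names and file paths are illustrative, not a Claude Code requirement:

```markdown
## Task
Add input validation to PlayerSession::Initialize() in the networking layer.

## Files in scope (paths hypothetical)
- src/net/player_session.cpp (the function itself)
- tests/net/player_session_test.cpp (new unit tests)

## Constraints
- Do not modify any other files.
- Do not change the function's signature.

## Done when
- Invalid inputs are rejected with a clear error.
- Unit tests cover valid, invalid, and boundary inputs.
```

Note what the spec does: it names one function, bounds the file set, forbids scope creep, and defines "done" before the session starts.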
Marcus uses Plan Mode first
Scenario: Marcus gets a decision to make
Claude Code has produced a helper, ValidateSessionToken(), that wasn't in the spec. But it looks genuinely useful and well-written. What should Marcus do?
When Things Go Wrong
Every developer hits a failure with Claude Code. The difference between those who adopt it and those who don't is knowing how to diagnose and recover — not avoiding failure entirely.
Thursday — Marcus hits a real problem
The session spirals. Claude Code produces confidently wrong output.
Marcus has a more complex task today: refactor the audio spatial positioning system to support the new 3D listener API. He's in a hurry — standup is in 40 minutes — and skips writing a proper spec. He types directly into Claude Code:
1. No spec → no boundaries. Marcus skipped the spec, so nothing defined which files were in scope or what "done" meant.
2. No Plan Mode → no checkpoint. Without /plan, Marcus had no opportunity to review the scope before execution began.
3. Token burn from over-reading. Loading 6 files when 2 were relevant cost ~12,000 tokens before a single line was written.
The Recovery Protocol
When a session goes off-track, the instinct is to keep correcting mid-stream. This is almost always wrong. Correction prompts add more noise to an already-cluttered context, burning more tokens and producing increasingly inconsistent output. The right move is to reset.
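What does a reset look like in practice? A sketch, assuming the working tree is under git; the step list is this module's summary of the idea, not an official Claude Code procedure:

```markdown
1. Stop prompting. Do not send another correction.
2. Revert unapproved changes (e.g. git restore . in the working tree).
3. End the session, or clear its context with /clear.
4. Write the spec you skipped, scoped to only the files that matter.
5. Start fresh, and run the new spec through Plan Mode before execution.
```

The reset feels slower than pushing through, but it trades a few minutes of setup for a clean context and predictable output.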
Your Prompt Library
The prompt library is the team's institutional memory for working with AI. By Friday, Marcus has something to contribute — and a system to benefit from.
Friday — Marcus has learned something worth sharing
And the prompt library is where it lives permanently
By the end of his first week, Marcus has discovered something he didn't expect: when Claude Code works, it works remarkably well. The PlayerSession validation task on Wednesday took 22 minutes start-to-finish — including spec writing. Without Claude Code, he estimates it would have taken 90 minutes.
He also discovered what doesn't work — and that's equally valuable. The team's prompt library needs both.
What a Good Prompt Library Entry Looks Like
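The lesson's full example isn't reproduced here, but a sketch of what an entry might contain, built from Marcus's Wednesday win. The field names are illustrative, not a required format:

```markdown
## Add validation + tests to a single function
**Context:** C++ function with no input validation; unit tests required.
**Spec pattern:** name the exact function, list the files in scope,
forbid signature changes, define "done" as passing tests.
**Result:** PlayerSession::Initialize() validation, 22 min vs ~90 min estimate.
**Pitfall:** without Plan Mode, Claude may add helpers that weren't in the spec.
```

Note that the entry records the pitfall alongside the win; as the lesson says, the library needs both.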
Where the Library Lives — and Why It Matters
Module Complete
You've completed Claude Code: Your First Week. Marcus went from skeptic to contributor in five lessons. Now it's your turn.
What you covered
This sprint: Write a proper spec before your next Claude Code task. Use /plan. Track your token usage before and after.
This month: Contribute one prompt library entry based on something that worked for you.