
Claude Code as a Daily Driver: An Honest Take from a Principal Engineer

Posted on: May 5, 2026

Welcome, Developer 🖖

I want to be upfront about something before we start: I’m not going to tell you Claude Code is magic. I’m also not going to tell you it’s just another autocomplete tool dressed up in a terminal window. The truth is somewhere more interesting than either of those takes — and it took me a few weeks of actual daily use to find it.

So here’s how I actually use it. What changed. What didn’t. And the moments that made me stop and think “okay, this is different.”


The Setup

I run Claude Code straight from the terminal inside VS Code’s integrated shell. No extensions, no wrappers. Just:

claude

That’s it. You’re in an agentic session where Claude can read your files, run commands, write code, and iterate — all without you copy-pasting snippets back and forth. If you haven’t set it up yet, install it with the native script:

curl -fsSL https://claude.ai/install.sh | bash

Then run claude and it opens a browser window for OAuth — authenticate, jump back to the terminal, and you’re in. The first time I ran it inside my React Native project directory and watched it read package.json, tsconfig.json, and three source files on its own before answering my question, I realised this wasn’t the tool I thought it was.


The First Thing You Should Actually Do: CLAUDE.md

Before you write a single prompt, create a CLAUDE.md file in your project root. This is the thing most tutorials skip, and it’s the difference between Claude Code feeling like a smart stranger and feeling like a teammate who’s been on the project for months.

CLAUDE.md is a markdown file that Claude reads at the start of every session. You put your project context in there — conventions, architecture decisions, things to never touch, how to run tests. It persists across sessions so you’re not re-explaining your codebase every time.

Here’s a trimmed version of what mine looks like:

# Project: Mobile App
 
## Stack
React Native, Expo, TypeScript, Node.js, Azure, MySQL
 
## Conventions
- All API calls go through /services — never fetch directly from a component
- Use named exports only, no default exports in /services
- Tests live in __tests__ and mirror the source file structure
 
## Do Not Touch
- /services/auth — owned by the auth squad, raise a PR for review
- app.config.ts — environment config, changes require team sign-off
 
## Running Tests
npm test — runs Jest in watch mode
npm run test:ci — full suite, used in the pipeline

That’s it. Nothing fancy. But now when I start a session and say “add error handling to the course service,” it already knows where services live, what the conventions are, and what’s off limits. I don’t repeat myself. It doesn’t guess.

Update it as the project evolves. Treat it like living documentation.
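For concreteness, here is a sketch of what the "/services, named exports" convention above could look like in practice. The file name, types, and URL are all illustrative stand-ins, not the author's actual code; the fetcher is injected so the service can be exercised without a network:

```typescript
// Hypothetical services/courses.ts — a named export, with all API access
// living in the service layer rather than in components.

interface Course {
  id: string;
  title: string;
}

// Minimal shape of the fetch function the service relies on, so a fake
// can be injected in tests instead of hitting the network.
type Fetcher = (url: string) => Promise<{ json(): Promise<unknown> }>;

export async function getCourses(
  fetcher: Fetcher,
  baseUrl = "https://api.example.com"
): Promise<Course[]> {
  const res = await fetcher(`${baseUrl}/courses`);
  return (await res.json()) as Course[];
}
```

The point of writing the rule down in CLAUDE.md is that Claude Code will reproduce exactly this shape when it adds a new service, instead of inventing its own.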

One more thing worth knowing: Claude Code also builds memory automatically as it works — saving things like build commands and debugging insights across sessions without you writing anything. CLAUDE.md is what you control explicitly. Auto memory is what it picks up on its own. Together they mean you spend less time re-explaining your project every time you open a session.


My Actual Workflow

My stack is React Native with Expo, TypeScript, Node.js, Azure, and MySQL. Not a greenfield toy app — a production mobile application with real users and real consequences when things break.

Here’s what a typical morning looks like.

I open the terminal, navigate to the project root, and kick off a Claude Code session. The first thing I do is give it context — not a vague “hey help me code” but a focused, scoped brief:

Look at src/transformers/course.ts — the API response shape doesn't match our 
CourseItem type. Here's the error from the last test run. Update the transformer 
to handle it. Don't touch the type definition and don't touch anything in /services/auth.

That last line matters. Scoping what Claude Code can’t touch is just as important as telling it what to do. It’s not a junior dev who needs hand-holding — it’s a fast, capable engineer who will confidently make changes across your codebase if you don’t set boundaries.

I had a transformer function that needed to handle nullable fields from the API response. Instead of me reading the API docs, writing a draft, testing it, and iterating — I pointed it at the file and said:

Update src/transformers/course.ts to handle these nullable fields. 
Add a unit test for the null case in __tests__/transformers/course.test.ts — 
match the style of the existing tests in that file.

It read the existing tests, matched the style, wrote the transformer update, and added the test. Four minutes. I reviewed, tweaked one field name, and shipped.
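For a sense of what that change looks like, here is a minimal sketch of a transformer in that spirit. The real CourseItem type and API response shape aren't shown in this post, so every name and field below is an illustrative stand-in:

```typescript
// Hypothetical API response shape with nullable fields.
interface ApiCourse {
  id: string;
  title: string | null;
  durationMinutes: number | null;
}

// Hypothetical app-side type: non-nullable by design.
interface CourseItem {
  id: string;
  title: string;
  durationMinutes: number;
}

// Transformer in the spirit of src/transformers/course.ts: resolve nulls
// at the boundary with explicit fallbacks, so nothing downstream has to
// null-check.
function transformCourse(raw: ApiCourse): CourseItem {
  return {
    id: raw.id,
    title: raw.title ?? "Untitled course",
    durationMinutes: raw.durationMinutes ?? 0,
  };
}
```

The pattern is the point: nulls get handled once, at the transformer boundary, and the rest of the app gets to rely on the non-nullable type.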


Where It Changed How I Think

The biggest shift for me wasn’t about speed. It was about the type of work I do.

Before Claude Code, my day had a lot of what I’d call “translation work” — turning a clear idea in my head into boilerplate TypeScript, wiring up a new service, writing the tenth variation of a data-fetching hook. I knew exactly what needed to happen; the work was just execution.

That execution work is almost entirely gone now. Which means my days are heavier on design decisions, architecture tradeoffs, and reviewing output — not writing it from scratch. That’s not a bad thing. That’s actually what I was hired to do.

Here’s a concrete example. We needed a reporting feature that pulled vehicle appraisal data through MySQL views into Metabase. I had the business logic clear in my head. In the past, I’d have written the view SQL, tested it, found the performance issue, fixed it, documented it. Half a day, maybe more.

Instead, I described the reporting requirements and said:

Read db/schema/appraisals.sql — that's the relevant part of the schema.
Write a MySQL view for this reporting requirement.
The view will be queried by Metabase so it needs to be read-optimised.
Flag any potential performance issues before I run it.

It flagged a potential 85-million-row comparison problem before I even ran the query — and suggested a LATERAL join approach. I wouldn’t have caught that until the first time Metabase timed out on me.


Real Prompts That Actually Work

A lot of “how to use AI” content gives you abstract prompt tips. Here are prompts I’ve actually used in production work:

Reviewing a PR diff:

Run: git diff main -- src/services/notifications.ts
Read the output. Tell me what changed, whether it could break the existing 
auth flow, and flag anything that looks risky. Don't fix anything — 
just give me a risk assessment.

Debugging a test failure:

Look at src/components/CourseCard.tsx and __tests__/components/CourseCard.test.tsx.
The second test is failing intermittently in CI but passes locally.
My hypothesis is a timing issue with the async state update.
Tell me if you agree and why, then propose a fix.
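The timing hypothesis in that prompt is a whole class of bug worth seeing in isolation. A framework-free sketch, with purely illustrative names, of an assertion racing an async state update and the awaited fix:

```typescript
// A tiny stand-in for a component that updates its state asynchronously.
class CourseCard {
  title = "loading";
  async load(): Promise<void> {
    // Simulates an async fetch resolving on a later tick.
    await Promise.resolve();
    this.title = "Intro to TypeScript";
  }
}

async function flaky(): Promise<string> {
  const card = new CourseCard();
  void card.load(); // bug: not awaited — the read below races the update
  return card.title; // reads stale state: load() has not finished yet
}

async function fixed(): Promise<string> {
  const card = new CourseCard();
  await card.load(); // fix: wait for the state update to settle
  return card.title;
}
```

In a real test suite the race depends on scheduling, which is exactly why it passes locally and fails in CI; the fix is the same either way, which is to await the update before asserting.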

Writing documentation:

Read src/features/push-notifications — the full folder.
Write a Confluence-style handoff note for this feature.
Audience is a mid-level developer who hasn't worked on this module.
Cover: what it does, how to test it locally, and what to watch out for in production.

That last one I used verbatim when I had a two-week holiday coming up. The handoff page it produced saved me about two hours of writing.


Git Is a First-Class Citizen

This one surprised me. Claude Code doesn’t just write code — it works directly with git.

I can say “commit my changes with a descriptive message” and it stages the right files, writes the commit message, and runs it. I can say “open a PR for this feature” and it creates the branch, pushes it, and drafts the PR description. For a team that cares about clean commit history, that’s not a small thing.

The one I use most is passing a diff directly into a prompt:

git diff main --name-only | claude -p "review these changed files for any security issues"

That Unix pipe approach opens up a lot. Logs, diffs, file lists — anything you can pipe in, Claude Code can work with. I’ll do a dedicated post on CLI composition because there’s enough there to fill one. But the git integration alone changed how I close out features.


Where It Doesn’t Shine

Honesty time.

The first practical wall you’ll hit is context. Claude Code has a context window, and on a long session — debugging a gnarly issue, refactoring a large file, going back and forth on a feature — you’ll fill it. When that happens, use /compact. It summarises the session so far and keeps you going without losing the thread. For a completely fresh task, I just start a new session rather than dragging stale context forward. It’s a habit you’ll develop fast.

The / commands in general are worth learning early. /clear resets the session, /help shows what’s available, /compact is the one you’ll use most. They’re small things but they’re part of actually using the tool versus just knowing it exists.

Claude Code is also not great when your codebase has a lot of implicit tribal knowledge. If the naming conventions are inconsistent, if there are “don’t touch this” rules for reasons nobody ever documented, if your test coverage is patchy — it will make confident decisions based on incomplete information. That confidence can bite you. This is why CLAUDE.md matters so much: the more context you give it upfront, the less it has to guess.

I’ve also found it less useful for pure architectural thinking. “Should we use a monorepo or keep these two services separate?” is not a question Claude Code answers better than a good engineering conversation with your team. It’ll give you a thorough answer, but the answer will lack the weight of knowing your specific team dynamics, your deployment constraints, your org politics. Context it can’t read.

And one thing that takes adjustment: you have to resist the urge to accept output without reading it. The code it produces looks right. It follows your patterns. It compiles. But I’ve caught logic errors that passed a quick scan because I trusted the output too much. Review everything, especially when it’s touching critical paths.


The Thing That Actually Changed

Here’s what I didn’t expect: Claude Code made me a better reviewer.

Because I’m generating first drafts faster, I spend more time in review mode — reading code critically, asking “is this the right abstraction,” catching edge cases. That muscle has gotten sharper. And because I write more explicit, scoped prompts to get good output, I’ve also gotten better at writing clearer tickets and specs for my own team.

The discipline that makes you good at prompting — being specific, scoping the problem, stating constraints upfront — is the same discipline that makes you a good engineer. Turns out they reinforce each other.

I didn’t expect that. But it’s the thing I’d tell another principal engineer who’s on the fence about whether this tool is worth the learning curve.

It is. Just don’t skip the learning curve.


Stay focused, Developer!