Getting Good Results from Claude Code

I've been experimenting with LLM programming agents over the past few months. Claude Code has become my favorite.

It is not without issues, but it has allowed me to write ~12 programs/projects in relatively little time; I could not have done all this nearly as quickly without it. Most of them I wouldn't even have bothered to write on my own, simply because they'd take too much of my time. (A list is included at the end of this post.)

I'm still far from a Claude Code expert, and I have a backlog of blog posts and documentation to review that might be useful. But — and this is critical — you don't have to read everything that's out there to start seeing results. You don't even need to read this post; just type some prompts in and see what comes out.

That said, because I just wrote this up for a job application, here's how I'm getting good results from Claude Code. I've embedded links to some examples where appropriate.

One key is writing a clear spec ahead of time, which gives the agent context to draw on as it works in the codebase.
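To give a flavor of what I mean, here's a minimal, made-up sketch of such a spec; the project and requirements are invented for illustration, not taken from any of my actual specs:

```markdown
# Spec: bookmark-sync (invented example)

## Goal
A small CLI that syncs bookmarks between Firefox and Chrome
profiles on the same machine.

## Requirements
- Merge by URL; on conflict, the most recently modified entry wins.
- A --dry-run flag that prints planned changes without writing.
- Exit non-zero on any parse error.

## Out of scope
- Cloud sync and mobile browsers.
```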

It also helps to have a document for the agent that outlines the project's structure and explains how to run builds, linters, and so on.
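For Claude Code this is conventionally a CLAUDE.md file at the repository root. A minimal sketch, with the layout and commands invented for illustration:

```markdown
# CLAUDE.md (invented example)

Small CLI tool written in Go. Source lives in src/, tests in tests/.

## Commands
- Build: make build
- Test:  make test
- Lint:  make lint

Run the linter and the full test suite before considering a task done.
```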

Asking the agent to perform a code review on its own work is surprisingly fruitful.
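This can be as simple as a follow-up prompt; the wording below is invented, but it's the sort of thing I mean:

```
Review the changes you just made as if you were a skeptical senior
engineer. Look for bugs, missing edge cases, and places where the
code drifts from the spec. List concrete issues before fixing anything.
```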

Finally, I have a personal “global” agent guide describing best practices for agents to follow, specifying things like problem-solving approach, use of TDD, etc. (This file is listed near the end of this post.)
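The full file is below, but to give a flavor, entries in a guide like this tend to look something like the following (these particular lines are illustrative, not quoted from my file):

```markdown
- Restate the problem and outline a plan before writing code; ask
  when requirements are unclear.
- Prefer TDD: write a failing test first, then the minimal code to
  make it pass.
- Keep changes small and focused; explain trade-offs in commit messages.
- When stuck, describe what you tried and why it failed rather than
  guessing repeatedly.
```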

Then there's the question of validating LLM-written code, since AI-generated code is often incorrect or inefficient.
