
Ask HN: How are you LLM-coding in an established code base?


Here’s how we’re working with LLMs at my startup.

We have a monorepo with scheduled Python data workflows, two Next.js apps, and a small engineering team. We use GitHub for SCM and CI/CD, deploy to GCP and Vercel, and lean heavily on automation.

Local development: Every engineer gets Cursor Pro (plus Bugbot), Gemini Pro, OpenAI Pro, and optionally Claude Pro. We don’t really care which model people use. In practice, LLMs are worth about 1.5 excellent junior/mid-level engineers per engineer, so paying for multiple models is easily worth it.

We rely heavily on pre-commit hooks: ty, ruff, TypeScript checks, tests across all languages, formatting, and other guards. Everything is auto-formatted. LLMs make types and tests much easier to write, though complex typing still needs some hand-holding.
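As a rough sketch, a pre-commit setup along those lines might look like the following (hook ids for ruff come from Astral's ruff-pre-commit repo; the ty and TypeScript hooks are illustrative local hooks, and the pinned revision is a placeholder):

```yaml
# .pre-commit-config.yaml — illustrative sketch, not an exact config
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0           # pin a current release in practice
    hooks:
      - id: ruff          # lint
      - id: ruff-format   # auto-format
  - repo: local
    hooks:
      - id: ty
        name: ty type check
        entry: uv run ty check
        language: system
        types: [python]
        pass_filenames: false
      - id: tsc
        name: TypeScript check
        entry: npx tsc --noEmit
        language: system
        types_or: [ts, tsx]
        pass_filenames: false
```

Running everything through pre-commit means the LLM-generated code hits the same gate as human code before it ever reaches CI.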

GitHub + Copilot workflow: We pay for GitHub Enterprise primarily because it allows assigning issues to Copilot, which then opens a PR. Our rule is simple: if you open an issue, you assign it to Copilot. Every issue gets a code attempt attached to it.

There’s no stigma around opening lots of PRs; we frequently close the ones we don’t use.

We use Turborepo for the monorepo and are fully uv on the Python side.
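For context, a minimal Turborepo task graph for a monorepo like this might look like the sketch below (task names and outputs are illustrative; Turborepo 2.x uses a `tasks` key where 1.x used `pipeline`):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": { "dependsOn": ["^build"], "outputs": [".next/**"] },
    "lint": {},
    "test": { "dependsOn": ["build"] }
  }
}
```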

All coding practices are encoded in .cursor/rules files. For example: “If you are doing database work, only edit Drizzle’s schema.ts and don’t hand-write SQL.” Cursor generally respects this, but other tools struggle to consistently read or follow these rules no matter how many agent.md-style files we add.
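As an illustration, one of those rule files might look something like this (the filename, globs, and exact frontmatter are hypothetical; Cursor's `.mdc` rule format supports a description and glob scoping):

```
---
description: Database conventions
globs: ["**/schema.ts", "**/db/**"]
---
- For schema changes, only edit Drizzle's schema.ts; never hand-write SQL.
- Generate migrations with drizzle-kit; do not edit generated migration files directly.
```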

My personal dev loop: If I’m on the go and see a bug or have an idea, I open a GitHub issue (via Slack, mobile, or web) and assign it to Copilot. Sometimes the issue is detailed; sometimes a single sentence. Copilot opens a PR, and I review it later.

If I’m at the keyboard, I start in Cursor as an agent in a Git worktree, using whatever the best model is. I iterate until I’m happy, ask the LLM to write tests, review everything, and push to GitHub. Before a human review, I let Cursor Bugbot, Copilot, and GitHub CodeQL review the code, and ask Copilot to fix anything they flag.
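The worktree part of that loop can be sketched in plain git. This is a self-contained toy version (temp repo, illustrative branch name) of the idea: each agent task gets its own checkout, so agent edits never touch your main working copy:

```shell
set -e
# Toy stand-in for the real monorepo, so the commands run anywhere:
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main main-repo && cd main-repo
git -c user.email=a@b -c user.name=ci commit -q --allow-empty -m "init"

# One worktree per agent task, on its own branch:
git worktree add -q ../agent-task -b agent-task
ls ../agent-task >/dev/null && echo "worktree ready"
# ... the agent iterates in ../agent-task, commits, and pushes ...

# Clean up once the branch is pushed and the PR reviewed:
git worktree remove ../agent-task
git branch -q -D agent-task
```

Since worktrees share one object store, this is cheaper than cloning the repo per task, and several agents can run in parallel without stepping on each other.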
