We Might All Be AI Engineers Now
I enjoy writing code. Let me get that out of the way first.
The problem solving, the architecture decisions, the feeling when something clicks into place. That hasn’t changed. What has changed is everything around it.
Lately I’ve been spending most of my time writing agents and tools. Building systems that supervise AI agents, training models, wiring up pipelines where the AI does the heavy lifting and I do the thinking. Honestly? I’m having more fun than ever.
Everyone knows the models are good now. That’s not news. But most people still miss the point. They see AI-generated code, call it slop, and move on. Sure, unguided, it is slop. But guided? The models can write better code than most developers. That’s the part people don’t want to sit with. When guided. When you know what you want. When you know what architecture to reach for. When you understand the tradeoffs and can articulate them clearly. The game switches to easy mode.
I’m building something right now. I won’t get into the details. You don’t give away the idea. But it involves concurrent graph traversal, multi-layer hashing strategies, AST parsing, and file system watchers all wired together. That’s not a weekend hack. But the AI is writing the traversal logic, the hashing layers, the watcher loops, while I design the architecture and decide how the system should behave when state changes propagate. I’m shipping in hours what used to take days. Not prototypes. Real, structured, well-architected software.
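To make the "multi-layer hashing" idea concrete without revealing anything about the actual project: here is a generic, purely illustrative sketch of the pattern. The graph, the node contents, and the function names are all invented for this example; it just shows a Merkle-style scheme where a content hash (layer one) feeds into a structural hash (layer two), with the independent leaf hashes fanned out across a thread pool.

```python
# Illustrative sketch only: a two-layer, Merkle-style hash over a small
# dependency graph. Everything here is invented for the example.
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical module graph: node -> list of dependencies.
GRAPH = {
    "app": ["parser", "watcher"],
    "parser": ["utils"],
    "watcher": ["utils"],
    "utils": [],
}

# Stand-in for file contents, keyed by node name.
CONTENT = {name: f"source of {name}".encode() for name in GRAPH}

def content_hash(name: str) -> str:
    """Layer 1: hash a node's own contents."""
    return hashlib.sha256(CONTENT[name]).hexdigest()

def node_hash(name: str, leaf: dict) -> str:
    """Layer 2: combine a node's content hash with its children's
    node hashes, so a change anywhere below propagates upward."""
    parts = [leaf[name]] + sorted(node_hash(c, leaf) for c in GRAPH[name])
    return hashlib.sha256("".join(parts).encode()).hexdigest()

# Leaf hashes are independent, so fan them out across threads,
# then fold the layered hash up from the leaves to the root.
with ThreadPoolExecutor() as pool:
    leaf = dict(zip(GRAPH, pool.map(content_hash, GRAPH)))

root = node_hash("app", leaf)
```

The useful property is that the root hash changes if any file anywhere in the graph changes, which is exactly what a watcher loop needs to decide what to recompute when state propagates.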
Debugging? Debugging is on steroids now. I run multiple agents at once, feed them my thinking. Here’s what I suspect, here’s where I’d look, here’s what doesn’t make sense. They fan out and dig. It’s like having my problem-solving instincts multiplied across five brains at the same time. I still drive the intuition. The agents just execute at a speed I never could alone.
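The fan-out itself is a simple concurrency pattern. In this sketch each "agent" is reduced to a stub coroutine; `investigate` and the hypothesis strings are invented, and a real setup would call an actual agent API where the stub sleeps. The point is only the shape: the human supplies the leads, the agents run them in parallel.

```python
# Purely illustrative fan-out: N leads investigated concurrently.
import asyncio

# Hypothetical leads a developer might feed to their agents.
HYPOTHESES = [
    "race condition in the watcher loop",
    "stale cache entry after a rename",
    "hash collision between layers",
]

async def investigate(hypothesis: str) -> str:
    """Stand-in for one agent digging into a single lead."""
    await asyncio.sleep(0)  # a real agent call would await network I/O here
    return f"checked: {hypothesis}"

async def fan_out() -> list:
    # All leads run concurrently; results come back as one batch
    # for the human to synthesize.
    return await asyncio.gather(*(investigate(h) for h in HYPOTHESES))

reports = asyncio.run(fan_out())
```

Nothing clever is happening in the code; the leverage is in which hypotheses go into the list.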
I haven’t written a boilerplate handler by hand in months. I haven’t manually scaffolded a CLI in I don’t know how long. I don’t miss any of it.
The problem is: you can’t justify this throughput to someone who doesn’t understand real software engineering. They see the output and think “well the AI did it.” No. The AI executed it. I designed it. I knew what to ask for, how to decompose the problem, what patterns to use, when the model was going off track, and how to correct it. That’s not prompting. That’s engineering.
When someone without that intuition tries the same thing? They get spaghetti. Code that compiles but doesn’t scale. An architecture that falls apart the moment you add a second requirement. The model doesn’t save you from bad decisions. It just helps you make them faster.