What 6 Months of AI Coding Did to One Dev Team (And What It Means for Yours)
AI coding tools promised faster development. What they actually delivered is something deeper: a complete shift in where the real work of software engineering happens.
Code is no longer the bottleneck. Decision-making, specifications, and supervision are.
How AI Broke the Old Rhythm of Software Development
For years, software teams worked in a familiar pattern: write code, review code, ship code. Productivity was measured in tickets closed, lines committed, and features released. The craft lived in the code itself.
Once AI coding tools like Claude Code and other assistants entered the workflow, that rhythm snapped. Junior developers could suddenly generate thousands of lines of working code in days. The problem wasn’t getting code written anymore—it was keeping up with reviewing and integrating it.
Senior engineers found themselves drowning in pull requests. One engineer spent three days reviewing AI-assisted code and still couldn’t read all of it. The application worked, but he couldn’t confidently say he understood everything it was doing. That’s when the real shift became obvious: the bottleneck had moved.
The New Bottleneck: Specifications, Not Syntax
With AI writing more and more of the code, engineering quality hasn’t disappeared—it has moved upstream.
In the old world, you could hand a developer a loose user story like “I want to upload a photo” and rely on shared context to fill in the gaps: file types, progress bars, error states, edge cases. Humans infer a lot from culture and experience.
AI doesn’t. It does exactly what you ask, nothing more.
That’s how you end up with a notification system that looks perfect in testing but sends 50,000 emails in production—because nobody specified rate limiting in the requirements.
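To make the point concrete, here is what "rate limiting" might look like once it is stated as an explicit requirement rather than an unstated assumption. This is a minimal token-bucket sketch, not the notification system from the story; the numbers and names are illustrative.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: at most `capacity` sends in a burst,
    refilling at `refill_per_sec` tokens per second."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 50,000 send attempts: only about `capacity` get through.
bucket = TokenBucket(capacity=100, refill_per_sec=1.0)
sent = sum(1 for _ in range(50_000) if bucket.allow())
```

A human developer would add something like this instinctively; an AI only adds it if the spec says so.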
So the rigor that used to live in code review now has to live in the specs. That means:
• Structured requirements instead of vague user stories
• State machines that clearly define allowed states and transitions
• Decision tables for complex logic
• Detailed PRDs that leave as little as possible open to interpretation
Ironically, the kind of formal documentation that agile tried to kill is exactly what makes AI coding tools powerful. When you feed an agent a precise state machine and a clear spec, the code it generates is often shockingly accurate.
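A "precise state machine" doesn't have to be heavyweight. Sketched below is the photo-upload flow from earlier as an explicit transition table; the states and events are illustrative, but the principle is the point: anything not listed is a spec violation, not a judgment call left to the AI.

```python
# Photo-upload flow as an explicit state machine (illustrative states).
TRANSITIONS = {
    "idle":       {"select_file": "validating"},
    "validating": {"valid": "uploading", "invalid": "error"},
    "uploading":  {"progress": "uploading", "done": "complete", "fail": "error"},
    "error":      {"retry": "validating", "dismiss": "idle"},
    "complete":   {},  # terminal state: no transitions out
}

def step(state: str, event: str) -> str:
    """Apply an event; anything not in the table is an explicit spec violation."""
    allowed = TRANSITIONS[state]
    if event not in allowed:
        raise ValueError(f"illegal transition: {event!r} from {state!r}")
    return allowed[event]

# Walk the happy path.
state = "idle"
for event in ["select_file", "valid", "progress", "done"]:
    state = step(state, event)
```

Handing an agent this table removes entire classes of ambiguity: the error states, retries, and terminal conditions are all written down.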
When Tests and Specs Become the Real Product
Once you start thinking this way, something interesting happens: the specification and test suite become more important than the code itself.
If you have a solid test suite and clear specs, you can do things like:
• Ask an AI to rewrite your backend from Node.js to Rust
• Feed it the tests and say, “Make this pass in Rust”
• Let the AI generate and self-check the new implementation
In that world, the code is almost disposable. What truly matters is:
• How clearly you define behavior (specs)
• How well you can verify it (tests)
So the key hiring questions shift from “Can they write clean code?” to:
• Can they write specs an AI can’t misinterpret?
• Can they design a test suite that catches hallucinations and edge cases before production?
These are different skills from traditional coding, and most developers haven’t been trained for them yet. If you want to go deeper into how to work with Claude Code specifically, check out this in-depth Claude Code guide on models, skills, and sub-agents.
The Rise of Supervisory Work (And Why Seniors Are Drowning)
Inside many teams, a new layer of work has appeared that doesn’t have a standard name yet. You can think of it as supervisory engineering.
It sits between “writing code” and “shipping to production” and includes:
• Breaking big problems into agent-sized chunks
• Deciding when to let AI run and when to step in
• Fixing bad outputs by improving prompts and specs, not rewriting everything by hand
• Ensuring AI-generated code fits the overall architecture
On one team, this has split engineers into two rough groups:
• Seniors who understand the whole system and are buried in reviews, architectural decisions, and firefighting
• Juniors who use AI tools to generate code at 10x speed and are suddenly productive in weeks instead of months
Juniors often thrive because they don’t have years of muscle memory tied to doing everything manually. They treat AI as a teammate, not a threat.
Mid-level developers, however, can struggle. They know how to write solid code, but they now need to unlearn habits and shift from thinking in terms of syntax and implementation to thinking in terms of:
• Clear problem decomposition
• Detailed prompts and specs
• System-level behavior rather than local functions
For leaders, this means the job description has quietly changed. You no longer just need people who can code—you need people who can architect, specify, and supervise.
Why Tribal Knowledge Still Beats “Self-Healing” AI
AI also struggles with something humans rely on heavily: tribal knowledge.
Imagine a 503 outage at 2 a.m. An on-call engineer pastes the error into an AI assistant. The AI reads the docs and says, “Restart the server.” It works—for a few minutes. Then it crashes again. The loop repeats. By the time a senior is paged, the server has been restarted six times.
The senior glances at the logs and spots the real issue: a database connection pool is maxed out because of a batch cron job. That context—“we have this weird cron job that sometimes overloads the DB”—lives in people’s heads, not in the manual.
This is why a lot of talk about fully self-healing systems is premature. Unless your AI has access to:
• Historical incidents
• Edge cases
• Architecture quirks
• All the undocumented “everyone just knows this” details
…it will give you generic answers that look reasonable but don’t solve the real problem.
Building an Agent’s “Subconscious”
To make AI genuinely useful in operations, you need something like an agent subconscious: a knowledge graph of everything your senior engineers know but haven’t written down.
That means documenting, for every incident:
• What broke
• How it was fixed
• What prior knowledge a senior used to diagnose it
• Any non-obvious relationships (“this cron job sometimes overloads that pool”)
Over time, this becomes the missing context that lets AI give better answers during outages.
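One lightweight way to start is a structured postmortem record per incident. The schema and retrieval below are a deliberately naive sketch (a real system would use embeddings or a proper graph store), but even this captures the "weird cron job" knowledge that otherwise lives only in people's heads.

```python
from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    """One node in the 'agent subconscious': a structured postmortem."""
    what_broke: str
    how_fixed: str
    prior_knowledge: str                               # what a senior knew that the docs didn't
    related: list[str] = field(default_factory=list)   # non-obvious relationships

kb = [
    IncidentRecord(
        what_broke="503s at 2 a.m.; restarts only helped for a few minutes",
        how_fixed="Rescheduled the batch cron job; raised the DB pool limit",
        prior_knowledge="The nightly cron job can exhaust the connection pool",
        related=["cron:nightly-batch", "db:connection-pool"],
    ),
]

def context_for(symptom: str) -> list[IncidentRecord]:
    """Naive keyword retrieval; real systems would query a knowledge graph."""
    return [r for r in kb if symptom.lower() in r.what_broke.lower()]

hits = context_for("503")
```

Feed `hits` into the AI's context during the next 503 outage and it can suggest checking the cron job instead of restarting the server six times.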
From Yes-Men to “Angry Agents”
There’s another subtle problem: AI assistants are trained to be helpful. They tend to agree with your assumptions and try to make them work.
During an outage, that’s dangerous. You don’t want a yes-man—you want something that challenges your thinking.
One idea is to deliberately create “angry agents”: AI setups prompted to poke holes in your theory, question your assumptions, and suggest alternative explanations. Otherwise, you risk a human and an agent confidently agreeing on the wrong diagnosis while your system burns.
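What an "angry agent" looks like in practice is mostly prompt design. The sketch below assembles a chat-style request with an adversarial system prompt; the wording is illustrative and the message list is meant to be passed to whatever LLM client you already use (no specific API is assumed here).

```python
# Illustrative adversarial system prompt; tune the wording for your team.
ANGRY_AGENT_PROMPT = """You are an incident-review skeptic.
Never accept the engineer's hypothesis at face value. For every theory:
1. State the strongest reason it could be wrong.
2. Name at least two alternative root causes.
3. Name the cheapest observation that would falsify the current theory.
Do not propose fixes until the diagnosis survives your objections."""

def build_review_request(theory: str, evidence: str) -> list[dict]:
    """Assemble chat messages; hand them to your model client of choice."""
    return [
        {"role": "system", "content": ANGRY_AGENT_PROMPT},
        {"role": "user", "content": f"Theory: {theory}\nEvidence: {evidence}"},
    ]

messages = build_review_request(
    theory="The server just needs another restart.",
    evidence="Six restarts tonight; crashes recur within minutes.",
)
```

The key property is that the agent is rewarded for disagreement: it must attack the theory before it is allowed to help.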
What to Hire and Train for in the Age of AI Coding
If you’re running a tech team or hiring developers right now, the ground is moving under your feet. The work isn’t disappearing—it’s shifting from execution to supervision and architecture.
Here’s what to prioritize.
1. Architectural Thinking Over Raw Coding Speed
Don’t optimize for “can write code fast.” AI already does that.
Instead, look for people who can:
• Understand and design system architecture
• See how a new feature fits into the whole
• Anticipate failure modes and edge cases
2. Unambiguous Specs and Strong Test Design
The best developers in an AI-first team are the ones who can:
• Write specs that are not open to interpretation
• Use state machines and decision tables to remove ambiguity
• Design test suites that effectively become the product contract
When tests and specs are strong, you can safely let AI handle more of the implementation. If you want a practical starting point for working this way with Claude, see this beginner-friendly Claude tutorial that goes from first prompt to Claude Code.
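A decision table is the simplest of these tools to adopt. Sketched below is an illustrative upload policy (the conditions and outcomes are hypothetical): every combination of inputs maps to exactly one outcome, so an AI implementing it has nothing left to guess.

```python
# Decision table for an illustrative upload policy.
# Key: (is_image, under_size_limit, user_verified) -> action
DECISION_TABLE = {
    (True,  True,  True):  "accept",
    (True,  True,  False): "accept_with_review",
    (True,  False, True):  "reject_too_large",
    (True,  False, False): "reject_too_large",
    (False, True,  True):  "reject_wrong_type",
    (False, True,  False): "reject_wrong_type",
    (False, False, True):  "reject_wrong_type",
    (False, False, False): "reject_wrong_type",
}

def decide(is_image: bool, under_size_limit: bool, user_verified: bool) -> str:
    return DECISION_TABLE[(is_image, under_size_limit, user_verified)]

# Exhaustiveness check: all 2^3 input combinations are covered.
assert len(DECISION_TABLE) == 2 ** 3
```

The exhaustiveness assertion is the point: a vague user story can silently omit a case, but a decision table cannot.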
3. Debugging Systems You Didn’t Write
In a world where agents generate more and more of the code, your team must be comfortable debugging systems they didn’t personally implement.
That means being able to:
• Read unfamiliar code quickly
• Infer intent from behavior, logs, and tests
• Track down issues across services and layers
This is especially critical during incidents, when nobody has time to “get familiar” with the codebase first.
The Hidden Risk: Becoming Strangers to Your Own Codebase
There’s one more long-term risk that leaders need to watch: if AI writes most of the code and humans stop reading it, your team can slowly become strangers in their own system.
Historically, code review wasn’t just about catching bugs. It was how developers learned:
• The architecture
• The patterns and conventions
• The trade-offs behind past decisions
If agents quietly generate large chunks of logic and nobody really reviews it deeply, then when something breaks at 3 a.m., your team is reverse-engineering machine-written code under pressure.
One way to counter this is to force AI to surface its architectural decisions before it writes the full implementation. For example:
• Ask the agent to outline its planned architecture, data flows, and key decisions
• Have senior engineers review and discuss those decisions
• Only then let the agent generate the detailed code
This creates a human–AI symbiosis where people stay in control of the big decisions and understand how the system is evolving, even if they’re not hand-writing every function.
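One way to operationalize this gate is to make the plan a structured artifact the agent must fill out and humans must approve. The schema and approval rule below are a sketch under assumed field names, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class ArchitecturePlan:
    """What the agent must produce, and humans approve, before any code."""
    components: list[str]      # services/modules it intends to create or touch
    data_flows: list[str]      # "A -> B: payload" descriptions
    key_decisions: list[str]   # trade-offs, with the rejected alternatives
    open_questions: list[str]  # anything the spec leaves ambiguous

def ready_for_implementation(plan: ArchitecturePlan) -> bool:
    """Gate: approve only a plan with substance and no unresolved questions."""
    return bool(plan.components and plan.key_decisions) and not plan.open_questions

plan = ArchitecturePlan(
    components=["upload-service", "thumbnail-worker"],
    data_flows=["upload-service -> queue: image_id", "queue -> thumbnail-worker"],
    key_decisions=["Async thumbnails via queue (rejected: inline resize)"],
    open_questions=[],
)
```

Any non-empty `open_questions` blocks code generation and bounces the plan back to a senior, which is exactly where you want the human in the loop.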
The Teams That Win Will Retrain, Not Panic
AI coding tools have shifted the bottleneck from typing speed to thinking clarity. Seniors are overloaded with reviews and decisions. Juniors are suddenly productive with AI as a partner. Mid-levels are being forced to rethink how they work.
The companies that win won’t be the ones that ignore AI or blindly automate everything. They’ll be the ones that:
• Retrain developers to think in specs, tests, and architecture
• Capture institutional knowledge before it disappears
• Build processes where humans supervise and guide AI, not the other way around
AI isn’t replacing software engineering—it’s changing what software engineering is. The sooner your team adapts to that reality, the better positioned you’ll be for what comes next.