The Vibe Coding Reckoning
The numbers are insane
Andrej Karpathy told us to “fully give in to the vibes, embrace exponentials, forget that the code even exists.” And we listened.
Collins Dictionary named “vibe coding” its Word of the Year for 2025. By early 2026, the numbers speak for themselves:
- Cursor hit $2B in annual recurring revenue — in 14 months
- Lovable crossed $200M ARR
- The AI app builder market sits at $4.7B, projected to triple by next year
- 41% of all code committed globally is now AI-generated
- 92% of US-based developers use AI coding tools every single day
This isn’t a trend. It’s a phase transition. The money has spoken, and it’s screaming.
Then the backlash hit
And then, in January and February 2026, something cracked.
Stack Overflow fired the first shot: “A new worst coder has entered the chat” — a direct hit at developers shipping code they can’t understand.
Hackaday piled on with “How Vibe Coding Is Killing Open Source”, pointing at a flood of low-quality AI-generated pull requests drowning maintainers.
Red Hat published “The Uncomfortable Truth About Vibe Coding”. The New Stack warned about “catastrophic explosions”.
Four major publications. Within weeks of each other. Same message.
And the data backs them up. A Georgetown CSET study found that 45% of AI-generated code contains real, exploitable security flaws. A December 2025 analysis showed AI co-authored code has 1.7x more major issues than human-written code. AWS put it bluntly: review capacity, not developer output, is now the bottleneck.
There’s even a new job title on LinkedIn now: “Recovery Engineer” — specialists hired to refactor and un-slop projects built entirely by prompting.
Both sides are wrong
Here’s where most people stop thinking.
The boosters — Guillermo Rauch saying Vercel’s v0 expands the developer pool from 5 million to 100 million — ignore something fundamental. Most of those 95 million new “developers” can’t debug what they shipped.
They can generate a landing page, but they can’t trace a race condition. They can prompt their way to an MVP, but they can’t handle the 3am incident when production breaks.
The doomers — ThePrimeagen insisting that “if you can’t read the code, you can’t lead the project” — ignore something equally fundamental. The best engineers are demonstrably, measurably more productive with AI. The data isn’t ambiguous. Senior developers who leverage AI tools are shipping at a pace that was physically impossible two years ago.
Both camps miss the actual shift.
The x100 developer isn’t someone who can’t code. It’s someone who can code AND leverage AI to ship at 100x the pace.
The danger isn’t AI — it’s confusing speed with competence. Vibe coding doesn’t make you a developer any more than a calculator makes you a mathematician. But refusing to use a calculator doesn’t make you smarter either.
Where the line actually is
The line isn’t “AI vs no-AI.” It’s stakes.
Vibe coding for prototypes? Ship it. MVPs? Go for it. Internal tools that three people will use? Absolutely. This is where AI-driven development genuinely shines — you can validate an idea in hours instead of weeks.
Vibe coding for production systems handling user data, financial transactions, medical records? You need to read every line. You need to understand the failure modes. You need to know what happens when the database connection drops, when the token expires, when the user sends a payload you didn’t expect.
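Those failure modes are exactly what a reviewer has to verify. A minimal sketch in Python of what handling an unexpected payload looks like (the names `Payload` and `parse_payload` are illustrative, not from any framework):

```python
# Hypothetical sketch: rejecting malformed input instead of crashing at 3am.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Payload:
    user_id: int
    amount: float


def parse_payload(raw: dict) -> Optional[Payload]:
    """Return a validated Payload, or None if the input is malformed."""
    try:
        user_id = int(raw["user_id"])
        amount = float(raw["amount"])
    except (KeyError, TypeError, ValueError):
        return None  # unexpected payload: reject it explicitly
    if amount <= 0:
        return None  # domain rule: no zero or negative transactions
    return Payload(user_id=user_id, amount=amount)
```

AI will happily generate the happy path; the `except` clause and the domain check are the lines a human has to insist on.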
Theo from t3.gg is right that “Product Engineer” is the new title. But that engineer still needs to understand what shipped. The title change isn’t a demotion of technical skill — it’s an expansion of scope.
You’re responsible for the whole thing now. And “I didn’t write that part, the AI did” isn’t an excuse when it breaks.
The real skill shift
Here’s what nobody’s talking about enough.
The shift isn’t from “writing code” to “not writing code.” It’s from writing code to reviewing code. From prompt engineering to context engineering.
The developers who will win are the ones who structure their codebases for AI agents — who build MCP servers that give agents structured access to their tools, who maintain CLAUDE.md files that encode project conventions, who use shadcn/skills to give AI the right context for their stack.
The ones who just paste requirements into ChatGPT and pray? They’ll get left behind.
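To make "context engineering" concrete: a CLAUDE.md at the repo root hands an agent the project's conventions up front instead of forcing it to guess. The contents below are a hypothetical sketch, not taken from any real project:

```markdown
# CLAUDE.md (project conventions, hypothetical example)

## Stack
- Next.js, TypeScript strict mode, shadcn/ui components

## Conventions
- All data access goes through `src/db/queries.ts`; never inline SQL in routes
- New components are server components by default; add `"use client"` only when needed

## Commands
- Run `pnpm test` before every commit; `pnpm lint --fix` for style
```

A file like this is the difference between an agent that follows your architecture and one that invents a new one per prompt.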
And the junior developer trap is real. Entry-level component work has largely vanished. But the answer isn’t to gatekeep AI — it’s to redefine what “junior” means.
A junior developer in 2026 isn’t someone who writes basic CRUD components. It’s someone who’s learning to review AI output, understand system architecture, and structure context for agents. Different skill set. Arguably harder. Definitely more valuable.
The vibe coding reckoning isn’t about whether AI is good or bad.
It’s about whether we’re honest about what it can and can’t do. The developers who thrive won’t be the ones who refuse AI or the ones who blindly trust it. They’ll be the ones who know exactly where the line is — and have the skills to work on both sides of it.
The x100 developer doesn’t skip the code review. They just review ten times more code.