
Vibe Coding: What It Is, How It Works, and Why 90% Do It Wrong

· 9 min read

TL;DR

  • Vibe coding = describe what you want in plain language, let AI write the code, review and repeat
  • It’s real. Claude Code’s annualized revenue already exceeds $2.5 billion. Uber, Salesforce, and Accenture use it in production
  • The problem isn’t the concept. It’s that most people use it as if AI-generated code were automatically good
  • The three most common mistakes: not reviewing what it generates, ignoring security, and confusing “it works” with “it’s well written”
  • Best tools: Cursor, Claude Code, Windsurf, GitHub Copilot — each for a different profile

Andrej Karpathy coined the term in 2025. The idea was simple: program by following your intuition, without worrying about every line of code. The AI writes, you direct.

A year later, vibe coding is no longer an experiment. It’s an industry. And like any new industry, it has plenty of hype, some real results, and quite a few problems people would rather not mention.

Let’s talk about all of it.

What is vibe coding?

At its most basic: you describe what you want in natural language, an AI tool generates the code, you review it (or don’t), and you use it.

It’s not just autocomplete. It’s a shift in the entire workflow:

Before:

  1. Think through the solution
  2. Code it manually
  3. Debug errors
  4. Iterate

With vibe coding:

  1. Describe the outcome you want
  2. AI generates the code
  3. You review whether it does what you expect
  4. Ask for adjustments in plain language
  5. Iterate
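
The loop above can be sketched in a few lines of Python. This is only an illustration of the control flow: `generate` and `looks_right` are stubs standing in for whatever AI tool you use and for the human review step, not real APIs.

```python
# A minimal, runnable sketch of the vibe-coding loop.
# `generate` and `looks_right` are hypothetical stand-ins.

def generate(prompt: str) -> str:
    """Stand-in for an AI code generator (Cursor, Claude Code, etc.)."""
    return f"# code generated for: {prompt}"

def looks_right(code: str) -> bool:
    """Stand-in for the human review step: read the code, run the tests."""
    return "generated" in code  # placeholder check

def vibe_code(outcome: str, max_rounds: int = 3) -> str:
    prompt = outcome
    for _ in range(max_rounds):
        code = generate(prompt)   # step 2: AI generates the code
        if looks_right(code):     # step 3: review whether it does what you expect
            return code
        # step 4: ask for adjustments in plain language, then iterate
        prompt = f"{outcome}\nAdjust: fix the issues found in review"
    raise RuntimeError("still not right after several review rounds")
```

The point of the sketch is where the human sits: inside the loop, on every iteration, not at the end.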

Programming knowledge is still necessary. But the bottleneck is no longer writing code — it’s knowing what to ask for and knowing how to evaluate what you get back.

As I analyzed in AI Maturity: Less Code, More Focus, natural language is becoming the new programming language. Not because programming languages are going away, but because knowing how to articulate what you need to AI is becoming more valuable than syntax.

Why it’s growing so fast

The numbers are hard to ignore.

GitHub reported 43 million monthly pull requests in 2025, up 23% year-over-year. Not because there are more developers — it’s because each developer is generating more code.

The annualized revenue of Claude Code (Anthropic’s AI development tool) already exceeds $2.5 billion. Companies like Uber, Salesforce, and Accenture use it in production, not as an experiment.

Anthropic just launched Code Review, a tool specifically designed to review all the code that vibe coding is generating. That’s not a coincidence: the pull request volume grew so much that manual reviews became a bottleneck.

Vibe coding is not a Twitter trend. It’s a real transformation in how software gets built.

The problem nobody tells you about

If vibe coding were all upside, everyone would be using it perfectly. They’re not.

There’s one central problem that keeps showing up: people confuse “AI generated it” with “it’s good”.

AI-generated code works most of the time. It passes basic tests. It does what it’s asked. And that’s exactly where the problem starts: “it works” and “it’s well written” are not the same thing.

Mistake 1: Not reviewing what it generates

The correct vibe coding workflow includes a human review. The workflow most people follow involves copying and pasting without reading.

AI-generated code can:

  • Use deprecated or outdated patterns
  • Introduce unnecessary dependencies
  • Have logic that looks correct but breaks on edge cases
  • Be entirely functional but impossible to maintain

When code works and you don’t read it, you’re accumulating technical debt at industrial speed.
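
Here is a hand-made illustration (not output from any particular tool) of the third bullet: code that passes the obvious test but breaks on an edge case nobody reviewed.

```python
# Looks correct, passes the happy-path test...
def average_order_value(orders):
    return sum(o["total"] for o in orders) / len(orders)

assert average_order_value([{"total": 10}, {"total": 20}]) == 15

# ...but an empty list raises ZeroDivisionError, and a missing
# "total" key raises KeyError. The version a human review pushes for:
def average_order_value_reviewed(orders):
    if not orders:
        return 0.0
    return sum(o.get("total", 0) for o in orders) / len(orders)
```

Nothing in the first function is "wrong" in a way a quick glance or a basic test would catch. That is exactly why reading the code matters.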

Mistake 2: Ignoring security

This is the most dangerous one. AI models generate code that works — not code that’s secure by default.

Real examples of what shows up in AI-generated code without proper oversight:

  • SQL queries without parameterization (SQL injection waiting to happen)
  • Hardcoded tokens and credentials
  • Insufficient input validation
  • Permissive CORS by default
  • Dependencies with known vulnerabilities

The problem isn’t the AI. It’s that if you don’t know what to look for, you won’t see these issues even when the code “works.”
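
The first two items on that list are concrete enough to show in code. A minimal sketch using Python's built-in sqlite3 module (the table, column, and environment-variable names are invented for illustration):

```python
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# The vulnerable pattern AI tools often emit: string interpolation.
#   query = f"SELECT role FROM users WHERE name = '{user_input}'"
#   conn.execute(query)  # the OR '1'='1' clause matches every row

# Parameterized version: the driver treats user_input as data, not SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
assert rows == []  # the injection string matches no real user

# And for credentials: read them from the environment, never hardcode.
api_token = os.environ.get("API_TOKEN")  # hypothetical variable name
```

Both fixes are one-liners once you know to look for them. That is the whole point: the knowledge gap, not the AI, is the vulnerability.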

Mistake 3: Confusing speed with quality

Vibe coding is very fast for prototyping. It’s tempting to carry that same loose level of oversight into production.

In a prototype, a bug is an anecdote. In production, it’s an incident. The speed you gain generating code can be lost many times over when you end up debugging a system nobody understands because “the AI generated it and it worked.”

Vibe coding vs. real coding?

This question comes up a lot. The short answer: it’s a false distinction.

Vibe coding doesn’t replace knowing how to program. It amplifies it.

An experienced developer using vibe coding can do in a day what used to take a week. An inexperienced developer using vibe coding generates fast code that nobody can maintain and that fails in production.

The most useful analogy: an expert video editor with Premiere does in hours what used to take days. Someone with Premiere but no visual judgment makes bad videos quickly. The tool isn’t the problem.

What vibe coding is genuinely changing is where experience matters. It’s less about remembering syntax or writing boilerplate. It’s now about:

  • Designing the right architecture
  • Knowing what to ask for and how to ask for it
  • Evaluating whether what was generated is actually correct
  • Identifying security and performance issues

If you understand that, vibe coding is a massive advantage. If you don’t, it’s just a faster way to create problems.

Vibe coding tools (and what each one is good for)

Cursor — For developers who want control

A VS Code fork with native AI integration. Its flagship feature is Composer: describe a change that spans multiple files and it applies them all at once.

Best for: developers who want to approve each change before applying it. It has a learning curve but gives maximum control.

Price: $20/month

Claude Code — For complex projects

Anthropic’s tool running directly in the terminal. It understands your full project context, can execute commands, read files, and make coordinated changes. The most powerful option for large codebases.

Best for: large projects where you need AI to understand the full architecture, not just individual files.

Price: $20/month (included in Claude Pro)

Windsurf — For delegating more

Codeium’s agent. Describe a task (“implement Google OAuth login”) and it handles the whole process: searches documentation, creates files, writes code. More autonomous than Cursor.

Best for: developers who prefer to describe the outcome and review at the end, not step by step.

Price: $15/month (free tier available)

GitHub Copilot — For not changing your setup

Extension for VS Code, JetBrains, Neovim. Doesn’t change your editor — enhances what you already use. Best-in-class autocomplete. The agent mode is more limited than Cursor or Windsurf.

Best for: developers with an optimized existing setup who don’t want to change it.

Price: $10/month

I have a detailed comparison of Cursor, Windsurf, and Copilot if you want to go deeper on the differences.

How to do vibe coding right

Five rules that make the difference between leveraging it and getting burned:

1. Always read the generated code. Even if it’s long, even if it looks correct, even if it passed tests. Reading gives you context and lets you catch problems that tests don’t.

2. Ask for explanations. Before applying an AI-generated code block, ask it to explain what it does and why. If you can’t follow the explanation, don’t apply the code.

3. Explicitly review for security. Ask the AI: “are there any security issues in this code?” Not because AI always catches them, but because it forces you to think about it.

4. Use small commits. When something breaks, you want to revert specific changes. Vibe coding makes it tempting to generate a lot of code at once. Don’t.

5. Define context before asking. The more you explain to the tool about your project, its constraints, and what you want to avoid, the better the output. “Add authentication” generates generic code. “Add JWT authentication, no external libraries, compatible with the REST API we already have at /api/auth” generates something useful.
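
To make rule 5 concrete, here is a sketch of what the specific prompt (“JWT authentication, no external libraries”) might plausibly produce, and what rules 1–3 would then have you review. HS256 signing and verification with only the Python standard library; the secret, claim names, and helper names are placeholders, not any real API.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def make_jwt(payload: dict, secret: bytes) -> str:
    header_b64 = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload_b64 = b64url_encode(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}.{b64url_encode(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Return the payload if the signature and expiry check out."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    # compare_digest resists timing attacks -- the kind of detail an
    # explicit security review (rule 3) is there to catch.
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    payload = json.loads(b64url_decode(payload_b64))
    if payload.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return payload
```

Whether code like this is acceptable for your system is exactly the judgment call the rules are about: you would still ask the tool to explain it (rule 2), check how the secret is stored (rule 3), and land it as its own small commit (rule 4).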

What’s coming: automated review of AI-generated code

The very growth of vibe coding is creating a new problem: there’s so much generated code that reviewing it manually doesn’t scale.

The market’s answer is to use AI to review the code that AI generates. Anthropic launched Code Review for exactly this reason: multiple agents in parallel detecting errors, prioritizing by severity, and integrating directly with GitHub. Estimated price: $15-$25 per review.

It’s ironic and logical at the same time. Vibe coding multiplied pull requests so much that it now needs its own automation layer to be manageable.

What this signals: vibe coding isn’t going away, but the tooling ecosystem around it will keep growing and specializing.

The bottom line

Vibe coding is real, it works, and it has a long future ahead. It also has real problems you can avoid if you understand them before you hit them in production.

The question isn’t whether to use it. It’s understanding that AI writes code fast — not good code by default. You provide the judgment. Without judgment, vibe coding is just a faster and more expensive way to generate technical debt.

With judgment, it’s probably the most significant productivity improvement software development has seen in the last twenty years.

