My AI development stack in 2026: what I use and why


TL;DR

  • IDE: Cursor (main) + Copilot (autocomplete)
  • LLM for queries: Claude for code, ChatGPT for explanations
  • Automation: n8n self-hosted + Python scripts
  • Documentation: Claude Projects for persistent context
  • Principle: AI doesn’t replace knowing how to code, it amplifies it

Every week someone asks me what AI tools I use for programming. Instead of repeating myself, I’m putting it here. No sponsorships, no affiliate links, just what I actually use.

Philosophy before tools

Before listing apps, something important: AI doesn’t make you a better programmer if you don’t know how to program.

I’ve seen people copy ChatGPT code without understanding it. It works until it doesn’t, and then they don’t even know where to start debugging.

My approach: I use AI to speed up what I already know how to do, not to do what I don’t know. When AI suggests something I don’t understand, I stop and learn. That’s the moment to grow.

That said, here’s the stack.

IDE: Cursor + Copilot

Yes, I use both. It’s not redundant.

Cursor is my main IDE. I use it for:

  • Composer: changes that touch multiple files
  • Contextual chat: questions about my specific codebase
  • Refactoring: “make this function more readable”

Copilot I keep active for autocomplete. It's faster than Cursor's and less intrusive: I type, it suggests, Tab, I continue.

Why not just Cursor? Copilot’s autocomplete is better. Why not just Copilot? It doesn’t have Composer or real agent mode.

If I had to choose one: Cursor. But $30/month for both pays off in productivity.

I have a detailed comparison of Cursor vs Windsurf vs Copilot if you want to dig deeper.

LLMs: Claude for code, ChatGPT for the rest

I don’t use the same model for everything.

Claude (Opus 4.5/Sonnet 4.5) is my preference for code because:

  • Fewer hallucinations on technical topics
  • Better at following complex instructions
  • Long context (200k tokens) that fits entire projects

I use it for:

  • Debugging weird errors
  • Designing architecture (“how would you structure this?”)
  • Reviewing my code (“do you see anything that could fail?”)

ChatGPT I use for:

  • Conceptual explanations (“explain how OAuth works”)
  • Brainstorming (“give me 5 ways to solve this problem”)
  • Non-code stuff (emails, texts, etc.)

DeepSeek I keep around for when I need something free and local. It runs on my machine with Ollama. It's not as good as Claude, but for simple tasks it works and costs me nothing.
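If you want to try the local route, this is roughly what it looks like through Ollama's Python client. It's a minimal sketch: the model tag below is a placeholder for whichever DeepSeek build you've pulled, and it assumes the Ollama daemon is already running.

```python
# Minimal sketch: querying a local model through the Ollama Python client.
# Assumes the Ollama daemon is running and the model has already been
# pulled (the "deepseek-coder" tag below is illustrative).
import ollama

response = ollama.chat(
    model="deepseek-coder",
    messages=[
        {"role": "user", "content": "Write a Python function that slugifies a string."}
    ],
)
print(response["message"]["content"])
```

For throwaway questions that would otherwise burn paid tokens, that's often enough.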

I’m not a fanboy of any model. I use whichever works best for each task.

Claude Projects: persistent context

This is underrated. Claude Projects lets you create “projects” with context documents that persist between conversations.

I have a project for each client/code project with:

  • Project README
  • Folder structure
  • Architecture decisions
  • Tech stack

When I open a new conversation, Claude already knows what the project is about. I don’t have to explain everything from scratch every time.

This alone is worth the $20/month for Claude Pro. If you code professionally, it pays for itself.

Automation: n8n + Python

For repetitive workflows I use n8n (self-hosted on my server).

Examples of what I automate:

  • When I merge to main, it auto-deploys
  • When I create a Linear issue, a git branch is created
  • Daily summary of error logs to Slack

n8n is like Zapier but open source and self-hosted. Total control, no limits, no monthly costs.

For more specific things, Python scripts. Not everything needs a platform. Sometimes a 50-line script running with cron is enough.
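To make that concrete, here's a stripped-down sketch of the error-log summary idea. Everything specific in it is an assumption on my part: the log path and grouping logic are placeholders, and it expects a standard Slack incoming webhook plus daily log rotation.

```python
# Sketch of a daily error-log summary posted to Slack via cron.
# LOG_PATH is a placeholder; assumes the log rotates daily and that
# SLACK_WEBHOOK_URL points at a standard Slack incoming webhook.
import os
from collections import Counter

import requests

LOG_PATH = "/var/log/myapp/app.log"            # hypothetical path
WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # set in the crontab environment


def summarize_errors(path: str, top_n: int = 5) -> str:
    counts = Counter()
    with open(path) as f:
        for line in f:
            if "ERROR" in line:
                # Rough grouping: use the text after "ERROR" as the key
                counts[line.split("ERROR", 1)[1].strip()[:80]] += 1
    if not counts:
        return "No errors today."
    lines = [f"{n}x  {msg}" for msg, n in counts.most_common(top_n)]
    return "Top errors today:\n" + "\n".join(lines)


if __name__ == "__main__":
    requests.post(WEBHOOK_URL, json={"text": summarize_errors(LOG_PATH)}, timeout=10)
```

Point a daily cron entry at the script and the summary shows up in Slack every morning.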

Documentation: Markdown + AI

I don’t use Notion or fancy tools. Markdown in the repo.

But I do use AI for:

  • Generating initial docs: “document this function”
  • Improving what I write: “make this clearer”
  • READMEs: first draft with AI, then I edit

AI-generated documentation needs review. But it’s better than starting from scratch.
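If you'd rather script the first draft than paste into a chat window, a minimal sketch with Anthropic's Python SDK looks something like this. The model name is illustrative (check the current model list), it assumes ANTHROPIC_API_KEY is set in your environment, and the function being documented is just an example.

```python
# Sketch: ask Claude for a first-draft docstring, then edit by hand.
# Assumes ANTHROPIC_API_KEY is set; the model name is illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

source = '''
def retry(fn, attempts=3, delay=1.0):
    ...
'''

message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever is current
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": f"Write a concise docstring for this function:\n{source}",
    }],
)
print(message.content[0].text)
```

Whatever it produces goes through the same rule: first draft from AI, final pass by me.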

What I DON’T use (and why)

Devin and similar autonomous coding agents: they still don't work well for real projects. Lots of hype, little practical utility.

GitHub Copilot Workspace: Promising but immature. Maybe in 6 months.

“All-in-one” tools: I prefer combining specialized tools over using one that does everything poorly.

Typical workflow

When I start a new feature:

  1. Design in Claude: “I need to implement X. Stack is Y. How would you structure it?”
  2. Scaffold with Cursor Composer: “Create the base files for this feature”
  3. Implementation: I write with Copilot autocomplete
  4. Review with Claude: “Review this code, do you see issues?”
  5. Tests with Cursor: “Generate tests for this function”
  6. Documentation: “Document the new endpoints”

It’s not linear. Sometimes I go back. But AI is in every step, saving time.

How much it actually saves

It’s hard to measure, but my estimate:

| Task | Without AI | With AI | Savings |
|------|------------|---------|---------|
| New project scaffold | 2h | 20min | 80% |
| Debugging weird error | 1h+ | 15min | 75% |
| Writing tests | 1h | 20min | 70% |
| Documentation | 30min | 10min | 65% |
| Normal code | 1h | 45min | 25% |

“Normal” code is where I save least because I already know what I want to write. Where I save most is on tasks I hate: tests, documentation, debugging.

The cost

| Tool | Cost/month |
|------|------------|
| Cursor Pro | $20 |
| Copilot | $10 |
| Claude Pro | $20 |
| ChatGPT Plus | $20 |
| n8n | $0 (self-hosted) |
| Total | $70 |

Is it a lot? Depends. If you bill $50/hour and AI saves you 5 hours a month, that's $250 recovered against $70 spent: it's already paid for itself. In my case, the ROI is clear.

Final advice

Don’t copy my stack. Copy the process:

  1. Try tools with free trials
  2. Measure if it actually saves time
  3. Keep what works for you
  4. Review every 6 months because this changes fast

What I use today isn't what I used a year ago. And it probably won't be what I use next year. The perfect stack doesn't exist; there's only the stack that works for you now.


What’s your AI development stack? Is there something I use that surprises you or something you think I’m missing?
