MCP Servers: What They Are and the Best Ones to Start With
You’ve got a powerful language model. But when you ask it about the state of your database, the contents of a repository, or what’s in that PDF from last week, it either hallucinates or flat-out tells you it doesn’t have access.
The problem isn’t the model. It’s the architecture.
MCP (Model Context Protocol) solves exactly that: it defines a standard for LLMs to connect to external data sources in a structured, secure way. But MCP on its own is just a protocol. What makes it actually useful are MCP Servers — the processes that expose tools, resources, and data to the model.
That’s what this post is about. What they are, how they work, which ones are worth installing first, and how to build your own.
MCP Server vs MCP Client: clearing up the confusion
Before getting into it, there’s a distinction that trips a lot of people up and is worth getting straight from the start.
An MCP Server is an independent process that exposes capabilities to the model: reading files, running SQL queries, making API calls, interacting with Git… Its job is to offer tools and resources.
An MCP Client is the application that uses the model and connects to those servers. Claude Desktop, Claude Code, Cursor, or any MCP-compatible IDE acts as a client.
The model itself doesn’t connect directly to anything. The client acts as intermediary: it receives the available tools from the server, includes them in the model’s context, and when the model decides to use one, the client executes the call and returns the result.
When people say “install an MCP server in Claude,” what they’re actually doing is configuring the client (Claude Desktop or Claude Code) to automatically launch and manage that server.
How the architecture works
The flow is simpler than it sounds:
1. The server starts. When you open Claude Desktop (or Claude Code), the client launches the configured servers as local processes. Each server implements the MCP protocol.
2. The client discovers capabilities. The client queries the server to list what tools and resources it exposes. For example, the filesystem server might expose read_file, write_file, and list_directory.
3. The model receives the catalog. Those tools reach the model as part of its context, just like any other information.
4. The model decides when to use them. If the user asks “what files are in my projects folder?”, the model generates a call to list_directory. The client executes it on the server and returns the result to the model.
5. The model responds with real information. No guessing. It looks it up.
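The division of labor is easier to see in code. Here is a toy, in-process sketch in Python — the class names and message shapes are invented for illustration, and real MCP runs the server as a separate process speaking JSON-RPC 2.0 — but the discover-then-dispatch pattern is the same:

```python
import json

# Illustrative sketch of the MCP flow: a "server" exposes a tool
# catalog, the "client" lists it, and a model-chosen call is
# dispatched back to the server. All names here are hypothetical.

class ToyServer:
    def __init__(self):
        # Tool name -> (JSON-Schema-style input description, handler)
        self._tools = {
            "list_directory": (
                {"type": "object",
                 "properties": {"path": {"type": "string"}},
                 "required": ["path"]},
                lambda args: ["notes.md", "main.py"],  # canned result
            ),
        }

    def list_tools(self):
        # Step 2: capability discovery
        return [{"name": name, "inputSchema": schema}
                for name, (schema, _) in self._tools.items()]

    def call_tool(self, name, arguments):
        # Step 4: the client forwards the model's tool call
        _, handler = self._tools[name]
        return handler(arguments)

class ToyClient:
    def __init__(self, server):
        self.server = server
        # Step 3: the catalog goes into the model's context
        self.catalog = server.list_tools()

    def execute(self, tool_name, arguments):
        result = self.server.call_tool(tool_name, arguments)
        # Step 5: the result is serialized back into the model's context
        return json.dumps(result)

client = ToyClient(ToyServer())
print([t["name"] for t in client.catalog])  # ['list_directory']
print(client.execute("list_directory", {"path": "~/projects"}))
```

The key point the sketch makes: the model only ever sees the catalog and the serialized results. The client owns the actual execution.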
All of this happens locally if you’re using local servers. Data doesn’t leave to any external server just because you’re using MCP — though it does get sent to the LLM as context, which is the part you need to be clear on from a privacy standpoint.
The best MCP Servers to start with
With over 97 million installs as of March 2026 and an ecosystem growing every week, choosing can feel overwhelming. Here are the most useful ones by category, prioritizing stability and real-world use cases.
Files and code
Filesystem — Anthropic’s official server. Reads and writes files in paths you explicitly configure. Essential if you work with local projects. It has configurable access controls: you define exactly which directories it can touch.
Git — Lets the model read, search, and manipulate Git repositories. Useful for code review, understanding change history, or searching through commits. Also official from Anthropic.
Databases
PostgreSQL — One of the most widely used in development environments. Enables natural language queries against your database. Recommendation: always use a read-only user for this server, especially in production.
MongoDB — Similar to the PostgreSQL server but for MongoDB and Atlas. Native authentication and access control support. Useful if your stack is more document-oriented.
APIs and external services
GitHub — Connects the model to your repositories, issues, and pull requests. It’s the first one most developer-focused guides recommend installing. With a token scoped to minimum permissions (Contents read, Issues read, Pull requests read) you’ve got an assistant that actually knows your codebase.
Fetch — Lets the model retrieve content from URLs and convert it into an efficient format for context. Simple but surprisingly useful for research or lightweight scraping.
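To make the Fetch idea concrete: the value is stripping a page down to text that costs fewer tokens as context. The real server converts to Markdown and handles truncation, so treat this stdlib sketch as the core idea only:

```python
from html.parser import HTMLParser

# Rough sketch of what a fetch-style tool does after retrieving a URL:
# drop markup (and script/style noise) so only readable text remains.

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

page = ("<html><head><style>p{color:red}</style></head>"
        "<body><h1>Docs</h1><p>Hello, MCP.</p></body></html>")
print(html_to_text(page))  # Docs Hello, MCP.
```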
Zapier MCP — If you work across a lot of SaaS tools, this server connects the model to over 7,000 app actions through a single entry point. Powerful, but be careful about the permissions you grant.
Development and AI
Context7 — Solves one of the most frustrating problems in LLM-assisted development: the model using outdated documentation. Context7 exposes up-to-date docs for popular frameworks and libraries. If you use Cursor AI, this server makes a real difference.
Memory — A persistent memory system based on a knowledge graph. The model can store and retrieve information across sessions. Useful for agents that need long-term context.
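A toy version of that idea, assuming a deliberately simplified schema (the real Memory server’s entity/relation model and tool names differ): store observations and relations, serialize them between sessions, and query them back.

```python
import json
from collections import defaultdict

# Hypothetical mini knowledge graph in the spirit of the Memory server:
# entities carry observations, edges carry typed relations, and the
# whole graph round-trips through JSON so it survives across sessions.

class KnowledgeGraph:
    def __init__(self):
        self.observations = defaultdict(list)  # entity -> list of facts
        self.relations = []                    # (source, relation, target)

    def add_observation(self, entity, fact):
        self.observations[entity].append(fact)

    def add_relation(self, source, relation, target):
        self.relations.append((source, relation, target))

    def neighbors(self, entity):
        return [t for s, r, t in self.relations if s == entity]

    def dump(self):
        return json.dumps({"observations": self.observations,
                           "relations": self.relations})

    @classmethod
    def load(cls, payload):
        data = json.loads(payload)
        kg = cls()
        kg.observations.update(data["observations"])
        kg.relations = [tuple(r) for r in data["relations"]]
        return kg

kg = KnowledgeGraph()
kg.add_observation("project-x", "uses PostgreSQL 16")
kg.add_relation("project-x", "deployed_on", "aws")

restored = KnowledgeGraph.load(kg.dump())   # simulates a new session
print(restored.observations["project-x"])   # ['uses PostgreSQL 16']
print(restored.neighbors("project-x"))      # ['aws']
```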
Sequential Thinking — A reasoning tool that helps the model break complex problems into steps. Improves response quality on tasks that require planning.
Observability and data
dbt MCP — If your stack includes dbt, this server exposes the semantic layer and project graph. Ideal for analytics workflows where the model needs to understand data transformations.
Semgrep — For teams doing security reviews. Lets the model run code analysis with custom rules. Off the beaten path but very solid.
How to set up your first MCP Server
The easiest case is adding official servers. Here’s the process for Claude Desktop or Claude Code.
1. Open the MCP configuration file.
In Claude Desktop, the file is at ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows).
In Claude Code, use the claude mcp add command or edit .claude/mcp.json in your project.
2. Add the servers you want.
Basic structure:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/projects"
      ]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your_token_here"
      }
    },
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://user:password@localhost:5432/your_database"
      ]
    }
  }
}
```
3. Restart the client.
Servers launch when the client starts. Without a restart, it won’t pick them up.
4. Verify they’re working.
In Claude Desktop you’ll see a tools icon in the interface. In Claude Code you can run claude mcp list to see the active servers.
Prerequisite: You need Node.js 18+ installed. Official servers install automatically via npx the first time they’re used.
If you want to build your own server
The official SDK is available for both Python and TypeScript. The minimal structure of an MCP server in TypeScript is manageable: you define tools with their input schemas, implement the handlers, and start the server over stdio or SSE.
For custom use cases — connecting the model to an internal API, exposing data from a legacy system, or integrating with company-specific tooling — building your own server makes sense. The modelcontextprotocol.io docs have quickstart guides for both languages.
If your stack is Python and you have FastAPI experience, the pattern will feel familiar. If you work more in the data ecosystem, this kind of integration fits naturally into a 2026 AI development stack.
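Before reaching for the SDK, it helps to see the shape of what it automates. The dependency-free Python sketch below uses simplified line-delimited messages rather than the SDK’s real JSON-RPC framing; only the method names (tools/list, tools/call) mirror the actual protocol, everything else is illustrative:

```python
import json

# Skeleton of an MCP-style server: declare tools with input schemas,
# dispatch incoming calls, and answer over a stream. The official
# Python SDK (the `mcp` package) handles the real framing for you;
# these message shapes are simplified for illustration.

TOOLS = {
    "echo": {
        "schema": {"type": "object",
                   "properties": {"text": {"type": "string"}},
                   "required": ["text"]},
        "handler": lambda args: args["text"],
    },
}

def handle(message: dict) -> dict:
    if message["method"] == "tools/list":
        return {"tools": [{"name": n, "inputSchema": t["schema"]}
                          for n, t in TOOLS.items()]}
    if message["method"] == "tools/call":
        tool = TOOLS[message["params"]["name"]]
        return {"result": tool["handler"](message["params"]["arguments"])}
    return {"error": f"unknown method {message['method']}"}

def serve(inp, out):
    # One JSON message per line: a minimal stand-in for a stdio transport.
    # Pass sys.stdin / sys.stdout here to serve a real client.
    for line in inp:
        out.write(json.dumps(handle(json.loads(line))) + "\n")
        out.flush()

# Demo against in-memory streams instead of real stdio:
import io
demo_in = io.StringIO('{"method": "tools/list"}\n')
demo_out = io.StringIO()
serve(demo_in, demo_out)
print(demo_out.getvalue().strip())
```

The SDK replaces `handle` and `serve` with proper protocol plumbing; what you keep writing yourself are the schemas and handlers.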
When NOT to use MCP Servers
MCP isn’t the answer to everything. Before adding a new server, it’s worth asking whether you actually need it.
Don’t use MCP when the context you need fits in the prompt. If you’re doing a one-off query on a config file, copy-pasting the content is faster and has less overhead than spinning up a server.
Don’t connect everything just because you can. Three servers is the sweet spot according to most practical references. Five can work, but the token overhead of describing available tools starts eating into your context budget. More than five and you’ll start seeing real degradation in response quality. This connects directly to the hidden costs of multi-agent systems: more tools available doesn’t always mean better output.
Watch your permissions. In January and February 2026, independent audits found that 66% of scanned MCP servers had at least one security finding, with over 30 CVEs identified. The Smithery registry (one of the main directories) and Anthropic’s own official repository are the safest starting points. For database servers, always use read-only credentials.
MCP doesn’t replace a solid RAG pipeline. If your use case is semantic search over large corpora, a dedicated RAG architecture will give you better results than connecting the entire filesystem through MCP. They’re tools for different problems.
Don’t use third-party MCP servers without reviewing the code. The protocol is open and anyone can publish a server. Unlike an npm package where the risk profile is well understood, an MCP server has privileged access to your local environment. Reviewing the code — or at least the commit history — before installing anything unofficial isn’t paranoia, it’s common sense.
If you’re building more complex workflows — say, using Claude Sonnet 4.6 as the backbone or assembling agent pipelines — MCP Servers are the piece that turns the model from a generic assistant into something that genuinely knows your environment.
The ecosystem maturity in 2026 makes the entry point much lower than it was a year ago. Anthropic’s official servers work well out of the box, the documentation has improved significantly, and the JSON configuration pattern is already familiar to all the major IDEs.
The one mistake I keep seeing repeated: installing too many servers too fast. Start with one. Understand how information flows. Then add the next one.
Keep exploring
- MCP (Model Context Protocol): what it is and why it matters — The previous post in this series: understand the protocol before the servers
- Hidden costs of multi-agent systems — Before building complex MCP pipelines, understand where the money goes
- My AI development stack in 2026 — Where MCP Servers fit inside a real working stack