Turn Any REST API into MCP Tools in 5 Minutes
Auto-generate type-safe MCP tools from any OpenAPI 3.0 spec. One command, zero boilerplate. Connect your REST API to Claude, Cursor, Windsurf, and any MCP client. Open source.
Your REST API has 200 endpoints. Your AI agent can call exactly zero of them.
That is the gap MCP was built to close. The Model Context Protocol gives AI models a universal interface for calling external tools, like USB-C for software. But turning your API into MCP tools means writing a tool definition for every single endpoint: input schemas, output parsing, error handling, authentication, and description text that an LLM can actually understand.
For a 50-endpoint API, that is roughly 1,400 lines of hand-written boilerplate. For a 200-endpoint API, it is a full-time job.
@taskade/mcp-openapi-codegen eliminates that job entirely. Point it at any OpenAPI 3.0 spec, a URL or a local file, and it generates the complete set of type-safe MCP tools in seconds. One command. Zero hand-written definitions. The generated TypeScript is human-readable, Prettier-formatted, and ready to commit.
This is how Taskade auto-generates 50+ MCP tools from a single spec. Zero manual maintenance. When the API changes, the tools update automatically.
This guide walks through the full process, from installation to production deployment, and compares every major approach to OpenAPI-to-MCP conversion in 2026.
What Is MCP?
The Model Context Protocol is an open standard introduced by Anthropic in November 2024. It defines how AI models discover and call external tools through a structured JSON-RPC interface.
Think of it this way: before MCP, every AI integration was a custom adapter. Connecting Claude to your database meant writing one integration. Connecting it to your CRM meant writing another. Each tool had its own schema, its own error format, its own authentication flow. The result was a fragmented ecosystem where AI agents could not interoperate with each other's tools.
MCP fixes this with a single protocol:
┌─────────────────────────────────────────────────────────────┐
│                                                             │
│              MCP: THE UNIVERSAL TOOL PROTOCOL               │
│                                                             │
│   ┌──────────┐     ┌──────────────┐     ┌───────────────┐   │
│   │ Claude   │────▶│  MCP Server  │────▶│   Your API    │   │
│   │ Cursor   │◀────│   (Tools)    │◀────│  (REST/gRPC)  │   │
│   │ Windsurf │     └──────────────┘     └───────────────┘   │
│   │ VS Code  │                                              │
│   └──────────┘                                              │
│                                                             │
│      One protocol. Any client. Any server. Any tool.        │
│                                                             │
└─────────────────────────────────────────────────────────────┘
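Under the hood, every tool invocation is a plain JSON-RPC 2.0 exchange. Here is a rough sketch of the request and response for a hypothetical createProject tool (payloads shown as TypeScript literals; the tool name and arguments are illustrative, not tied to any specific API):

// What an MCP client sends when it calls a tool
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "createProject", // which tool to invoke
    arguments: { workspaceId: "ws_123", name: "Q1 Planning" },
  },
};

// What the MCP server returns
const toolCallResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: '{"id":"abc123","name":"Q1 Planning"}' }],
  },
};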
The adoption numbers speak for themselves:
- 97M+ MCP SDK downloads per month
- 10,000+ MCP servers in the ecosystem
- Backed by Anthropic, OpenAI, Google, Microsoft
- Supported in Claude Desktop, Cursor, Windsurf, VS Code, Continue.dev, n8n
MCP is not a proposal. It is the standard. And if your API does not have an MCP server, your API is invisible to the fastest-growing class of software users: AI agents.
The Problem: Manual MCP Tool Creation
Here is what a single hand-written MCP tool definition looks like:
server.tool(
"createProject",
"Create a new project in a workspace",
{
workspaceId: z.string().describe("The workspace ID"),
name: z.string().describe("Project name"),
description: z.string().optional().describe("Project description"),
color: z.string().optional().describe("Color hex code"),
templateId: z.string().optional().describe("Template ID to clone"),
},
async ({ workspaceId, name, description, color, templateId }) => {
const response = await fetch(
`https://api.example.com/workspaces/${workspaceId}/projects`,
{
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${API_KEY}`,
},
body: JSON.stringify({ name, description, color, templateId }),
}
);
const data = await response.json();
return { content: [{ type: "text", text: JSON.stringify(data) }] };
}
);
That is 28 lines for a single endpoint. A typical SaaS API has 50-200 endpoints.
| Endpoints | Lines of Boilerplate | Time to Write | Time to Maintain |
|---|---|---|---|
| 10 | ~280 | 2-3 hours | 1 hour/month |
| 50 | ~1,400 | 1-2 days | 4 hours/month |
| 200 | ~5,600 | 1-2 weeks | 2 days/month |
And the boilerplate is the easy part. The hard part is keeping tool descriptions accurate enough for an LLM to use correctly, handling every authentication flow, normalizing complex response types, and updating everything when your API spec changes.
This is a code generation problem. You already have the API specification. The tool definitions should be derived automatically.
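To see why, compare the hand-written tool above with the corresponding OpenAPI operation. Every field the tool definition duplicates (parameter names, types, optionality, descriptions) already lives in the spec. A hypothetical excerpt for the same endpoint, shown here as the parsed spec object:

// Hypothetical OpenAPI 3.0 fragment for the createProject endpoint above
const specFragment = {
  paths: {
    "/workspaces/{workspaceId}/projects": {
      post: {
        operationId: "createProject",
        summary: "Create a new project in a workspace",
        parameters: [
          { name: "workspaceId", in: "path", required: true, schema: { type: "string" } },
        ],
        requestBody: {
          content: {
            "application/json": {
              schema: {
                type: "object",
                required: ["name"],
                properties: {
                  name: { type: "string", description: "Project name" },
                  description: { type: "string", description: "Project description" },
                  color: { type: "string", description: "Color hex code" },
                  templateId: { type: "string", description: "Template ID to clone" },
                },
              },
            },
          },
        },
      },
    },
  },
};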
The Solution: @taskade/mcp-openapi-codegen
┌───────────────────────────────────────────────────────────────┐
│                                                               │
│                     THE CODEGEN PIPELINE                      │
│                                                               │
│   ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐   │
│   │ OpenAPI  │──▶│  Parse   │──▶│ Generate │──▶│   .ts    │   │
│   │ 3.0 Spec │   │ + Deref  │   │  Zod +   │   │  Output  │   │
│   │  (YAML/  │   │  + Flat  │   │   Tool   │   │ (commit- │   │
│   │   JSON)  │   │          │   │   Defs   │   │  table)  │   │
│   └──────────┘   └──────────┘   └──────────┘   └──────────┘   │
│                                                               │
│   Input:  Your existing API spec                              │
│   Output: Type-safe MCP tools (TypeScript + Zod)              │
│   Time:   < 10 seconds                                        │
│                                                               │
└───────────────────────────────────────────────────────────────┘
@taskade/mcp-openapi-codegen reads your OpenAPI 3.0 specification, resolves $ref references, flattens nested schemas into self-contained $defs, and generates a complete TypeScript file with:
- Zod schemas for every input parameter (type-safe validation)
- Tool definitions with LLM-friendly descriptions derived from your spec
- Path parameter handling (e.g., /projects/{projectId})
- Query parameter handling with correct types and defaults
- Request body parsing for POST/PUT/PATCH operations
- Prettier formatting: the output is clean, readable, and ready to commit
The key difference from runtime proxies (which translate API calls on the fly) is that codegen produces actual source code you own. You can inspect every generated tool, customize descriptions, and version-control changes alongside your codebase.
Step-by-Step Tutorial
Step 1: Install
npm install @taskade/mcp-openapi-codegen
# or
yarn add @taskade/mcp-openapi-codegen
Step 2: Create Your Codegen Script
Create a file called generate-mcp-tools.ts:
import { generateMcpTools } from "@taskade/mcp-openapi-codegen";
import * as path from "path";
async function main() {
await generateMcpTools({
// Point to your OpenAPI spec (URL or local file)
specPath: "https://api.example.com/openapi.yaml",
// Where to write the generated tools
outputPath: path.resolve(__dirname, "tools.generated.ts"),
});
console.log("MCP tools generated successfully!");
}
main();
Step 3: Run the Generator
npx tsx generate-mcp-tools.ts
The generator reads your spec, resolves all references, and writes a complete tools.generated.ts file. Every endpoint in your spec becomes a type-safe MCP tool definition.
Step 4: Create Your MCP Server
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { registerTools } from "./tools.generated.js";
const server = new McpServer({
name: "my-api-mcp-server",
version: "1.0.0",
});
// Register all generated tools with a custom fetch function
registerTools(server, {
baseUrl: "https://api.example.com",
fetchFn: async (url, init) => {
return fetch(url, {
...init,
headers: {
...init?.headers,
Authorization: `Bearer ${process.env.API_KEY}`,
},
});
},
});
// Start the server
const transport = new StdioServerTransport();
server.connect(transport);
Step 5: Connect to Claude Desktop
Add this to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"my-api": {
"command": "npx",
"args": ["tsx", "/path/to/your/server.ts"],
"env": {
"API_KEY": "your-api-key-here"
}
}
}
}
Step 6: Test It
Restart Claude Desktop. Open a new conversation and ask:
"List all my projects and create a new one called Q1 Planning."
Claude will discover your MCP tools, call the correct endpoints, parse the responses, and execute multi-step workflows, all without you writing a single tool definition by hand.
The same configuration works with Cursor, Windsurf, Continue.dev, and any other MCP-compatible client.
Advanced Features
Selective Tool Exposure (Allowlists)
A 200-endpoint API should not expose 200 tools to an LLM. Flooding the context window with tool schemas causes tool confusion and degraded performance; a curated subset dramatically improves how reliably the agent picks the right tool.
Filter endpoints with an allowlist:
await generateMcpTools({
specPath: "https://api.example.com/openapi.yaml",
outputPath: "./tools.generated.ts",
// Only expose these operations
allowOperations: [
"listProjects",
"createProject",
"getProjectById",
"listTasks",
"createTask",
"updateTask",
],
});
Or use a predicate function for dynamic filtering:
await generateMcpTools({
specPath: "https://api.example.com/openapi.yaml",
outputPath: "./tools.generated.ts",
allowOperations: (operationId, method, path) => {
// Exclude all DELETE operations
if (method === "delete") return false;
// Exclude admin-only endpoints
if (path.startsWith("/admin")) return false;
return true;
},
});
Response Normalizers
This is the feature that makes @taskade/mcp-openapi-codegen unique. Response normalizers let you inject contextual information into API responses before the LLM sees them. This transforms raw JSON data into actionable intelligence.
registerTools(server, {
baseUrl: "https://api.example.com",
responseNormalizers: {
listProjects: (data) => {
// Add clickable URLs so Claude can link to real resources
return data.items.map((project) => ({
...project,
url: `https://app.example.com/projects/${project.id}`,
hint: "You can open this project in the browser.",
}));
},
},
});
Without normalizers, Claude returns: "Project ID: abc123". With normalizers, Claude returns: "Here's your project: [Q1 Planning](https://app.example.com/projects/abc123)". The difference is the gap between information and action.
Custom Authentication
Inject any auth strategy, whether API keys, OAuth2 tokens, session cookies, or custom headers:
registerTools(server, {
baseUrl: "https://api.example.com",
fetchFn: async (url, init) => {
const token = await getOAuth2Token(); // Your auth logic
return fetch(url, {
...init,
headers: {
...init?.headers,
Authorization: `Bearer ${token}`,
"X-Custom-Header": "value",
},
});
},
});
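The getOAuth2Token call above is your own code. For a client-credentials flow, a minimal in-memory cache might look like this (the token endpoint and credential environment variables are placeholders; adapt them to your identity provider):

// Hypothetical client-credentials token helper with in-memory caching
let cachedToken: { value: string; expiresAt: number } | null = null;

async function getOAuth2Token(): Promise<string> {
  // Reuse the token until shortly before it expires
  if (cachedToken && Date.now() < cachedToken.expiresAt - 30_000) {
    return cachedToken.value;
  }

  const res = await fetch(process.env.OAUTH_TOKEN_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: process.env.OAUTH_CLIENT_ID!,
      client_secret: process.env.OAUTH_CLIENT_SECRET!,
    }),
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);

  const { access_token, expires_in } = await res.json();
  cachedToken = { value: access_token, expiresAt: Date.now() + expires_in * 1000 };
  return access_token;
}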
Monorepo Support
For large projects, split the codegen and server into separate packages. Taskade's own MCP repository uses this pattern:
packages/
├── openapi-codegen/     # The generator library (npm: @taskade/mcp-openapi-codegen)
├── taskade-mcp-tools/   # Generated tools for Taskade's API
└── mcp-server/          # The MCP server (npm: @taskade/mcp-server)
Each package has a clear responsibility. When the API spec changes, you re-run codegen in taskade-mcp-tools/ and the server package picks up the new tools automatically.
Real-World: How Taskade Uses This in Production
Taskade does not just publish @taskade/mcp-openapi-codegen. We use it every day to power our own MCP server, @taskade/mcp-server, which exposes 50+ tools to Claude, Cursor, and every MCP-compatible client.
Here is the actual pipeline:
┌─────────────────────────────────────────────────────────────────┐
│                                                                 │
│                TASKADE MCP PIPELINE (PRODUCTION)                │
│                                                                 │
│   1. Fetch latest spec                                          │
│      $ yarn fetch:openapi                                       │
│      → Downloads taskade.com/api/documentation/yaml             │
│                                                                 │
│   2. Generate tools                                             │
│      $ yarn generate:taskade-mcp-tools                          │
│      → Parses spec → allowlist filter → codegen                 │
│      → Output: tools.generated.ts (50+ tools)                   │
│                                                                 │
│   3. Commit + publish                                           │
│      → tools.generated.ts → @taskade/mcp-server on npm          │
│      → Users install: npx @taskade/mcp-server                   │
│                                                                 │
│   API changes? Re-run steps 1-3. Zero manual updates.           │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
What the 50+ tools cover:
| Domain | Tools | Examples |
|---|---|---|
| Workspaces | 3 | List, navigate, create projects |
| Projects | 11 | CRUD, copy, share, complete, get blocks/tasks |
| Tasks | 19 | Full lifecycle: assignees, dates, notes, custom fields |
| AI Agents | 15 | Generate, create, publish, manage knowledge, conversations |
| Templates | 2 | List and instantiate |
| Media | 3 | List, get, delete |
Response normalizers in action: When Claude lists your projects, the normalizer injects direct URLs so Claude can say "Here is your project: Q1 Planning" instead of dumping raw JSON. When an agent is created, the normalizer adds the public embed URL. Every response becomes actionable.
The result: Taskade users can manage their entire workspace through natural language in Claude Desktop or Cursor (creating projects, assigning tasks, training agents, and triggering automations), all through MCP tools that were auto-generated from a single OpenAPI spec.
Competitive Landscape: OpenAPI-to-MCP Tools in 2026
The MCP ecosystem is growing fast. Here is how every major approach to OpenAPI-to-MCP conversion compares:
| Tool | Approach | Language | Auth | Selective Exposure | Response Customization | Production Use |
|---|---|---|---|---|---|---|
| @taskade/mcp-openapi-codegen | Codegen (source output) | TypeScript | Custom fetch | Allowlist + predicate | Response normalizers | 50+ tools at Taskade |
| openapi-mcp-generator | Codegen | TypeScript | Basic | x-mcp extension flags | No | Community |
| openapi-mcp-server (AWS) | Runtime proxy | TypeScript | API key, Bearer | allowedOperations array | No | AWS-backed |
| FastMCP | Runtime (Python) | Python | Built-in | Route-level | Custom response types | Python ecosystem |
| Speakeasy | Managed codegen | Multi-lang | OAuth2 + custom | x-speakeasy-mcp | Managed transforms | Enterprise |
| Stainless | SDK-integrated | TypeScript | Full OAuth2 | SDK-level | Sandboxed execution | Enterprise |
When to use each:
- @taskade/mcp-openapi-codegen: You want inspectable, committable TypeScript with full customization. You have a Node.js codebase and want zero vendor lock-in. You need response normalizers.
- openapi-mcp-generator: Quick start for simple APIs. Good community momentum (500+ stars).
- AWS openapi-mcp-server: You trust AWS and need dynamic tool creation at runtime without a build step.
- FastMCP: You are in the Python ecosystem and want a Pythonic developer experience.
- Speakeasy / Stainless: Enterprise teams with complex APIs, a budget for managed services, and a need for OAuth2 flows handled automatically.
Production Hardening
Generating tools is step one. Running them safely in production requires attention to security, reliability, and observability.
Security
┌────────────────────────────────────────────────────────┐
│                                                        │
│                 MCP SECURITY CHECKLIST                 │
│                                                        │
│   ✓ Scope tokens to minimum required permissions       │
│   ✓ Use allowlists; never expose all endpoints         │
│   ✓ Validate inputs with Zod schemas (built-in)        │
│   ✓ Sanitize responses to prevent data leakage         │
│   ✓ Rate limit tool calls at the server level          │
│   ✓ Log every tool invocation for audit trails         │
│   ✓ Never embed secrets in generated code              │
│   ✓ Use environment variables for all credentials      │
│                                                        │
└────────────────────────────────────────────────────────┘
MCP servers run with the privileges of the API tokens they hold. In January 2026, security researchers disclosed critical vulnerabilities in several popular MCP servers, including prompt injection paths and SSRF vectors. The lesson: treat your MCP server as a production service, not a toy.
The generated Zod schemas from @taskade/mcp-openapi-codegen provide input validation out of the box. But you should also: (1) scope your API tokens to the minimum required permissions, (2) use allowlists to limit which endpoints are exposed, and (3) sanitize any response data that could contain sensitive information.
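Response sanitization pairs naturally with the normalizer hook from earlier: strip or mask sensitive fields before the model ever sees the payload. A small sketch (the listUsers operation and the fields being redacted are examples, not part of the library):

registerTools(server, {
  baseUrl: "https://api.example.com",
  responseNormalizers: {
    // Drop internal fields and mask emails before the LLM sees user records
    listUsers: (data) =>
      data.items.map(({ internalNotes, ...safe }) => ({
        ...safe,
        email: safe.email ? safe.email.replace(/(.).+(@.*)/, "$1***$2") : undefined,
      })),
  },
});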
Error Handling
Wrap your fetchFn with retry logic and structured error responses:
fetchFn: async (url, init) => {
  for (let attempt = 1; attempt <= 3; attempt++) {
    const response = await fetch(url, { ...init, signal: AbortSignal.timeout(10000) });
    if (response.ok) return response;
    // Retry only transient 5xx failures; surface everything else immediately
    if (response.status < 500 || attempt === 3) {
      const error = await response.text();
      throw new Error(`API error ${response.status}: ${error}`);
    }
    await new Promise((resolve) => setTimeout(resolve, attempt * 500));
  }
  throw new Error("unreachable");
}
Monitoring
Log every tool call with structured metadata: operation name, latency, status code, and input parameters (minus sensitive values). This gives you observability into how AI agents use your API and which tools see the most traffic.
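Because every generated tool routes its HTTP traffic through fetchFn, one wrapper covers the whole server. A sketch of a structured-log wrapper (the log field names are up to you; swap console.log for your logger of choice):

fetchFn: async (url, init) => {
  const started = Date.now();
  const method = init?.method ?? "GET";
  const path = new URL(url.toString()).pathname;
  try {
    const response = await fetch(url, init);
    // One structured log line per tool-initiated API call; no bodies, no headers
    console.log(JSON.stringify({
      event: "mcp_tool_call",
      method,
      path,
      status: response.status,
      latencyMs: Date.now() - started,
    }));
    return response;
  } catch (error) {
    console.error(JSON.stringify({
      event: "mcp_tool_call_error",
      method,
      path,
      latencyMs: Date.now() - started,
      message: error instanceof Error ? error.message : String(error),
    }));
    throw error;
  }
}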
What's Next: MCP Ecosystem in 2026
The MCP ecosystem is evolving rapidly:
- Streamable HTTP transport is replacing SSE as the preferred remote transport, enabling stateless MCP servers that scale horizontally (see the sketch after this list).
- The OpenAPI Initiative's Moonwalk SIG is working on making API specifications natively "agent-ready": richer descriptions, semantic annotations, and tool-level metadata baked into the spec itself.
- Remote MCP servers (hosted, authenticated, shared across clients) are becoming the deployment model for production use cases, replacing local stdio servers.
- Agentic AI Foundation (under the Linux Foundation) now governs the MCP specification, ensuring vendor-neutral evolution.
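On the transport point: the official TypeScript SDK already ships a Streamable HTTP transport, so the stdio server from the tutorial can also be served over HTTP. A rough sketch of the stateless pattern using Express (import path and option names may differ across SDK versions; buildServer() is a hypothetical factory that wraps the Step 4 setup):

import express from "express";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // Stateless mode: a fresh server + transport pair per request, no session tracking
  const server = buildServer(); // hypothetical: creates the McpServer and calls registerTools()
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);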
The direction is clear: every API will need an MCP interface. The question is whether you write 5,600 lines of boilerplate by hand or let a codegen tool do it in 10 seconds.
Quick Start Checklist
┌────────────────────────────────────────────────────────┐
│                                                        │
│                GET STARTED IN 5 MINUTES                │
│                                                        │
│   1. $ npm install @taskade/mcp-openapi-codegen        │
│   2. Point specPath at your OpenAPI spec               │
│   3. $ npx tsx generate-mcp-tools.ts                   │
│   4. Create MCP server with registerTools()            │
│   5. Add to Claude Desktop / Cursor config             │
│   6. Start talking to your API                         │
│                                                        │
│   GitHub: github.com/taskade/mcp                       │
│   npm: @taskade/mcp-openapi-codegen                    │
│                                                        │
└────────────────────────────────────────────────────────┘
Frequently Asked Questions
What is MCP and why does it matter for REST APIs?
MCP (Model Context Protocol) is an open standard that lets AI models call external tools through a structured interface. It was introduced by Anthropic and is now backed by OpenAI, Google, and Microsoft, with 97M+ SDK downloads per month. MCP matters because it gives AI agents like Claude, Cursor, and Windsurf a universal way to interact with any service. Without an MCP server, your API is invisible to the fastest-growing class of software users.
How is @taskade/mcp-openapi-codegen different from runtime proxies?
Runtime proxies (like openapi-mcp-server) translate API calls on the fly without generating source code. Codegen produces actual TypeScript files you can inspect, modify, test, and version-control. This means you see exactly what each tool does, you can customize descriptions for better LLM performance, and you can review changes in code review. Codegen also enables unique features like response normalizers that are impossible in a pure proxy architecture.
Can I use this with APIs I do not own?
Yes. The generator only needs an OpenAPI 3.0 spec it can read, so any third-party API that publishes one works. Supply your own credentials through the custom fetchFn, and use an allowlist to expose only the operations you actually need.
What if my API does not have an OpenAPI spec?
You can write one. Tools like Swagger Editor make it straightforward to describe existing endpoints. Alternatively, many frameworks can generate a spec from existing code: FastAPI produces one automatically, and Express, NestJS, and Spring Boot have well-supported generators driven by annotations or decorators.
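For instance, a NestJS codebase can emit a spec from its existing controllers with @nestjs/swagger; a minimal export script might look like this (AppModule and the output path are placeholders):

import { NestFactory } from "@nestjs/core";
import { DocumentBuilder, SwaggerModule } from "@nestjs/swagger";
import { writeFileSync } from "node:fs";
import { AppModule } from "./app.module"; // your existing application module

async function exportOpenApiSpec() {
  const app = await NestFactory.create(AppModule, { logger: false });
  const config = new DocumentBuilder().setTitle("My API").setVersion("1.0.0").build();
  const document = SwaggerModule.createDocument(app, config);
  // Feed this file straight into @taskade/mcp-openapi-codegen as specPath
  writeFileSync("openapi.json", JSON.stringify(document, null, 2));
  await app.close();
}

exportOpenApiSpec();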
How does Taskade use this?
Taskade uses @taskade/mcp-openapi-codegen to auto-generate 50+ MCP tools from our public API spec. The generated tools power @taskade/mcp-server, which lets users manage their entire Taskade workspace (projects, tasks, AI agents, and automations) through Claude Desktop, Cursor, or any MCP client.
Related Reading
Taskade Developer Resources:
- GitHub: taskade/mcp - Source code, examples, and documentation
- npm: @taskade/mcp-openapi-codegen - The codegen package
- npm: @taskade/mcp-server - Taskade's production MCP server
- Taskade API Documentation - The API spec this codegen powers
MCP Ecosystem:
- Model Context Protocol - Official Site
- MCP Specification
- awesome-mcp-servers - 30K+ star community directory
From the Taskade Blog:
- How to Build Your First AI Agent in 60 Seconds - Getting started with AI agents
- Best AI Agent Builders in 2026 - Comparison of agent platforms
- How to Build AI Agents Faster - Speed up agent development
- Best Vibe Coding Tools 2026 - Compare AI development tools
- The Ultimate Guide to Taskade Genesis - Complete platform reference
- What Is Vibe Coding? - The new era of building software with natural language
Explore Taskade:
- AI Agent Builder - Create intelligent AI teammates with custom tools
- AI App Builder - Build complete apps from one prompt
- Automation Workflows - Connect 100+ integrations
- Community Gallery - 130,000+ apps to clone
