If you’re trying to go from OpenAPI to MCP server, the “hello world” part is easy. The production part is where teams get burned: duplicate side effects, unbounded pagination, vague tool outputs, and auth that’s either too permissive or impossible to rotate.
This post is a copy/paste checklist for turning an OpenAPI spec into agent-ready tools that can run inside real workflows (Claude/Cursor/Codex today, whatever comes next tomorrow).
Why “agent-ready API” is different from “API that works”
A normal API client is written by a developer who:
- reads docs,
- understands edge cases,
- and can manually recover from partial failures.
An agent is different. It will:
- call your tools at 2am,
- retry when it sees timeouts,
- and confidently continue with whatever output you gave it.
Two failure modes show up fast:
- Ambiguous outputs (the agent can’t tell success vs failure)
- Non-idempotent side effects (retries create duplicates)
nNode’s bias is “white-box automation”: each step is inspectable, artifacts are explicit, and workflows can resume from checkpoints. That only works if your MCP tools are boringly predictable.
Start with your OpenAPI spec: 3 edits that pay off immediately
1) Make operationId tool-friendly
Most generators map operationId → tool name. Treat it like a public API.
Good rules of thumb:
- verb_noun style: list_contacts, create_invoice, get_company
- no spaces, no punctuation
- stable over time (renaming breaks clients)
```yaml
paths:
  /contacts:
    get:
      operationId: list_contacts
      summary: List contacts
```
2) Tighten schemas for agent inputs
Agents do better with constraints than with prose.
- mark required fields
- use enums for “mode” fields
- prefer explicit booleans over “truthy” strings
```yaml
components:
  schemas:
    CreateContactRequest:
      type: object
      required: [email]
      properties:
        email:
          type: string
          format: email
        source:
          type: string
          enum: ["website", "import", "api"]
```
3) Standardize error shapes (don’t make agents parse free text)
Give every endpoint a consistent error object.
```yaml
components:
  schemas:
    ApiError:
      type: object
      required: [code, message]
      properties:
        code: { type: string }
        message: { type: string }
        details: { type: object }
```
OpenAPI to MCP server tool-schema design: make tools small and explicit
When you expose an endpoint as a tool, keep it “one intent per tool.”
Good:
- search_companies(query, limit)
- create_invoice(customer_id, line_items, idempotency_key)
Risky:
do_everything(payload)
Agents are better at composing multiple small tools than guessing what a mega-tool expects.
Prefer explicit parameters over implicit context
If a tool “implicitly” uses the current user, current workspace, or hidden defaults, you’ll get hard-to-debug behavior.
Instead, include the identity you’re acting on:
- account_id
- workspace_id
- project_id
It’s slightly more verbose—but dramatically more reliable.
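As a sketch of what explicit identity looks like in a tool handler (function and parameter names here are hypothetical), the caller always states which workspace it is acting on, and the handler clamps inputs to safe bounds:

```python
# Sketch: explicit identity parameters (all names hypothetical).
# The tool never reads "current workspace" from hidden state; the
# caller must say which workspace it is acting on.

def search_companies(workspace_id: str, query: str, limit: int = 25) -> dict:
    """Search companies within one explicit workspace."""
    if not workspace_id:
        raise ValueError("workspace_id is required")
    limit = max(1, min(limit, 100))  # clamp to a safe range
    # ... a real implementation would call the upstream API here ...
    return {"workspace_id": workspace_id, "query": query, "limit": limit}
```

If the agent passes the wrong workspace, the mistake is visible in the call arguments instead of buried in server-side session state.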
Auth & secrets: practical patterns (read vs write)
Production MCP servers are usually used by multiple clients (Claude Desktop, Cursor, IDE agents, CI bots). Design auth so you can:
- scope access,
- rotate credentials,
- and audit actions.
Checklist:
- Create separate credentials for read tools vs write tools.
- Put “dangerous write tools” behind an extra gate (approval, allowlist, or separate server).
- Use short-lived tokens where possible.
- Log actor, token_id, request_id, and tool_name for every call.
If you support OAuth, great—just don’t assume you’ll always have it. Many clients still rely on bearer tokens in config.
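The logging item above can be one structured record per tool call. A minimal sketch (field names and the `req_` prefix are assumptions, matching the output contract later in this post):

```python
import json
import time
import uuid

def audit_log(actor: str, token_id: str, tool_name: str, outcome: str) -> str:
    """Emit one structured audit record per tool call (field names hypothetical)."""
    record = {
        "ts": time.time(),
        "request_id": f"req_{uuid.uuid4().hex[:12]}",
        "actor": actor,
        "token_id": token_id,
        "tool_name": tool_name,
        "outcome": outcome,
    }
    return json.dumps(record)  # ship this line to whatever log pipeline you use
```

One JSON line per call is enough to answer “which token did this, and when?” during an incident.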
Pagination + rate limits: make tools boring
Agents love “list all X” until they accidentally fetch 200k rows.
Pick a policy: page tools vs “list_all” tools
Option A (safer default): expose only page-based tools.
```
// list_contacts_page
{
  "cursor": "string | null",
  "limit": 100
}
```
Option B (convenient): also expose a list_all_contacts tool, but enforce:
- server-side max pages
- server-side max items
- a clear truncation signal in the output
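Option B can be built on top of the page tool. A minimal sketch, assuming a `fetch_page(cursor)` callable and hard caps you’d tune for your API (the cap values and field names here are assumptions):

```python
# Sketch: a bounded list_all built on a page-based tool.
# MAX_PAGES / MAX_ITEMS and the fetch_page signature are assumptions.

MAX_PAGES = 10
MAX_ITEMS = 1000

def list_all_contacts(fetch_page) -> dict:
    """fetch_page(cursor) -> (items, next_cursor). Enforces hard caps
    and returns an explicit truncation signal instead of silently stopping."""
    items, cursor, pages = [], None, 0
    truncated = False
    while True:
        page, cursor = fetch_page(cursor)
        items.extend(page)
        pages += 1
        if cursor is None:
            break  # upstream says there is no more data
        if pages >= MAX_PAGES or len(items) >= MAX_ITEMS:
            truncated = True  # tell the agent the list is incomplete
            break
    return {"items": items[:MAX_ITEMS], "truncated": truncated}
```

The `truncated` flag is the important part: the agent can report “first 1,000 of more” instead of treating a capped list as the full dataset.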
Rate limits + retries
You need consistent rules for retries or the agent will invent its own:
- retry on timeouts / 429 / 5xx with exponential backoff
- do not retry on validation errors (4xx) or known “duplicate” errors
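Those two rules fit in a few lines. A sketch, assuming the caller maps network timeouts to a retriable status code before classification (status sets and delays are assumptions you should tune):

```python
import time

# Assumption: rate limits and transient server errors are retriable;
# validation errors (4xx) are not, because retrying them cannot succeed.
RETRIABLE_STATUSES = {429, 500, 502, 503, 504}

def should_retry(status: int) -> bool:
    """Retry only on rate limits and server errors, never on validation errors."""
    return status in RETRIABLE_STATUSES

def call_with_retries(do_request, max_attempts: int = 4, base_delay: float = 0.5):
    """do_request() -> (status, body). Exponential backoff between attempts."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if not should_retry(status):
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return status, body
```

Putting this in the server (rather than hoping each client does it) is what keeps the agent from inventing its own retry loop.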
Idempotency: the founder’s insurance policy
If a tool creates side effects (send email, charge card, create CRM record), retries must be safe.
When you need idempotency keys
Use them for any tool that can cause:
- money movement
- outbound messaging
- irreversible writes
Where idempotency lives
Two common patterns:
- Header-based (preferred when your upstream supports it)
- Request field (idempotency_key) that you store server-side
Example header-based pattern:
```http
POST /invoices
Idempotency-Key: 7b1c2d3e-...
Content-Type: application/json

{ "customer_id": "cus_123", "line_items": [ ... ] }
```
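The request-field pattern needs a small amount of server-side state. A minimal in-memory sketch (in production this dict would be a database table with a TTL; the invoice fields are hypothetical):

```python
# Sketch of the request-field idempotency pattern: store the first result
# per key and replay it on retries, so the side effect happens exactly once.

_results: dict[str, dict] = {}  # idempotency_key -> stored result

def create_invoice(customer_id: str, line_items: list, idempotency_key: str) -> dict:
    if idempotency_key in _results:
        return _results[idempotency_key]  # replay: no second side effect
    invoice = {
        "invoice_id": f"inv_{len(_results) + 1}",
        "customer_id": customer_id,
        "line_items": line_items,
    }
    _results[idempotency_key] = invoice  # record before returning
    return invoice
```

When the agent retries after a timeout, it gets back the same invoice instead of creating a duplicate.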
Upsert beats “create” for many workflows
Agents often don’t know if something already exists.
Instead of create_contact, consider an upsert_contact_by_email tool:
- fewer duplicates
- fewer “search then create” race conditions
- simpler downstream logic
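A sketch of the upsert shape (the in-memory store and field handling are assumptions; a real version would hit your contacts API or database):

```python
# Sketch: upsert keyed on email. Retrying the same call updates the
# existing record instead of creating a duplicate.

_contacts: dict[str, dict] = {}  # email -> contact record

def upsert_contact_by_email(email: str, **fields) -> dict:
    """Create the contact if missing, otherwise merge fields into it."""
    contact = _contacts.setdefault(email, {"email": email})
    contact.update(fields)  # last write wins for each field
    return {"contact": contact, "total_contacts": len(_contacts)}
```

The agent never has to ask “does this contact exist yet?” before writing, which removes a whole class of search-then-create races.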
Debuggable tool outputs: return JSON first, human text later
Most MCP failures aren’t “the API broke.” They’re “the tool returned something the agent misread.”
Normalize success and error into one shape
Here’s a simple output contract you can apply to every tool:
```json
{
  "ok": true,
  "tool": "create_invoice",
  "request_id": "req_...",
  "request_summary": {
    "customer_id": "cus_123",
    "idempotency_key": "7b1c..."
  },
  "data": {
    "invoice_id": "inv_456",
    "status": "draft"
  },
  "warnings": []
}
```
And on failure:
```json
{
  "ok": false,
  "tool": "create_invoice",
  "request_id": "req_...",
  "error": {
    "code": "RATE_LIMITED",
    "message": "Too many requests",
    "retry_after_seconds": 10
  }
}
```
Key idea: keep outputs machine-first. If you want a nice explanation for a human, do it in a separate agent step.
This maps naturally to nNode-style workflows, where every step produces a named artifact (e.g., CREATE_INVOICE_RESULT) that you can inspect, diff, and re-run from a checkpoint.
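One way to enforce this contract everywhere is a single wrapper that every tool handler runs through. A sketch, assuming handlers either return data or raise (mapping exception types to error codes is a simplification you’d replace with your own taxonomy):

```python
import uuid

def envelope(tool: str, request_summary: dict, call):
    """Run call() and normalize success/failure into one machine-first shape.
    Field names follow the contract above; the error-code mapping is an
    assumption (here: the exception class name)."""
    request_id = f"req_{uuid.uuid4().hex[:12]}"
    try:
        data = call()
        return {"ok": True, "tool": tool, "request_id": request_id,
                "request_summary": request_summary, "data": data, "warnings": []}
    except Exception as exc:
        return {"ok": False, "tool": tool, "request_id": request_id,
                "error": {"code": type(exc).__name__, "message": str(exc)}}
```

Because every tool goes through the same wrapper, the agent only ever has to check one field: `ok`.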
The 15-minute production checklist (copy/paste)
Use this as a pre-launch gate for any “OpenAPI to MCP server” integration:
- Operation IDs are stable and tool-friendly
- Request schemas have required fields + enums
- Error shape is consistent across endpoints
- Auth supports rotation; read/write are separated
- Write tools require idempotency keys or upsert semantics
- Pagination policy is explicit; hard limits exist
- Retry policy is explicit; non-retriable errors are labeled
- Tool outputs return normalized JSON with ok, request_id, and request_summary
- Audit logging records actor, tool, request_id, and outcomes
- (Optional) Dry-run mode exists for destructive actions
Where nNode fits (if you’re building more than a demo)
Once your MCP server is production-safe, the real leverage comes from turning “tool calls” into a repeatable, resumable workflow:
- one agent researches,
- one agent calls tools,
- one agent verifies results,
- one agent drafts the final output,
- and a human approves only where it matters.
That’s the core nNode idea: you don’t scale by hoping a single agent never makes mistakes—you scale by building inspectable steps that you can debug and rerun.
If you’re building agent automations on top of APIs and you want them to be operable (not just impressive), take a look at nnode.ai.