Rapid MCP Server Edge Framework
All-in-one skill for building and managing remote MCP servers with tools and OAuth support. Built for Claude Code with best practices and real-world patterns.
MCP Server Edge Framework
Advanced MCP (Model Context Protocol) server development guide for building edge-deployed context servers that provide AI models with real-time data access, tool execution, and resource management.
When to Use This Skill
Choose MCP Server Edge when:
- Building MCP servers that run at the edge for low-latency AI tool integration
- Creating context providers that give Claude and other AI models access to real-time data
- Implementing resource and tool servers following the MCP specification
- Deploying MCP servers on Cloudflare Workers, Deno Deploy, or edge functions
- Building production-grade MCP servers with authentication and rate limiting
Consider alternatives when:
- Need simple tool integration — use standard MCP server templates
- Need backend API without AI integration — use standard API frameworks
- Need full application server — use traditional Node.js/Python servers
Quick Start
```shell
# Install MCP SDK
npm install @modelcontextprotocol/sdk

# Activate MCP edge framework
claude skill activate rapid-mcp-server-edge-framework

# Build MCP server
claude "Build an MCP server that provides database access and file management tools"
```
Example: MCP Server Implementation
```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'data-tools', version: '1.0.0' },
  { capabilities: { tools: {}, resources: {} } }
);

// Register tools (handlers are keyed by request schema, not by string name)
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'query_database',
      description: 'Execute a read-only SQL query against the database',
      inputSchema: {
        type: 'object',
        properties: {
          query: { type: 'string', description: 'SQL SELECT query' },
          database: { type: 'string', enum: ['analytics', 'users'] },
        },
        required: ['query', 'database'],
      },
    },
    {
      name: 'list_files',
      description: 'List files in a directory',
      inputSchema: {
        type: 'object',
        properties: {
          path: { type: 'string', description: 'Directory path' },
          pattern: { type: 'string', description: 'Glob pattern filter' },
        },
        required: ['path'],
      },
    },
  ],
}));

// executeQuery, listFiles, and getDbSchema are application-specific helpers
// assumed to be implemented elsewhere in your project.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  switch (name) {
    case 'query_database': {
      const result = await executeQuery(args.database, args.query);
      return { content: [{ type: 'text', text: JSON.stringify(result, null, 2) }] };
    }
    case 'list_files': {
      const files = await listFiles(args.path, args.pattern);
      return { content: [{ type: 'text', text: files.join('\n') }] };
    }
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});

// Register resources
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: 'db://analytics/schema',
      name: 'Analytics Database Schema',
      mimeType: 'application/json',
    },
  ],
}));

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const { uri } = request.params;
  if (uri === 'db://analytics/schema') {
    const schema = await getDbSchema('analytics');
    return { contents: [{ uri, text: JSON.stringify(schema), mimeType: 'application/json' }] };
  }
  throw new Error(`Unknown resource: ${uri}`);
});

// Start the server over stdio
const transport = new StdioServerTransport();
await server.connect(transport);
```
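Once the server is running, an MCP client has to be pointed at it. For Claude Desktop, a stdio server like the one above is typically registered under `mcpServers` in `claude_desktop_config.json`; the `./build/server.js` path here is an assumption about your build output:

```json
{
  "mcpServers": {
    "data-tools": {
      "command": "node",
      "args": ["./build/server.js"]
    }
  }
}
```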
Core Concepts
MCP Architecture
| Component | Description | Role |
|---|---|---|
| Server | Provides tools, resources, and prompts | Data/capability provider |
| Client | Connects to servers, invokes tools | AI application (Claude, etc.) |
| Transport | Communication layer (stdio, HTTP, WebSocket) | Connection management |
| Tools | Executable actions with parameters | Functions AI can call |
| Resources | Read-only data sources | Context AI can access |
| Prompts | Pre-defined prompt templates | Reusable interaction patterns |
Tool vs Resource
| Aspect | Tool | Resource |
|---|---|---|
| Purpose | Execute actions | Provide data |
| Side Effects | May modify state | Read-only |
| Parameters | Dynamic input schema | URI-based addressing |
| Caching | Generally not cached | Can be cached |
| Examples | Run query, send email, create file | Database schema, config, docs |
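The read-only contract in the table above can be enforced in code rather than only documented. A minimal sketch, assuming a `query_database`-style tool; the helper name `assertReadOnlyQuery` is hypothetical and the check is a coarse guard, not a substitute for database-level permissions:

```typescript
// Reject anything that is not a single SELECT statement.
function assertReadOnlyQuery(query: string): void {
  const normalized = query.trim().replace(/;\s*$/, '');
  if (!/^select\b/i.test(normalized)) {
    throw new Error('Rejected non-SELECT query; use the execute_mutation tool for writes.');
  }
  if (normalized.includes(';')) {
    throw new Error('Rejected multi-statement query.');
  }
}

assertReadOnlyQuery('SELECT id FROM customers'); // passes
// assertReadOnlyQuery('DROP TABLE customers');  // would throw
```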
Configuration
| Parameter | Description | Default |
|---|---|---|
| transport | Transport type: stdio, http, sse | stdio |
| auth | Authentication method: none, token, oauth | none |
| rate_limit | Requests per minute per client | 60 |
| timeout | Tool execution timeout (seconds) | 30 |
| max_response_size | Maximum response size (bytes) | 1MB |
| logging | Log level: error, warn, info, debug | info |
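The `rate_limit` parameter above can be implemented with a sliding-window counter keyed by client ID. A self-contained sketch; the class and method names are illustrative, not part of the MCP SDK:

```typescript
// Sliding-window rate limiter: allows `limit` requests per `windowMs` per client.
class RateLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit = 60, private windowMs = 60_000) {}

  allow(clientId: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.hits.get(clientId) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(clientId, recent);
      return false; // over budget: caller should return a rate-limit error
    }
    recent.push(now);
    this.hits.set(clientId, recent);
    return true;
  }
}

const limiter = new RateLimiter(2, 60_000);
console.log(limiter.allow('client-a')); // true
console.log(limiter.allow('client-a')); // true
console.log(limiter.allow('client-a')); // false (third call within the window)
```

A call that returns `false` should map to a structured tool error rather than a dropped connection, so the client knows to back off.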
Best Practices
- Make tools idempotent and safe by default — Read operations should be the default. Write operations should require explicit confirmation parameters. A tool called `query_database` should only accept SELECT statements, with a separate `execute_mutation` tool for writes.
- Provide detailed tool descriptions with examples — The AI model uses tool descriptions to decide when and how to use each tool. Include parameter descriptions, expected formats, valid ranges, and usage examples in the tool schema.
- Return structured data from tools — Format tool responses as structured JSON rather than free text. This makes results parseable by the AI model and enables downstream processing. Include metadata (row count, execution time) alongside data.
- Implement proper error handling with actionable messages — Return error messages that help the AI model self-correct: "Query failed: table 'users' not found. Available tables: customers, orders, products" is far more useful than "SQL error."
- Add resource endpoints for context the AI needs before using tools — Expose database schemas, API documentation, and configuration as resources. This gives the AI context to construct valid tool calls without trial and error.
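The error-handling practice above can be applied mechanically: whenever a lookup fails, put the valid options into the error text. A small sketch, with a hypothetical helper name:

```typescript
// Build an error message that tells the model what it got wrong AND what is
// valid, so it can self-correct on the next tool call.
function unknownValueError(kind: string, got: string, valid: string[]): string {
  return `${kind} '${got}' not found. Available ${kind.toLowerCase()}s: ${valid.join(', ')}`;
}

console.log(unknownValueError('Table', 'users', ['customers', 'orders', 'products']));
// Table 'users' not found. Available tables: customers, orders, products
```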
Common Issues
- AI model calls tools with invalid parameters despite schema validation — Add detailed examples in tool descriptions, use enum types for constrained values, and return helpful error messages that guide the model to correct its input.
- MCP server becomes a bottleneck under high request volume — Implement connection pooling for database connections, add response caching for frequently requested resources, and use async/non-blocking I/O for all tool implementations.
- Server crashes don't communicate errors back to the client — Wrap all tool handlers in try/catch blocks and return structured error responses rather than allowing exceptions to propagate. Use the MCP error response format with error codes and descriptive messages.
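The crash issue above suggests a reusable wrapper: catch whatever a handler throws and return it as a structured error result (`isError: true` with text content is the shape MCP uses for tool errors). A sketch under that assumption; the wrapper name is hypothetical:

```typescript
type ToolResult = {
  content: { type: 'text'; text: string }[];
  isError?: boolean;
};

// Wrap a tool handler so exceptions become structured error responses
// instead of killing the server process.
function safeHandler(
  handler: (args: unknown) => Promise<ToolResult>
): (args: unknown) => Promise<ToolResult> {
  return async (args) => {
    try {
      return await handler(args);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return { content: [{ type: 'text', text: `Tool failed: ${message}` }], isError: true };
    }
  };
}
```

Registering every tool through `safeHandler` keeps one forgotten try/catch from taking down the whole server.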