
User Guide - API Docs to MCP Server

Overview

This guide walks you through using the API Docs to MCP Server platform to convert your OpenAPI specifications into live MCP servers that AI agents can consume. The platform supports both direct JWT authentication and OAuth 2.1 with PKCE for seamless integration with Claude Desktop and other AI agents.

Table of Contents

  1. Getting Started
  2. Uploading API Specifications
  3. OAuth Integration for AI Agents
  4. Managing Deployments
  5. Analytics Dashboard
  6. Troubleshooting

Quick Start with Claude Desktop

Step 1: Create an MCP Server

  1. Sign in to your account
  2. Go to the Dashboard
  3. Upload your OpenAPI specification (JSON or YAML)
  4. Configure authentication (if required)
  5. Test your credentials - Click “Test Connection” to validate before deploying
  6. Click “Generate MCP Server”
  7. Wait for deployment to complete (usually under 1 minute)

Step 2: Verify Credentials Work

IMPORTANT: Always test credentials before connecting to Claude:
  1. Pre-deployment testing:
    • After configuring credentials, click “Test Connection”
    • Wait for validation result
    • Only proceed if test passes
  2. Post-deployment testing:
    • Find your deployment card in dashboard
    • Click “Test Credentials” button
    • Verify credentials still work
    • Update if test fails

Step 3: Configure Claude Desktop

  1. Get Your MCP Server URL
    • After deployment, copy your MCP server URL from the dashboard
    • It will look like: https://...supabase.co/functions/v1/mcp-router/your-api-xxxxx
  2. Get OAuth Credentials
    • In your deployment dashboard, scroll to “OAuth 2.1 Client”
    • Select “Claude Desktop” (or “Both”)
    • Click “Set Up OAuth”
    • Copy the Client ID
  3. Update Claude Configuration
    Edit your Claude Desktop config file:
    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json
    {
      "mcpServers": {
        "your-api": {
          "url": "https://...supabase.co/functions/v1/mcp-router/your-api-xxxxx",
          "transport": "http",
          "oauth": {
            "client_id": "mcp_xxxxx-xxxxx-xxxxx",
            "authorization_url": "https://...supabase.co/functions/v1/mcp-oauth-server/authorize",
            "token_url": "https://...supabase.co/functions/v1/mcp-oauth-server/token",
            "scopes": ["openid", "email"]
          }
        }
      }
    }
    
    Note: The .well-known/mcp-server discovery endpoint is automatically detected by MCP clients - don’t include it in the base URL.
  4. Restart Claude Desktop
  5. Authorize Access
    • Claude will open an authorization page
    • Sign in with your Tydli account
    • Approve the access request
    • Claude will now have access to your MCP server tools

Step 4: Use Your API in Claude

Once configured, you can use your API naturally in conversations:
User: Get the list of users from the API

Claude: I'll use the get_users tool to fetch that for you.
[Calls your API endpoint]
Here are the users: ...

Account Setup

Sign Up
  1. Navigate to the application
  2. Click “Get Started” or the sign-in button
  3. Enter your email address and create a password
  4. Check your email for the verification link
  5. Click the verification link to confirm your email address
  6. Return to the application and log in
Important: Email verification is required. Unverified accounts cannot access the dashboard.
Profile Management
  • Your profile is automatically created with your display name
  • Access your profile info from the header

Understanding the Interface

Header Section
  • Shows your email address and current login status
  • Sign out button for account management
Hero Section
  • Project overview and value proposition
  • Quick access to getting started
Upload Section
  • Three methods to import your OpenAPI specification
  • Validation and generation controls
Deployment Dashboard
  • Real-time status of all your deployments
  • Management controls and monitoring

Uploading API Specifications

Tydli supports multiple ways to get your API into an MCP server. Choose the method that works best for your documentation.

Method 1: Any Format (AI)

Best for: Teams without an existing OpenAPI specification. Don’t have a spec? No problem! Our AI can analyze any API documentation and automatically generate everything you need.
Supported Formats
  • 📄 PDF — Official API documentation, generated docs
  • 📝 Word (.docx) — Internal API documentation
  • 📋 Markdown (.md) — GitHub READMEs, developer docs
  • 📃 Plain Text (.txt) — Simple API references
  • 🌐 HTML (.html) — Saved web documentation
  • 📊 JSON/YAML — Partial specs, config files
How to Use
  1. Click the ”✨ Any Format (AI)” tab
  2. Choose your input method:
    • File Upload — Drag and drop or browse for files (max 5 MB)
    • Paste Text — Copy documentation from any source
    • From URL — Link to GitHub READMEs or documentation pages
  3. Click “Generate MCP Server with AI”
  4. Review the AI’s extraction (confidence score, endpoints found)
  5. Deploy your MCP server!
Daily Limits

Method 2: OpenAPI File Upload

Best for: Teams with existing OpenAPI/Swagger specifications
Supported Formats
  • .json files (OpenAPI JSON format)
  • .yaml and .yml files (OpenAPI YAML format)
  • Maximum file size: 10MB
How to Upload
  1. Click the “File Upload” tab
  2. Drag and drop your file onto the upload area, or
  3. Click “Choose File” to browse your computer
  4. Selected file details will be displayed
  5. Use the X button to remove and select a different file
File Requirements
  • Must be valid JSON or YAML syntax
  • Must contain a valid OpenAPI 3.0+ specification
  • Should include required fields: info.title, info.version, paths (see the minimal example below)
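
For reference, here is a minimal JSON spec that satisfies these requirements (the title, path, and operation are illustrative):
{
  "openapi": "3.0.3",
  "info": {
    "title": "Example API",
    "version": "1.0.0"
  },
  "paths": {
    "/users": {
      "get": {
        "summary": "List users",
        "responses": {
          "200": { "description": "OK" }
        }
      }
    }
  }
}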

Method 3: Paste Content

How to Use
  1. Click the “Paste JSON/YAML” tab
  2. Copy your OpenAPI specification content
  3. Paste it into the large text area
  4. The system will auto-detect JSON vs YAML format
Content Detection
  • JSON: Content starting with { or [
  • YAML: Content not starting with JSON syntax
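
As a rough sketch of this detection logic (an approximation, not the platform’s exact implementation), assuming the pasted content is saved to spec.txt:
# JSON if the first non-whitespace character is '{' or '[', otherwise YAML
first_char=$(tr -d '[:space:]' < spec.txt | head -c 1)
case "$first_char" in
  "{"|"[") echo "Detected: JSON" ;;
  *)       echo "Detected: YAML" ;;
esac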

Method 4: URL Import

How to Use
  1. Click the “From URL” tab
  2. Enter the public URL of your OpenAPI specification
  3. Ensure the URL is publicly accessible (no authentication required)
Supported URLs
  • Direct links to .json files
  • Direct links to .yaml or .yml files
  • API documentation URLs serving raw specifications
  • Must return valid OpenAPI content
Examples
https://api.example.com/swagger.json
https://api.example.com/openapi.yaml
https://raw.githubusercontent.com/user/repo/main/api.yaml
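
Before importing, you can verify that a URL serves raw specification content rather than an HTML page (the URL below is one of the examples above):
# Should print the beginning of the spec, not HTML markup
curl -sL "https://api.example.com/openapi.yaml" | head -n 20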

Validation Process

Running Validation

  1. Select your input method and provide the specification
  2. Click the “Validate Spec” button
  3. Wait for validation to complete (usually 2-5 seconds)

Validation Results

Success Indicators
  • ✅ Green checkmark with “Valid OpenAPI X.X specification detected”
  • Shows number of endpoints found
  • Displays the OpenAPI version detected
Error Indicators
  • ❌ Red X with “Invalid specification format”
  • Lists up to 3 specific errors found
  • Shows count if more than 3 errors exist

Common Validation Errors

Missing Required Fields
  • info.title is required
  • info.version is required
  • paths object must exist and contain at least one path
Version Issues
  • Only OpenAPI 3.0+ is supported (not Swagger 2.0)
  • Version must be specified in openapi field
Syntax Errors
  • Invalid JSON syntax
  • Invalid YAML syntax
  • Malformed object structures

Generating MCP Servers

Starting Generation

  1. Ensure your specification has been successfully validated
  2. Click “Generate MCP Server” button
  3. The system will show “Generating…” status
  4. Monitor progress in the Deployment Dashboard below

Generation Process

What Happens
  1. System analyzes your OpenAPI specification
  2. Extracts all endpoints, parameters, and schemas
  3. Generates compliant MCP server code
  4. Deploys the server as a live Edge Function
  5. Registers the server in the routing system
  6. Provides you with a live URL
Typical Timeline
  • Validation: 2-5 seconds
  • Code Generation: 10-20 seconds
  • Deployment: 5-15 seconds
  • Total Time: 20-40 seconds

Managing Deployments

Deployment Dashboard

The dashboard shows all your MCP server deployments with:
Status Indicators
  • 🟡 Generating: Server is being created and deployed
  • 🟢 Ready: Server is live and ready to use
  • 🔴 Stopped: Server is deployed but inactive
  • ⚠️ Error: Generation failed or runtime error occurred
Deployment Information
  • Name: Based on your API title or generated automatically
  • Status: Current operational state
  • Created: When the deployment was initiated
  • URL: Live endpoint for AI agents (when ready)
  • Endpoints: Number of API endpoints converted

Customizing Resources & Prompts

In addition to tools (auto-generated from your OpenAPI spec), you can define custom Resources and Prompts to enhance how AI agents interact with your MCP server.

Why Use Resources & Prompts?

Resources solve the “context problem” — AI agents often need background information that isn’t in your API:
  • ✅ API documentation and usage guidelines
  • ✅ Configuration schemas and valid values
  • ✅ Business rules and validation logic
  • ✅ Reference data (country codes, status values, error codes, etc.)
Prompts solve the “consistency problem” — standardize how AI agents interact with your API:
  • ✅ Ensure proper error handling patterns
  • ✅ Maintain consistent output formatting
  • ✅ Guide complex multi-step workflows
  • ✅ Enforce business logic in AI interactions
Example Impact:
| Without | With |
|---------|------|
| AI agent calls wrong endpoint because it doesn’t understand your API structure | AI agent reads your API docs resource and makes correct calls every time |
| AI agent formats responses inconsistently | AI agent uses your “format_response” prompt template for consistent output |
| AI agent doesn’t know your business rules | AI agent reads your “business-rules” resource and validates requests correctly |

Resources vs Prompts: When to Use Which?

| Use Resources When… | Use Prompts When… |
|---------------------|--------------------|
| You have static reference data | You need dynamic templates |
| AI needs to read information | AI needs interaction patterns |
| Content rarely changes | Content varies by context |
| Examples: docs, configs, schemas | Examples: email templates, report formats |
Quick Decision:
  • 📖 “Does the AI need to read this?” → Resource
  • 📝 “Does the AI need to use this as a template?” → Prompt

Accessing Customization

  1. Find your deployment card in the dashboard
  2. Click the “Customize” button (gear icon)
  3. Navigate to the Resources or Prompts tab (marked with “NEW” badges!)

Defining Resources

Resources are read-only data sources that AI agents can access:
| Field | Description |
|-------|-------------|
| Name | Unique identifier (lowercase with hyphens, e.g., api-guide) |
| URI | Resource identifier (e.g., file://docs/guide.md) |
| Title | Human-readable title shown to users |
| Description | Explains what the resource contains and when to use it |
| MIME Type | Content type (text/markdown, application/json, etc.) |
| Content | The actual resource content (Markdown supported) |

Resource Examples

Example 1: API Rate Limits
{
  "name": "rate-limits",
  "uri": "file://docs/rate-limits.md",
  "title": "API Rate Limits",
  "description": "Current rate limiting rules - read this before making bulk requests",
  "mime_type": "text/markdown",
  "content": "## Rate Limits\n\n- **Free tier:** 100 requests/hour\n- **Pro tier:** 1000 requests/hour\n- **Header:** Check `X-RateLimit-Remaining`\n- **Retry:** Use `Retry-After` header value"
}
Result: AI agents will automatically respect rate limits and handle 429 errors correctly.
Example 2: Configuration Schema
{
  "name": "config-schema",
  "uri": "file://schemas/config.json",
  "title": "Valid Configuration Options",
  "description": "JSON schema for configuration objects - reference this when creating configs",
  "mime_type": "application/json",
  "content": "{\"type\":\"object\",\"properties\":{\"environment\":{\"enum\":[\"dev\",\"staging\",\"prod\"]},\"timeout\":{\"type\":\"number\",\"minimum\":1000,\"maximum\":30000}}}"
}
Result: AI agents will only suggest valid configuration values.
Example 3: Error Code Reference
{
  "name": "error-codes",
  "uri": "file://docs/errors.md",
  "title": "Error Code Reference",
  "description": "Complete list of error codes and their meanings",
  "mime_type": "text/markdown",
  "content": "## Error Codes\n\n| Code | Meaning | Action |\n|------|---------|--------|\n| 4001 | Invalid email | Check email format |\n| 4002 | User exists | Try login instead |\n| 5001 | Database error | Retry in 5 seconds |"
}
Result: AI agents can explain error codes to users in plain language.

Defining Prompts

Prompts are reusable templates for structured AI interactions:
| Field | Description |
|-------|-------------|
| Name | Unique identifier (lowercase with hyphens) |
| Title | Human-readable title |
| Description | Explains when to use this prompt |
| Arguments | Input parameters with name, required flag, description |
| Template | The prompt text with {{argument_name}} placeholders |

Prompt Examples

Example 1: Error Response Formatter
{
  "name": "format-error",
  "title": "Format Error Response",
  "description": "Convert API errors into user-friendly messages",
  "arguments": [
    {"name": "error_code", "required": true, "description": "The error code from API"},
    {"name": "error_message", "required": true, "description": "Raw error message"},
    {"name": "user_action", "required": false, "description": "Suggested fix"}
  ],
  "template": "❌ Error {{error_code}}: {{error_message}}\n\n{{#if user_action}}💡 What to do: {{user_action}}{{/if}}"
}
Result: All error messages follow the same friendly format.
Example 2: Multi-Step Workflow Guide
{
  "name": "create-user-workflow",
  "title": "Create User Workflow",
  "description": "Step-by-step guide for creating a new user correctly",
  "arguments": [
    {"name": "email", "required": true, "description": "User's email address"},
    {"name": "role", "required": true, "description": "User role (admin, member, viewer)"}
  ],
  "template": "To create user {{email}} with role {{role}}:\n\n1. First, call POST /users/validate with email={{email}}\n2. If validation passes, call POST /users with {email, role}\n3. Then call POST /users/{id}/send-welcome-email\n4. Finally, return the user ID and confirmation"
}
Result: AI agents follow the correct sequence every time.
Example 3: Summary Generator
{
  "name": "summarize-orders",
  "title": "Summarize Customer Orders",
  "description": "Generate a summary of recent orders for a customer",
  "arguments": [
    {"name": "customer_id", "required": true, "description": "Customer's unique ID"},
    {"name": "days", "required": false, "description": "Number of days to look back (default: 30)"}
  ],
  "template": "Fetch orders for customer {{customer_id}} from the last {{days}} days and provide:\n\n1. Total number of orders\n2. Total spend\n3. Most ordered products\n4. Average order value"
}
Result: Consistent, structured order summaries every time.

How AI Agents Use Resources & Prompts

Resources Flow

1. User asks: "What are the rate limits for this API?"
2. AI agent calls: resources/list → sees "rate-limits" resource
3. AI agent calls: resources/read with URI file://docs/rate-limits.md
4. AI agent reads the content and responds with accurate information
You don’t need to do anything — the AI agent handles this automatically!
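
If you want to inspect these calls yourself, you can replay them with cURL using the same request format as the tool-call examples later in this guide (the URL and token are placeholders; the method names come from the MCP specification):
# Step 2: list available resources
curl -X POST "your-mcp-server-url" \
  -H "Authorization: Bearer your-bearer-token" \
  -H "Content-Type: application/json" \
  -d '{"method": "resources/list"}'

# Step 3: read a specific resource by URI
curl -X POST "your-mcp-server-url" \
  -H "Authorization: Bearer your-bearer-token" \
  -H "Content-Type: application/json" \
  -d '{"method": "resources/read", "params": {"uri": "file://docs/rate-limits.md"}}'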

Prompts Flow

1. User asks: "Send an error message about invalid email"
2. AI agent calls: prompts/list → sees "format-error" prompt
3. AI agent calls: prompts/get with arguments {error_code: "4001", error_message: "Email format invalid"}
4. AI agent receives formatted template and presents it to user
You control the template — the AI agent just fills in the blanks!
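
The same flow can be replayed manually (placeholders as before; the arguments mirror step 3 above):
# Fetch the "format-error" prompt with arguments filled in
curl -X POST "your-mcp-server-url" \
  -H "Authorization: Bearer your-bearer-token" \
  -H "Content-Type: application/json" \
  -d '{
    "method": "prompts/get",
    "params": {
      "name": "format-error",
      "arguments": {"error_code": "4001", "error_message": "Email format invalid"}
    }
  }'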

Validation Rules

  • Resource names and URIs must be unique within a deployment
  • Prompt names must be unique within a deployment
  • Argument names must be unique within a prompt
  • URIs must follow the format: scheme://path (e.g., file://docs/guide.md)

Common Issues & Solutions

”AI agent isn’t using my resources”

Possible Causes:
  • ✗ Resource name isn’t descriptive enough (use clear names like api-documentation not doc1)
  • ✗ Description doesn’t explain when to use it (add context: “Read this for rate limit information”)
  • ✗ Content is too long (keep under 10KB for best performance)
Solution: Make your resource names and descriptions very explicit about their purpose.

”Prompt template isn’t rendering correctly”

Possible Causes:
  • ✗ Argument names don’t match template placeholders (check {{spelling}})
  • ✗ Required arguments aren’t marked as required
  • ✗ Template has unmatched brackets
Solution: Test your prompt with sample arguments using the API Reference examples.

”Too many resources/prompts”

Best Practice: Start with 3-5 of each. Too many options can slow down AI discovery.
Recommendation:
  • Resources: Focus on the most critical reference data
  • Prompts: Create templates for your most common interactions

Limits & Best Practices

Practical Limits

| Item | Recommended | Maximum |
|------|-------------|---------|
| Resources per deployment | 5-10 | No hard limit |
| Prompts per deployment | 3-8 | No hard limit |
| Resource content size | < 10 KB | 1 MB |
| Prompt template length | < 500 chars | 10,000 chars |
| Arguments per prompt | 2-5 | 20 |

Best Practices

Resources:
  • DO use descriptive names: api-authentication-guide not auth
  • DO keep content focused and concise
  • DO use Markdown for better readability
  • DON’T duplicate information that’s already in your OpenAPI spec
  • DON’T include sensitive data (credentials, keys, secrets)
Prompts:
  • DO use clear placeholder names: {{user_email}} not {{x}}
  • DO provide example values in descriptions
  • DO mark truly required arguments as required
  • DON’T create overly complex templates (split into multiple prompts)
  • DON’T hardcode values that should be arguments
Naming Conventions:
  • Use lowercase with hyphens: user-creation-guide
  • Be specific: stripe-webhook-handler not handler
  • Include context: production-config vs dev-config

Deployment Controls

Start/Stop
  • Click “Start” to activate a stopped deployment
  • Click “Stop” to deactivate a running deployment
  • Stopping saves resources while preserving the deployment
Health Check
  • Click “Health Check” to test server responsiveness
  • Results show response time and status
  • Helps diagnose connectivity or performance issues
View Logs
  • Click “View Logs” to see real-time deployment activity
  • Shows generation progress, errors, and runtime information
  • Logs are automatically refreshed
  • Most recent entries appear at the top
Copy URL
  • Click “Copy URL” to get the MCP server endpoint
  • Use this URL to configure AI agents
  • URLs follow the format: https://[project].supabase.co/functions/v1/mcp-router/[slug]
  • Important: All MCP servers support both OAuth 2.1 and JWT authentication

Analytics Dashboard

Overview

The Analytics Dashboard provides real-time insights into your MCP server usage, performance, and errors. Access it by clicking the Analytics tab in your Dashboard (between “My Deployments” and “Submit to Gallery”).

Key Features

Deployment Selector
  • Choose which deployment to analyze from the dropdown
  • Switch between deployments to compare performance
  • Only shows your own deployments (secured by Row Level Security)
Time Range Filter
  • Last 24 hours: See recent activity and immediate trends
  • Last 7 days: Identify weekly patterns and usage spikes
  • Last 30 days: Analyze long-term trends and growth

Metric Cards

The dashboard displays four key performance indicators:
Total Requests
  • Total number of tool invocations in the selected time range
  • Formatted with K/M suffixes for readability (e.g., 1.2K, 5.3M)
  • Helps track overall API usage and adoption
Success Rate
  • Percentage of successful tool calls vs. failed calls
  • Green badge indicates healthy performance (>95%)
  • Shows count of successful requests below percentage
Average Response Time
  • Mean execution time across all tool invocations
  • Displayed in milliseconds (ms) or seconds (s)
  • Lower is better - helps identify performance issues
Failed Requests
  • Count of tool calls that returned errors
  • Shows error rate percentage below count
  • Monitor this to catch API issues early

Interactive Charts

Tool Usage Chart (Bar Chart)
  • Shows top 10 most frequently called tools
  • Bars represent total invocation count per tool
  • Helps identify which endpoints are most popular
  • Useful for optimizing frequently-used tools
Performance Trends (Line Chart)
  • Displays response time trends over time
  • Solid line: Average response time
  • Dashed line: P95 response time (95th percentile)
  • Grouped by hour for granular analysis
  • Helps spot performance degradation or improvements
Error Rate Chart (Area Chart)
  • Shows error percentage over time
  • Y-axis ranges from 0-100%
  • Shaded area makes trends easy to visualize
  • Helps identify when errors started occurring

Using Analytics Effectively

Monitor Performance
  1. Check analytics daily to catch issues early
  2. Compare different time ranges to spot trends
  3. Investigate spikes in error rates immediately
  4. Track response time improvements after optimizations
Optimize Your API
  1. Identify most-used tools from the Tool Usage chart
  2. Prioritize optimization for high-traffic endpoints
  3. Monitor P95 response times to catch outliers
  4. Use error data to fix problematic endpoints
Understand Usage Patterns
  1. Check which tools users call most frequently
  2. Identify peak usage hours from performance trends
  3. Plan capacity based on historical usage data
  4. Validate that expected tools are being used

Empty State

If you see “No analytics data available for this deployment yet”:
  • Make some API calls through your MCP server
  • Data appears within seconds of the first tool invocation
  • Try testing your deployment with Claude or cURL
  • Ensure your deployment status is “Ready”

Data Privacy

  • All analytics data is protected by Row Level Security (RLS)
  • You can only view analytics for your own deployments
  • No cross-user data leakage possible
  • Analytics queries are automatically filtered by user ID

Performance Notes

  • Charts update when you change deployment or time range
  • Data is cached for 60 seconds to improve performance
  • Queries are limited to 1000 records for fast loading
  • Aggregations are performed client-side for the MVP

OAuth Integration for AI Agents

For comprehensive OAuth 2.1 setup instructions, including:
  • Why use OAuth 2.1 with PKCE
  • Step-by-step Claude Desktop configuration
  • Redirect URI configuration for different clients
  • Mobile-responsive authorization flow
  • Security best practices
  • Troubleshooting common issues
See the complete OAuth Integration Guide

Using Generated MCP Servers

Getting Your Deployment Bearer Token

Each MCP server can use bearer token authentication as an alternative to OAuth 2.1. Follow the steps below to get your static bearer token (when mcp_auth_method='jwt').
Important: This token is NOT a JWT - it’s a static bearer token that never expires. See the API Reference for authentication details.
  1. View in Deployment Card: When your deployment is ready, you’ll see a “Deployment Authentication Token” section
  2. Show/Hide Token: Click the eye icon to reveal or hide your token
  3. Multiple Copy Options: Use the convenient copy buttons for different formats:
    • Copy Token: Just the raw bearer token
    • Copy Claude Config: Complete claude_desktop_config.json snippet
    • Copy Auth Header: Ready-to-use Authorization header
    • Copy cURL: Test command with authentication
Security Note: This token never expires and acts like an API key. Treat it as a sensitive credential.
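
One way to keep the token out of files you might share or commit is to hold it in an environment variable (a sketch; the variable name is arbitrary):
# Export once per shell session, then reference it in commands
export TYDLI_MCP_TOKEN="your-bearer-token"
curl -X GET "your-mcp-server-url" \
  -H "Authorization: Bearer ${TYDLI_MCP_TOKEN}" \
  -H "Content-Type: application/json"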

MCP Server URLs

Each successful deployment provides a unique authenticated URL:
https://your-project.supabase.co/functions/v1/mcp-router/[your-deployment-slug]
Authentication Required: All MCP server access requires valid authentication for security (OAuth 2.1 recommended, static bearer token also supported).
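
You can confirm that authentication is enforced by calling the URL without credentials - the request should be rejected (typically with a 401 status, though the exact response shape may vary):
# No Authorization header: expect an authentication error, not API data
curl -i "https://your-project.supabase.co/functions/v1/mcp-router/your-deployment-slug"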

Configuring AI Agents

Claude Desktop Setup

For OAuth 2.1 Authentication (Recommended):
  1. Open Claude Desktop → Settings → Connectors
  2. Click “Add custom connector”
  3. Enter your deployment details:
    • Name: Tydli - [your-deployment-slug]
    • MCP Server URL: Your MCP router URL
    • OAuth Client ID: (from your deployment dashboard)
    • Authorization URL: https://your-project.supabase.co/functions/v1/mcp-oauth-server/authorize
    • Token URL: https://your-project.supabase.co/functions/v1/mcp-oauth-server/token
  4. Click Connect - your browser will open for authorization
  5. Verify Connection: Ask Claude about available tools to confirm setup
For Static Bearer Token Authentication: When a deployment has mcp_auth_method='jwt', you need a custom MCP client that supports bearer token authentication. The token is a static credential (not a JWT) that never expires. Configuration details are available in your deployment dashboard. See the OAuth Integration Guide for complete setup instructions.

Other MCP Clients

Continue.dev Configuration:
{
  "mcpServers": {
    "your-api-name": {
      "transport": {
        "type": "fetch",
        "url": "your-mcp-server-url",
        "headers": {
          "Authorization": "Bearer your-bearer-token-here"
        }
      }
    }
  }
}
Generic HTTP Client Setup:
  • URL: Your MCP server URL from the deployment card
  • Method: POST for MCP calls, GET for server info
  • Headers: Authorization: Bearer your-bearer-token-here (when using static token auth)
  • Content-Type: application/json

Testing Your MCP Server

Quick Testing with cURL

  1. Get Test Command: Click “Copy cURL” in your deployment card
  2. Run Test: Paste and run in your terminal:
curl -X GET "your-mcp-server-url" \
  -H "Authorization: Bearer your-bearer-token" \
  -H "Content-Type: application/json"
  3. Expected Response: Should return MCP server info with available tools

Testing MCP Tool Calls

curl -X POST "your-mcp-server-url" \
  -H "Authorization: Bearer your-bearer-token" \
  -H "Content-Type: application/json" \
  -d '{
    "method": "tools/call",
    "params": {
      "name": "your-tool-name",
      "arguments": {}
    }
  }'
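
A successful call returns JSON; the exact payload depends on your API, but per the MCP specification tool results arrive as a content array, roughly like this (illustrative):
{
  "result": {
    "content": [
      { "type": "text", "text": "{ \"users\": [] }" }
    ],
    "isError": false
  }
}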

Agent Testing

  1. Configure AI Agent: Use the generated configuration from your deployment card
  2. Restart Agent: Close and reopen your AI agent application
  3. Verify Connection: Ask the agent: “What tools do you have available?”
  4. Test API Calls: Ask the agent to use your API endpoints
  5. Monitor Logs: Check deployment logs for request activity

Authentication Troubleshooting

Token Issues:
  • Expired Token: OAuth tokens are session-based - refresh the page if needed (static bearer tokens never expire)
  • Invalid Format: Ensure you’re using the complete Bearer token
  • Copy Errors: Use the copy buttons rather than manual selection
Agent Issues:
  • Configuration: Verify config file syntax and location
  • Agent Support: Ensure your AI agent supports MCP authentication
  • Network: Check firewall/proxy settings aren’t blocking requests
OAuth Redirect URI Issues:
  • “Invalid redirect_uri”:
    • Ensure the redirect URI in your request exactly matches one registered in your OAuth client
    • For Claude Desktop, verify you registered http://127.0.0.1:6277/callback
    • Check for trailing slashes or protocol mismatches (http vs https)
  • “Authorization callback timeout”:
    • Claude Desktop may be using a different port - try registering multiple URIs (6277-6287)
    • Ensure no other application is using port 6277
    • Check firewall settings aren’t blocking localhost connections
  • “Client type mismatch”:
    • Verify you selected the correct client type when registering
    • Claude Desktop users should select “Claude Desktop” option
    • Custom web apps should select “Custom Web App”
Common Error Messages:
  • Authentication required → Missing or invalid bearer token (or OAuth failure)
  • MCP server not found → Check URL and deployment status
  • Rate limit exceeded → Wait before making more requests
  • Invalid redirect_uri → Redirect URI not registered for this client
  • Invalid client_id → Check your OAuth client credentials

Troubleshooting

Upload Issues

File Won’t Upload
  • Check file size (must be under 10MB)
  • Ensure file extension is .json, .yaml, or .yml
  • Verify file isn’t corrupted
Paste Issues
  • Ensure content is valid JSON or YAML
  • Check for invisible characters or encoding issues
  • Try copying from a plain text editor
URL Issues
  • Verify URL is publicly accessible
  • Check that URL returns raw specification content
  • Ensure no authentication is required

Validation Failures

“Invalid specification format”
  • Check OpenAPI version (must be 3.0+)
  • Ensure required fields are present
  • Validate JSON/YAML syntax
Network Errors
  • Check your internet connection
  • Try refreshing the page
  • Contact support if errors persist

Generation Problems

Generation Stuck
  • Check the logs for specific error details
  • Refresh the page to see current status
  • Most generations complete within 2 minutes
Server Not Responding
  • Run a health check to test connectivity
  • Check logs for runtime errors
  • Try stopping and restarting the deployment

Runtime Issues

MCP Server Errors
  1. Check deployment logs for error details
  2. Verify your original OpenAPI spec has valid endpoint definitions
  3. Test individual endpoints to isolate issues
  4. Contact support with specific error messages
AI Agent Connection Issues
  1. Authentication Required: Verify your AI agent supports MCP authentication (OAuth 2.1 or bearer token)
  2. Check that the server status is “Ready”
  3. Ensure your AI agent can handle authenticated MCP endpoints
  4. Review agent-specific MCP authentication configuration requirements
  5. Check deployment logs for authentication error details
Common Authentication Issues
  • Browser Access Blocked: Expected behavior - browsers cannot provide bearer tokens or complete OAuth flows
  • Agent Auth Failure: Ensure agent supports MCP authentication (OAuth 2.1 recommended)
  • Token/OAuth Expired: Static bearer tokens never expire but can be deleted; for OAuth, check that the session has refreshed
  • Configuration Error: Verify agent authentication configuration is correct

Best Practices

OpenAPI Specification Guidelines

Structure
  • Include comprehensive endpoint descriptions
  • Define clear parameter names and types
  • Provide example values where helpful
  • Use consistent naming conventions
Documentation
  • Write clear operation summaries
  • Include detailed parameter descriptions
  • Document expected response formats
  • Specify error conditions

Deployment Management

Naming
  • Use descriptive names for your deployments
  • Include version information when relevant
  • Keep names consistent with your API naming
Monitoring
  • Regularly check deployment status
  • Review logs for any warnings or errors
  • Monitor health check results
  • Keep track of which AI agents are using each deployment

Security Considerations

MCP Authentication Requirements
  • All MCP servers require authentication for secure access (OAuth 2.1 recommended, static bearer token alternative)
  • OAuth tokens are session-based and automatically invalidate
  • Bearer tokens (when mcp_auth_method='jwt') never expire - treat as API keys
  • Tokens provide access only to your own deployments and data
  • Never share authentication credentials publicly or in unsecured locations
Token Management Best Practices
  • Use Copy Buttons: Always use the provided copy buttons in deployment cards
  • Secure Storage: Store credentials securely in your AI agent configuration files
  • OAuth Preferred: Use OAuth 2.1 when possible for better security (auto-refresh, expiration)
  • Bearer Token Security: If using static bearer tokens, rotate them regularly by recreating deployments
  • Regular Rotation: Log out and back in periodically to refresh tokens
  • Environment Isolation: Use different tokens for development and production setups
API Credentials
  • API credentials are encrypted when stored in the database
  • Credentials are never logged or exposed in error messages
  • Consider rotating API keys regularly for enhanced security
  • Monitor API usage through your provider’s dashboard
Access Control
  • Generated servers use secure sandboxed execution environment
  • User data isolated through Row Level Security policies
  • Request rate limiting enforced per user
  • MCP server URLs require valid authentication
  • Users can only access their own deployments and data
  • Deployment logs contain no sensitive credential information
  • All database access governed by RLS policies

Support and Feedback

Getting Help

Check the Documentation
  • Review this user guide thoroughly
  • Reference the troubleshooting section
  • Check deployment logs for specific errors
Common Solutions
  • Refresh the page if interface seems stuck
  • Clear browser cache for persistent issues
  • Verify internet connectivity
  • Try a different browser if problems persist

Providing Feedback

When reporting issues, please include:
  • Your deployment name and ID
  • Specific error messages from logs
  • Steps to reproduce the problem
  • Your OpenAPI specification (if comfortable sharing)

Happy deploying! Transform your APIs into AI-ready MCP servers in seconds. 🚀