Rate Limits & Quotas
This document provides comprehensive information about rate limits, quotas, and usage restrictions on the Tydli platform.

Overview
Rate limiting protects the platform from abuse while ensuring fair resource allocation across all users. Limits apply at multiple levels: per-user, per-deployment, and per-OAuth-client.

User Rate Limits
Free Tier
Default limits for free tier accounts:

| Metric | Limit | Time Window |
|---|---|---|
| Deployments | 3 active | Total |
| Endpoints per Deployment | 50 | Per deployment |
| Total Endpoints | 150 | Across all deployments |
| Requests per Hour | 250 | Rolling 60-minute window |
| Requests per Month | 1,500 | Calendar month |
| Gallery Deployments | 5 | Per hour |
| Gallery Deployments | 20 | Per day |
| AI Document Processing | 5 successful | Per day |
Paid Tiers
See your dashboard or billing page for plan-specific limits. Paid plans offer:

- More active deployments
- Higher endpoints per deployment limit
- Increased request quotas
- Higher gallery deployment limits
- Priority support
- Custom limits available for enterprise
| Plan | Gallery Deploys/Hour | Gallery Deploys/Day |
|---|---|---|
| Free | 5 | 20 |
| Pro | 20 | 100 |
| Enterprise | 50 | 500 |
Checking Your Limits
View current limits and usage:

- Dashboard Header: Shows current usage as a percentage
- Settings Page: Detailed breakdown of limits
- API Response: Rate limit headers on every request
Request Rate Limits
Per-User Limits
Hourly Limit:

- Window: Rolling 60-minute window
- Enforcement: Per user_id
- Applies to: All MCP server requests
- Reset: Continuous rolling window

Monthly Limit:

- Window: Calendar month (UTC)
- Enforcement: Per user_id
- Applies to: All MCP server requests
- Reset: 1st day of month at 00:00 UTC

A client-side throttle that respects the rolling hourly window is sketched below.
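Because the hourly limit is a rolling 60-minute window rather than a fixed reset, a client can track its own request timestamps to avoid tripping it. A minimal sketch, assuming the free-tier value of 250 requests per hour from the table above (server-side enforcement remains authoritative):

```python
import time
from collections import deque

class RollingWindowThrottle:
    """Client-side throttle for a rolling request window (illustrative only)."""

    def __init__(self, max_requests=250, window_seconds=3600):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def acquire(self):
        """Block until a request can be sent without exceeding the window."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the rolling window.
        while self.timestamps and now - self.timestamps[0] >= self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            # Sleep until the oldest request leaves the window, then re-check.
            time.sleep(self.window_seconds - (now - self.timestamps[0]))
            return self.acquire()
        self.timestamps.append(time.monotonic())

# throttle = RollingWindowThrottle()
# throttle.acquire()  # call before each MCP request
```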
Rate Limit Response
When a rate limit is exceeded, you'll receive an HTTP 429 response. To recover (see the sketch after this list):

- Check the `Retry-After` header (seconds until reset)
- Implement exponential backoff
- Cache responses to reduce requests
- Consider upgrading your plan if you consistently hit limits
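A minimal retry sketch that honors `Retry-After` and otherwise falls back to exponential backoff; the URL is a placeholder and the use of the `requests` library is an assumption:

```python
import time
import requests  # assumes the requests library is installed

def get_with_backoff(url, headers=None, max_retries=5):
    """GET with Retry-After handling and exponential backoff on HTTP 429."""
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # Prefer the server's hint (seconds); otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    raise RuntimeError("Rate limit still exceeded after retries")

# Example (placeholder URL):
# resp = get_with_backoff("https://example.com/api/resource")
```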
OAuth Rate Limits
OAuth-Specific Limits
OAuth endpoints have separate rate limits to prevent brute-force attacks:

| Endpoint | Limit | Time Window |
|---|---|---|
| /register | 20 requests | Per hour per IP |
| /authorize | 20 requests | Per hour per IP |
| /token | 10 requests | Per minute per client |
OAuth Limit Response
OAuth Best Practices
✅ DO:

- Cache access tokens until expiration
- Use refresh tokens to get new access tokens
- Implement exponential backoff on 429 responses
- Monitor token usage in your application

❌ DON'T:

- Request new tokens for every API call
- Ignore the refresh token flow
- Retry immediately on 429 errors
- Create multiple OAuth clients unnecessarily

A token-caching sketch that follows these practices appears below.
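A minimal sketch of token caching and refresh. The token URL, field names, and grant parameters follow the standard OAuth 2.0 token response shape and are assumptions here, not Tydli-specific values:

```python
import time
import requests  # assumes the requests library is installed

TOKEN_URL = "https://example.com/token"  # placeholder, not the real endpoint

class TokenCache:
    """Cache an access token and refresh it only when it is about to expire."""

    def __init__(self, client_id, client_secret, refresh_token):
        self.client_id = client_id
        self.client_secret = client_secret
        self.refresh_token = refresh_token
        self.access_token = None
        self.expires_at = 0.0

    def get_token(self):
        # Reuse the cached token while it is still valid (60 s safety margin).
        if self.access_token and time.time() < self.expires_at - 60:
            return self.access_token
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "refresh_token",
            "refresh_token": self.refresh_token,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        })
        resp.raise_for_status()
        payload = resp.json()
        self.access_token = payload["access_token"]
        self.expires_at = time.time() + payload.get("expires_in", 3600)
        return self.access_token
```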
Deployment Limits
Active Deployments
- Free Tier: 3 active deployments (see the limits table above)
- Paid Tiers: See your plan details
- Active (counts toward limit): `generating`, `deploying`, `ready`
- Inactive (doesn't count): `stopped`, `error`, `archived`

When the limit is reached:

- You cannot create new deployments until existing ones are stopped or deleted
- Error message: "Maximum active deployments reached"
- Solution: Stop unused deployments or upgrade your plan
Endpoints per Deployment
- Free Tier: 50 endpoints per deployment
- Total across all deployments: 150 endpoints (free tier)
- Enforcement: At deployment creation time
If your OpenAPI spec exceeds endpoint limits, consider (see the sketch after this list):

- Splitting it into multiple smaller specs
- Removing rarely-used endpoints
- Upgrading to a higher tier
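A quick way to see whether a spec will fit is to count its operations before deploying. A minimal sketch, assuming the spec is available as a local JSON file; the 50-operation threshold is the free-tier per-deployment limit above:

```python
import json

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options", "trace"}

def count_operations(spec_path):
    """Count the individual operations (path + method pairs) in an OpenAPI spec."""
    with open(spec_path) as f:
        spec = json.load(f)
    total = 0
    for path_item in spec.get("paths", {}).values():
        total += sum(1 for method in path_item if method in HTTP_METHODS)
    return total

# Example (placeholder file name):
# ops = count_operations("my-api.json")
# if ops > 50:
#     print(f"{ops} operations exceeds the free-tier per-deployment limit")
```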
AI Processing Limits
Document Processing
AI document processing (converting PDFs, Word docs, etc. to OpenAPI specs):

| Metric | Limit | Time Window |
|---|---|---|
| Successful Requests | 5 | Per day (UTC) |
| Max Document Size | 10 MB | Per file |
| Supported Formats | PDF, DOCX, DOC, TXT, MD | - |
| Max Pages | 50 pages | Per document |
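Since only 5 successful conversions count per day, it can help to validate documents locally before uploading. A minimal sketch based on the size and format limits above (page counting is omitted because it depends on the file type):

```python
import os

MAX_SIZE_BYTES = 10 * 1024 * 1024  # 10 MB, from the table above
SUPPORTED_EXTENSIONS = {".pdf", ".docx", ".doc", ".txt", ".md"}

def validate_document(path):
    """Return a list of problems that would waste an AI processing attempt."""
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        problems.append(f"unsupported format: {ext or 'no extension'}")
    if os.path.getsize(path) > MAX_SIZE_BYTES:
        problems.append("file exceeds the 10 MB limit")
    return problems

# Example (placeholder file name):
# issues = validate_document("api-guide.pdf")
# if issues:
#     print("Fix before uploading:", issues)
```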
AI Limit Response
Checking AI Usage
Quota Management
Monitoring Usage
Real-Time Monitoring:

- Dashboard: Header shows current usage percentage
- Deployment Logs: Track individual request counts
- Usage Analytics: Detailed breakdown per deployment
Usage Optimization
Reduce Request Count:

- Cache responses: Store frequently-accessed data
- Batch operations: Combine multiple calls when possible
- Use webhooks: Instead of polling for changes
- Implement pagination: Request smaller data sets

Reduce Deployment Usage:

- Stop unused deployments: Free up quota
- Consolidate APIs: Combine related specs
- Remove redundant endpoints: Trim unused operations

Reduce AI Processing Usage:

- Pre-process documents: Clean up before upload
- Use OpenAPI directly: Skip AI if you already have specs
- Batch document conversion: Plan conversions efficiently

A small response-caching sketch follows.
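Caching is usually the cheapest way to cut request volume. A minimal in-memory TTL cache sketch; the TTL value and the fetch callback are illustrative assumptions, so pick values that match how fresh your data needs to be:

```python
import time

class TTLCache:
    """Tiny in-memory cache: reuse a response until its TTL expires."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        if entry and entry[0] > time.time():
            return entry[1]          # cache hit: no API request spent
        value = fetch()              # cache miss: make the real request
        self.store[key] = (time.time() + self.ttl, value)
        return value

# Example (placeholder fetch function):
# cache = TTLCache(ttl_seconds=300)
# data = cache.get_or_fetch("deployments", lambda: fetch_deployments())
```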
Plan Upgrades
When to Upgrade
Consider upgrading if you:

- Consistently hit rate limits
- Need more active deployments than your current plan allows
- Require higher endpoint counts
- Process more than 5 AI documents daily
- Need priority support
How to Upgrade
- Navigate to the Billing page in the dashboard
- Review available plans and features
- Select the plan that fits your needs
- Update payment information

After upgrading:

- Instant upgrade (no downtime)
- Limits increase immediately after upgrade
- Usage tracking continues without reset
- Existing deployments remain operational
- No data migration required
Rate Limit Headers
All API responses include rate limit headers:
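The exact header names are not listed in this section, so the following is a hedged sketch using the common `X-RateLimit-*` conventions plus `Retry-After`; check an actual response or the API Reference for the authoritative names:

```python
import requests  # assumes the requests library is installed

# Placeholder URL; substitute your deployment's endpoint.
response = requests.get("https://example.com/api/resource")

# Common rate-limit header conventions (names are assumptions, not confirmed):
limit = response.headers.get("X-RateLimit-Limit")           # requests allowed in the window
remaining = response.headers.get("X-RateLimit-Remaining")   # requests left in the window
reset = response.headers.get("X-RateLimit-Reset")           # when the window resets
retry_after = response.headers.get("Retry-After")           # present on 429 responses

print(f"limit={limit} remaining={remaining} reset={reset} retry_after={retry_after}")
```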
Troubleshooting

"Rate limit exceeded" errors
Immediate Solutions:

- Check the `Retry-After` header for the wait time
- Implement exponential backoff in your code
- Review recent usage in the dashboard
- Consider upgrading your plan if the problem is persistent

Long-Term Solutions:

- Cache responses to reduce redundant requests
- Batch operations to minimize API calls
- Optimize client code to avoid unnecessary requests
- Monitor usage trends and plan capacity
"Maximum deployments reached"
Solutions:

- Stop unused deployments (status → `stopped`)
- Delete archived deployments
- Consolidate similar APIs into one deployment
- Upgrade to plan with higher deployment limit
"Too many endpoints"
Solutions:

- Remove unused endpoints from the OpenAPI spec
- Split large API into multiple deployments
- Upgrade to plan with higher endpoint limit
- Use separate deployments for different API versions
AI processing quota exceeded
Solutions:

- Wait until midnight UTC for the quota reset
- Use pre-existing OpenAPI specs when available
- Pre-process documents to reduce complexity
- Upgrade to plan with higher AI limits
Database Functions Reference
Check Rate Limits
Get User Limits
Check Deployment Capacity
Get Current Usage
Support
Getting Help
If you experience rate limiting issues:

- Check Documentation: Review this guide and Troubleshooting
- View Logs: Check deployment logs for specific error details
- Monitor Dashboard: Track usage patterns in analytics
- Contact Support: Provide the `request_id` from the error response
Enterprise Plans
Need custom limits? Contact us for enterprise pricing:

- Custom deployment limits
- Dedicated infrastructure
- Higher rate limits
- SLA guarantees
- Priority support
- Custom integrations
References
- User Guide - Getting started guide
- API Reference - Complete API documentation
- Troubleshooting - Common issues and solutions
- Security Best Practices - Security guidelines