
Overview

This document provides a comprehensive guide to all error codes you may encounter when using the Wisdom Gate API. Understanding these error codes will help you implement proper error handling and troubleshooting in your applications.
Error Response Format

All error responses follow a consistent format:
{
  "error": {
    "message": "Error description",
    "type": "error_type",
    "param": "parameter_name",
    "code": "error_code"
  }
}
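A small helper can pull these fields out of a response body. This sketch assumes the exact shape shown above, with absent optional fields (such as param) reported as None:

```python
def parse_api_error(body: dict) -> tuple:
    """Extract the error fields from a Wisdom Gate error response.

    Returns (message, type, param, code); fields missing from the
    response come back as None.
    """
    err = body.get("error", {})
    return (
        err.get("message"),
        err.get("type"),
        err.get("param"),  # absent for errors not tied to one parameter
        err.get("code"),
    )

# Example: unpacking a 400 response body
body = {
    "error": {
        "message": "Invalid request format",
        "type": "invalid_request_error",
        "param": "messages",
        "code": "invalid_format",
    }
}
message, error_type, param, code = parse_api_error(body)
```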

HTTP Status Codes

400 Bad Request

Code Explanation: The request format is invalid; this is usually a client-side error. Check your request before retrying. Common Causes:
  • Invalid JSON format in request body
  • Missing required parameters
  • Invalid parameter values or types
  • Malformed request structure
How to Fix:
  1. Verify your request body is valid JSON
  2. Check that all required parameters are included
  3. Validate parameter types match the API specification
  4. Review the OpenAPI specification for correct request format
Example:
{
  "error": {
    "message": "Invalid request format",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "invalid_format"
  }
}
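Many 400s can be caught before the request is sent. The sketch below validates a chat-completions request body locally; the required field names (model, messages) are the usual chat-completions shape and should be adjusted to match the endpoint you are calling:

```python
import json


def validate_chat_request(raw: str, required=("model", "messages")) -> list:
    """Return a list of problems found in a chat-completions request body.

    An empty list means the body parsed as JSON and contained every
    required field.
    """
    try:
        body = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    if not isinstance(body, dict):
        return ["request body must be a JSON object"]
    problems = [f"missing required parameter: {f}" for f in required if f not in body]
    if not isinstance(body.get("messages", []), list):
        problems.append("'messages' must be an array")
    return problems
```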

401 Invalid Token

Code Explanation: API key verification failed. Try switching to a different model to test whether the key itself is valid; if other models work normally, contact the administrator to investigate. Common Causes:
  • Missing or invalid API key
  • API key expired or revoked
  • Incorrect Authorization header format
How to Fix:
  1. Verify your API key is correct and active
  2. Check the Authorization header format: Bearer YOUR_API_KEY
  3. Try using a different model to test if the API key works
  4. If changing models works, contact administrator for assistance
  5. Regenerate your API key if necessary
Example:
{
  "error": {
    "message": "Invalid API key",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
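A common cause of 401s is a malformed Authorization header. This hypothetical helper builds the header in the expected Bearer format and rejects obviously bad keys before the request is sent, so the mistake surfaces locally instead of as a 401:

```python
def build_auth_headers(api_key: str) -> dict:
    """Build request headers with the Authorization: Bearer format."""
    key = api_key.strip()
    if not key:
        raise ValueError("API key is empty")
    if key.lower().startswith("bearer "):
        # The "Bearer " prefix is added here; passing it twice is a
        # frequent cause of authentication failures.
        raise ValueError('pass the bare key; "Bearer " is added automatically')
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```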

403 Token Group XXX Has Been Disabled

Code Explanation: Usually a token permission issue. If the error persists after creating and using a new token, contact the administrator to investigate. It can also indicate a model-specific restriction; for example, O1 series models do not support the system parameter. Common Causes:
  • Token group has been disabled
  • Insufficient permissions for the requested operation
  • Model-specific parameter restrictions (e.g., O1 series doesn’t support system parameter)
How to Fix:
  1. Create and use a new token
  2. Check if your token has the required permissions
  3. Verify model-specific parameter restrictions
  4. Contact administrator if the issue persists
Example:
{
  "error": {
    "message": "Token Group XXX Has Been Disabled",
    "type": "permission_error",
    "code": "token_group_disabled"
  }
}
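For model-specific restrictions such as the O1 system-parameter limitation, the request can be adapted before sending. The o1 prefix check below is illustrative; confirm which models on your deployment actually reject the system role:

```python
def adapt_messages_for_model(model: str, messages: list) -> list:
    """Fold system messages into user messages for models that reject them."""
    if not model.startswith("o1"):
        return messages
    adapted = []
    for msg in messages:
        if msg.get("role") == "system":
            # Re-send the system instructions as a user turn instead.
            adapted.append({"role": "user", "content": msg.get("content", "")})
        else:
            adapted.append(msg)
    return adapted
```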

404 Not Found

Code Explanation: The requested resource could not be found. Check that the Base URL is filled in correctly; try adding the /v1 prefix or a trailing slash. Common Causes:
  • Incorrect endpoint URL
  • Missing /v1 prefix in the path
  • Missing trailing slash
  • Invalid resource ID (e.g., video_id, model name)
How to Fix:
  1. Verify the endpoint URL is correct
  2. Ensure the path includes /v1 prefix (e.g., /v1/chat/completions)
  3. Check if a trailing slash is required
  4. Validate resource IDs exist and are correct
  5. Review the API reference documentation for correct endpoints
Example:
{
  "error": {
    "message": "Resource not found",
    "type": "invalid_request_error",
    "code": "not_found"
  }
}
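The /v1-prefix and trailing-slash causes above can be handled mechanically. This sketch normalizes a base URL so that appending an endpoint path such as /chat/completions yields a correct .../v1/... URL, assuming the API is mounted under /v1 as in this document's examples:

```python
def normalize_base_url(base_url: str) -> str:
    """Normalize a base URL so endpoint paths resolve correctly.

    Ensures the path ends with /v1 and has no trailing slash, so that
    appending "/chat/completions" yields ".../v1/chat/completions".
    """
    url = base_url.strip().rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url
```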

413 Request Entity Too Large

Code Explanation: The request body is too large, often because the prompt is too long. Shorten your prompt and confirm whether a shorter version succeeds. Common Causes:
  • Request body exceeds size limits
  • Prompt text is too long
  • Too many messages in conversation history
  • Large file attachments
How to Fix:
  1. Shorten your prompt text
  2. Reduce the number of messages in conversation history
  3. Remove unnecessary context or messages
  4. Split large requests into smaller chunks
  5. Compress or reduce file sizes if using file attachments
Example:
{
  "error": {
    "message": "Request entity too large",
    "type": "invalid_request_error",
    "code": "request_too_large"
  }
}
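Steps 2-3 above can be automated by trimming the oldest turns from the conversation history. The character budget below is a rough stand-in for the server's actual size limit, which is not documented here; tune max_chars empirically:

```python
def trim_history(messages: list, max_chars: int = 12000) -> list:
    """Drop the oldest non-system messages until total content length
    fits under a character budget, preserving system messages."""
    system = [m for m in messages if m.get("role") == "system"]
    rest = [m for m in messages if m.get("role") != "system"]

    def total(msgs):
        return sum(len(m.get("content", "")) for m in msgs)

    while rest and total(system + rest) > max_chars:
        rest.pop(0)  # drop the oldest turn first
    return system + rest
```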

429 Current Group Upstream Load Is Saturated

Code Explanation: Upstream providers enforce per-account rate limits; a 429 indicates that a backend account's concurrent usage is too high and has hit its rate limit. Retry the request, ideally with backoff. Common Causes:
  • Too many concurrent requests
  • Rate limit exceeded for the account
  • Backend account saturation
How to Fix:
  1. Implement exponential backoff retry logic
  2. Reduce request frequency
  3. Add delays between requests
  4. Use request queuing for batch operations
  5. Continue retrying with appropriate backoff
Example Retry Implementation:
import requests
import time
import random

def make_request_with_retry(url, headers, data, max_retries=5):
    """POST with retries, backing off exponentially on 429 responses."""
    for attempt in range(max_retries):
        try:
            response = requests.post(url, headers=headers, json=data)

            if response.status_code == 429:
                # Exponential backoff with jitter
                wait_time = (2 ** attempt) + random.random()
                print(f"Rate limited. Waiting {wait_time:.2f} seconds...")
                time.sleep(wait_time)
                continue

            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException:
            if attempt < max_retries - 1:
                wait_time = (2 ** attempt) + random.random()
                time.sleep(wait_time)
            else:
                raise
    # Every attempt was rate limited: surface the failure instead of
    # silently returning None.
    raise RuntimeError(f"Still rate limited after {max_retries} attempts")
Example Error Response:
{
  "error": {
    "message": "Current Group Upstream Load Is Saturated",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}

500 Internal Server Error

Code Explanation: An internal server error occurred, either on the proxy server or on the upstream (OpenAI) server; it is not caused by your request. Retry the request, and contact the administrator if the error recurs. Common Causes:
  • Proxy server issues
  • Upstream service (OpenAI) server problems
  • Temporary service disruption
How to Fix:
  1. Retry the request after a short delay
  2. Check service status if available
  3. If errors persist, contact administrator
  4. Implement retry logic with exponential backoff
Example:
{
  "error": {
    "message": "Internal server error",
    "type": "server_error",
    "code": "internal_error"
  }
}

503 No Available Channel for Model XXXX Under Current Group NNN

Code Explanation: A backend configuration issue on the proxy platform: no channel is configured for the requested model in your group. Contact the administrator to add the model, then try calling again. Common Causes:
  • Model not available in your token group
  • Model configuration issue on the backend
  • Model temporarily unavailable
How to Fix:
  1. Verify the model name is correct
  2. Check if the model is available in your plan
  3. Contact administrator to add the model to your group
  4. Try using an alternative model if available
Example:
{
  "error": {
    "message": "No Available Channel for Model gpt-4 Under Current Group 123",
    "type": "service_unavailable",
    "code": "model_unavailable"
  }
}
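Step 4 above (trying an alternative model) can be sketched as a small selection helper. The availability probe is caller-supplied, for example a lookup against the model catalog, and the fallback names you configure are placeholders rather than models guaranteed to exist on your plan:

```python
def pick_available_model(preferred: str, fallbacks: list, is_available) -> str:
    """Return the first model that the availability check accepts.

    `is_available` is a callable taking a model name and returning bool.
    """
    for model in [preferred] + fallbacks:
        if is_available(model):
            return model
    raise RuntimeError("no configured model is available; contact the administrator")
```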

504 Gateway Timeout

Code Explanation: Gateway timeout: the upstream server did not respond within the allotted time. Retry the request, and contact the administrator if the error recurs. Common Causes:
  • Upstream server (OpenAI) taking too long to respond
  • Network connectivity issues
  • Request timeout exceeded
How to Fix:
  1. Retry the request
  2. Check network connectivity
  3. Reduce request complexity or size
  4. Increase timeout settings if possible
  5. Contact administrator if errors persist
Example:
{
  "error": {
    "message": "Gateway timeout",
    "type": "timeout_error",
    "code": "gateway_timeout"
  }
}
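On the client side, an explicit timeout keeps a stalled upstream from hanging your application indefinitely. requests waits without limit by default; the (connect, read) tuple below bounds both phases, with illustrative default values:

```python
import requests


def post_with_timeout(url, headers, data, connect_timeout=10, read_timeout=120):
    """POST with explicit connect/read timeouts.

    A stalled upstream then surfaces as a local requests.Timeout that
    your retry logic can handle, instead of a request that never returns.
    """
    return requests.post(
        url,
        headers=headers,
        json=data,
        timeout=(connect_timeout, read_timeout),
    )
```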

524 Connection Timeout

Code Explanation: The server did not complete the request within the allotted time, possibly because the Wisdom Gate channel is congested. Retry the request, and contact the administrator if the error recurs. Common Causes:
  • Server congestion
  • Network latency issues
  • Request processing timeout
How to Fix:
  1. Retry the request after a delay
  2. Check for service congestion
  3. Reduce request size or complexity
  4. Contact administrator if multiple errors occur
Example:
{
  "error": {
    "message": "Connection timeout",
    "type": "timeout_error",
    "code": "connection_timeout"
  }
}

Best Practices for Error Handling

1. Implement Comprehensive Error Handling

Always handle errors gracefully in your application:
import requests

def handle_api_request(url, headers, data):
    try:
        response = requests.post(url, headers=headers, json=data)
        response.raise_for_status()
        return response.json()
    except requests.exceptions.HTTPError as e:
        if e.response.status_code == 429:
            # Handle rate limiting
            return handle_rate_limit(e.response)
        elif e.response.status_code == 401:
            # Handle authentication error
            return handle_auth_error(e.response)
        elif e.response.status_code >= 500:
            # Handle server errors
            return handle_server_error(e.response)
        else:
            # Handle other client errors
            return handle_client_error(e.response)
    except requests.exceptions.RequestException as e:
        # Handle network errors
        return handle_network_error(e)

2. Use Exponential Backoff for Retries

For transient errors (429, 500, 503, 504, 524), implement exponential backoff:
import time
import random

def retry_with_backoff(func, max_retries=5, base_delay=1):
    """Call func(), retrying on failure with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return func()
        except Exception:
            if attempt < max_retries - 1:
                # Jitter spreads retries out so clients don't retry in lockstep
                delay = (base_delay * (2 ** attempt)) + random.random()
                time.sleep(delay)
            else:
                raise

3. Log Errors for Debugging

Always log error details for troubleshooting:
import logging

def log_api_error(error_code, error_message, request_data, response_data):
    # Redact API keys and other sensitive fields before logging
    logging.error(f"API Error: {error_code} - {error_message}")
    logging.error(f"Request: {request_data}")
    logging.error(f"Response: {response_data}")

4. Provide User-Friendly Error Messages

Translate technical error codes into user-friendly messages:
ERROR_MESSAGES = {
    400: "Please check your request format",
    401: "Invalid API key. Please verify your credentials",
    403: "Permission denied. Please check your token permissions",
    404: "Resource not found. Please verify the endpoint URL",
    413: "Request too large. Please reduce the prompt size",
    429: "Rate limit exceeded. Please try again later",
    500: "Server error. Please try again",
    503: "Service unavailable. Please contact support",
    504: "Request timeout. Please try again",
    524: "Connection timeout. Please try again"
}
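A lookup helper with a generic fallback keeps unknown status codes from breaking the message mapping. This sketch takes the table as a parameter so it stands alone; in practice you would pass the ERROR_MESSAGES dictionary above:

```python
def user_message(status_code: int, messages: dict) -> str:
    """Look up a user-facing message for a status code, falling back to a
    generic message for codes the table does not cover."""
    return messages.get(status_code, "An unexpected error occurred. Please try again")

# Usage with the ERROR_MESSAGES table:
#   text = user_message(response.status_code, ERROR_MESSAGES)
```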

Error Code Summary Table

Status Code | Error Type          | Retry Recommended | User Action Required
----------- | ------------------- | ----------------- | ----------------------------------
400         | Bad Request         | No                | Fix request format
401         | Invalid Token       | No                | Verify API key
403         | Permission Denied   | No                | Check permissions or contact admin
404         | Not Found           | No                | Verify endpoint URL
413         | Request Too Large   | No                | Reduce request size
429         | Rate Limit          | Yes               | Retry with backoff
500         | Server Error        | Yes               | Retry or contact admin
503         | Service Unavailable | Yes               | Contact admin
504         | Gateway Timeout     | Yes               | Retry
524         | Connection Timeout  | Yes               | Retry

Getting Help

If you continue to encounter errors after following the troubleshooting steps:
  1. Check the Model Catalog for model availability and requirements
  2. Review the API Reference for correct endpoint usage
  3. Contact Support with:
    • Error code and message
    • Request details (without sensitive data)
    • Steps to reproduce
    • Your API key group information (if applicable)