Documentation

Everything you need to integrate TokenSea into your application.

Quick Start

Get up and running in under 5 minutes. TokenSea is fully compatible with the Anthropic API and OpenAI SDKs.

1. Get your API key

Sign up at TokenSea and create an API key from the dashboard. Your key starts with tsk-.

2. Configure your client

Point your Anthropic SDK to the TokenSea endpoint:

from anthropic import Anthropic

client = Anthropic(
    api_key="tsk-your-key-here",
    base_url="https://api.tokensea.dev/v1"
)
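
TokenSea also works with the OpenAI SDK, since it exposes an OpenAI-compatible endpoint (see Chat Completions below). A minimal sketch, assuming the openai package is installed:

from openai import OpenAI

# Point the OpenAI SDK at TokenSea's OpenAI-compatible endpoint.
openai_client = OpenAI(
    api_key="tsk-your-key-here",
    base_url="https://api.tokensea.dev/v1"
)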

3. Make your first request

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, TokenSea!"}
    ]
)
print(message.content)

That's it! TokenSea handles routing, failover, and billing automatically. You get the same Anthropic API experience with added reliability.

Authentication

All API requests require authentication via your API key. Include it in the Authorization header as a Bearer token, or use the x-api-key header (Anthropic style).

# OpenAI-style Bearer token
curl https://api.tokensea.dev/v1/chat/completions \
  -H "Authorization: Bearer tsk-your-key-here" \
  -H "Content-Type: application/json"

# Anthropic-style x-api-key
curl https://api.tokensea.dev/v1/messages \
  -H "x-api-key: tsk-your-key-here" \
  -H "Content-Type: application/json"
Keep your keys secret! Never expose your API key in client-side code. Always use environment variables or a secure secrets manager.
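
For example, a minimal sketch of loading the key from an environment variable (TOKENSEA_API_KEY is just an illustrative name, not something TokenSea requires):

import os
from anthropic import Anthropic

# Read the key from the environment instead of hard-coding it.
client = Anthropic(
    api_key=os.environ["TOKENSEA_API_KEY"],
    base_url="https://api.tokensea.dev/v1"
)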

API Keys

Manage your API keys from the dashboard. Each key has its own quota and rate limits based on your plan.

Property  | Type   | Description
key       | string | Full key shown only once at creation (format: tsk-...)
name      | string | Human-readable label for the key
status    | enum   | active or disabled
quota     | number | Total quota in cents
usedQuota | number | Used quota in cents
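
For illustration only (the exact response shape is not documented here), a key with these properties might be represented as:

{
  "name": "production-backend",
  "status": "active",
  "quota": 50000,
  "usedQuota": 1234
}

Remember that the full key value is shown only once, at creation.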

Chat Completions

OpenAI-compatible endpoint for chat applications. Supports streaming and tool use.

POST /v1/chat/completions

curl https://api.tokensea.dev/v1/chat/completions \
  -H "Authorization: Bearer tsk-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 4096,
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum computing."}
    ]
  }'

Parameters

Parameter   | Type    | Required | Description
model       | string  | Yes      | Model alias (e.g. claude-sonnet-4-20250514)
messages    | array   | Yes      | Array of message objects with role and content
max_tokens  | integer | Yes      | Maximum tokens to generate
stream      | boolean | No       | Enable SSE streaming (default: false)
temperature | float   | No       | Sampling temperature 0-1 (default: 1)
tools       | array   | No       | Tool definitions for function calling
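
The same request with the OpenAI SDK, as a sketch (the client setup mirrors the Quick Start configuration):

from openai import OpenAI

openai_client = OpenAI(
    api_key="tsk-your-key-here",
    base_url="https://api.tokensea.dev/v1"
)

completion = openai_client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing."}
    ]
)
print(completion.choices[0].message.content)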

Messages (Anthropic)

Native Anthropic Messages API endpoint. If you're already using the Anthropic SDK, just change the base_url.

POST /v1/messages

curl https://api.tokensea.dev/v1/messages \
  -H "x-api-key: tsk-your-key-here" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 4096,
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

List Models

Retrieve all available models and their capabilities.

GET /v1/models

curl https://api.tokensea.dev/v1/models \
  -H "Authorization: Bearer tsk-your-key-here"

Response:

{
  "object": "list",
  "data": [
    {
      "id": "claude-sonnet-4-20250514",
      "object": "model",
      "owned_by": "tokensea"
    }
  ]
}
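
Because /v1/models mirrors the OpenAI models endpoint, the OpenAI SDK's models.list() call should work against it; a sketch:

from openai import OpenAI

openai_client = OpenAI(
    api_key="tsk-your-key-here",
    base_url="https://api.tokensea.dev/v1"
)

# Print every model alias available to this key.
for model in openai_client.models.list():
    print(model.id)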

Streaming

Both endpoints support Server-Sent Events (SSE) streaming. Set stream: true in your request.

stream = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    stream=True,
    messages=[{"role": "user", "content": "Write a poem."}]
)

# Text arrives as content_block_delta events.
for event in stream:
    if event.type == "content_block_delta":
        print(event.delta.text, end="", flush=True)
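
On the /v1/chat/completions side, streaming with the OpenAI SDK follows the same pattern; a sketch:

from openai import OpenAI

openai_client = OpenAI(
    api_key="tsk-your-key-here",
    base_url="https://api.tokensea.dev/v1"
)

stream = openai_client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    stream=True,
    messages=[{"role": "user", "content": "Write a poem."}]
)

# Each chunk carries an incremental content delta.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)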

Protocol Conversion

TokenSea automatically converts between OpenAI and Anthropic API formats. This means you can:

  • Use the OpenAI SDK with Anthropic models via /v1/chat/completions
  • Use the Anthropic SDK with OpenAI models via /v1/messages
  • Mix models from different providers in the same application

Automatic routing: TokenSea maps model names and request formats automatically. You don't need to worry about which provider a model uses — just specify the model alias (see the sketch below).
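
For example, assuming your key has access to both a Claude alias and an OpenAI alias (the gpt-4o alias below is hypothetical; check /v1/models for the aliases actually available to you), one client can call both:

from openai import OpenAI

openai_client = OpenAI(
    api_key="tsk-your-key-here",
    base_url="https://api.tokensea.dev/v1"
)

# TokenSea converts each request into whatever format the upstream provider expects.
for model in ["claude-sonnet-4-20250514", "gpt-4o"]:  # "gpt-4o" is a hypothetical alias
    reply = openai_client.chat.completions.create(
        model=model,
        max_tokens=256,
        messages=[{"role": "user", "content": "Say hello."}]
    )
    print(model, "->", reply.choices[0].message.content)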

Rate Limits

Rate limits are applied per API key based on your plan tier:

Plan       | QPS    | RPM    | TPM
Free       | 2      | 20     | 40,000
Starter    | 10     | 120    | 200,000
Pro        | 30     | 600    | 2,000,000
Enterprise | Custom | Custom | Custom

When rate limits are exceeded, the API returns a 429 Too Many Requests response with a Retry-After header.
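
A minimal retry sketch that honors Retry-After (the requests library and the helper name here are illustrative, not part of TokenSea):

import time
import requests

def post_with_retry(url, headers, payload, max_attempts=5):
    """POST to TokenSea, backing off whenever a 429 is returned."""
    for attempt in range(max_attempts):
        resp = requests.post(url, headers=headers, json=payload, timeout=60)
        if resp.status_code != 429:
            return resp
        # Honor Retry-After when it is a number of seconds, else back off exponentially.
        retry_after = resp.headers.get("Retry-After", "")
        wait = float(retry_after) if retry_after.replace(".", "", 1).isdigit() else 2 ** attempt
        time.sleep(wait)
    return resp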

Billing & Quota

TokenSea uses a quota-based billing system. Your quota is measured in cents (1/100 CNY) and deducted in real-time based on token usage.

  • Input tokens are billed at the model's input rate
  • Output tokens are billed at the model's output rate
  • Quota is deducted after each request completes
  • Streaming requests are billed once the stream ends
  • Failed requests (5xx errors) are not billed
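
As a rough illustration of how a request's cost is computed (the per-token rates below are made up, not TokenSea's actual pricing):

# Hypothetical rates in cents per 1,000 tokens -- not TokenSea's real pricing.
input_rate = 0.3
output_rate = 1.5

input_tokens = 1_200
output_tokens = 800

# Quota is measured in cents and deducted after the request completes.
cost_cents = input_tokens / 1000 * input_rate + output_tokens / 1000 * output_rate
print(f"Deducted from quota: {cost_cents:.2f} cents")  # 1.56 cents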

Check your quota and usage from the dashboard, or use the user API:

GET /api/user/self

Error Codes

Code              | HTTP | Description
INVALID_API_KEY   | 401  | The API key is invalid or revoked
QUOTA_EXCEEDED    | 402  | Insufficient quota to process request
FORBIDDEN         | 403  | You don't have access to this resource
MODEL_NOT_FOUND   | 404  | The requested model is not available
RATE_LIMITED      | 429  | Rate limit exceeded, retry after the indicated time
UPSTREAM_ERROR    | 502  | The upstream provider returned an error
NO_AVAILABLE_NODE | 503  | All upstream nodes are unavailable
INTERNAL_ERROR    | 500  | An unexpected server error occurred
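
When using the Anthropic SDK against TokenSea, these surface as the SDK's standard exceptions; a minimal handling sketch:

import anthropic

client = anthropic.Anthropic(
    api_key="tsk-your-key-here",
    base_url="https://api.tokensea.dev/v1"
)

try:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello!"}]
    )
except anthropic.RateLimitError:
    # 429 RATE_LIMITED: wait for the Retry-After interval, then retry.
    pass
except anthropic.APIStatusError as err:
    # Other non-2xx responses, e.g. 402 QUOTA_EXCEEDED or 502 UPSTREAM_ERROR.
    print(err.status_code, err)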