Documentation
Everything you need to integrate TokenSea into your application.
Quick Start
Get up and running in under 5 minutes. TokenSea is fully compatible with both the Anthropic and OpenAI APIs, so existing SDKs work unchanged.
1. Get your API key
Sign up at TokenSea and create an API key from the dashboard. Your key starts with tsk-.
2. Configure your client
Point your Anthropic SDK to the TokenSea endpoint:
from anthropic import Anthropic

client = Anthropic(
    api_key="tsk-your-key-here",
    base_url="https://api.tokensea.dev/v1",
)
3. Make your first request
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, TokenSea!"}
    ],
)
print(message.content)
Authentication
All API requests require authentication via your API key. Include it in the Authorization header as a Bearer token (OpenAI style), or in the x-api-key header (Anthropic style).
# OpenAI-style Bearer token
curl https://api.tokensea.dev/v1/chat/completions \
  -H "Authorization: Bearer tsk-your-key-here" \
  -H "Content-Type: application/json"

# Anthropic-style x-api-key
curl https://api.tokensea.dev/v1/messages \
  -H "x-api-key: tsk-your-key-here" \
  -H "Content-Type: application/json"
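In Python, the same two header styles are just plain dicts; this is a sketch, not a TokenSea helper:

```python
API_KEY = "tsk-your-key-here"

# OpenAI-style Bearer token
bearer_headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Anthropic-style x-api-key
anthropic_headers = {
    "x-api-key": API_KEY,
    "Content-Type": "application/json",
}
```

Pick one style per request; there is no need to send both headers.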
API Keys
Manage your API keys from the dashboard. Each key has its own quota and rate limits based on your plan.
| Property | Type | Description |
|---|---|---|
| key | string | Full key shown only once at creation (format: tsk-...) |
| name | string | Human-readable label for the key |
| status | enum | active or disabled |
| quota | number | Total quota in cents |
| usedQuota | number | Used quota in cents |
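A small sketch of working with these fields. The helper below is our own, assuming a key object carrying the quota and usedQuota fields from the table (both in cents):

```python
# Hypothetical helper: remaining quota from the key fields above.
def remaining_quota_cents(key: dict) -> int:
    """Remaining quota in cents (1/100 CNY), floored at zero."""
    return max(key["quota"] - key["usedQuota"], 0)

key = {"name": "prod", "status": "active", "quota": 10_000, "usedQuota": 2_550}
print(remaining_quota_cents(key))                      # 7450
print(f"{remaining_quota_cents(key) / 100:.2f} CNY")   # 74.50 CNY
```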
Chat Completions
OpenAI-compatible endpoint for chat applications. Supports streaming and tool use.
POST /v1/chat/completions
curl https://api.tokensea.dev/v1/chat/completions \
  -H "Authorization: Bearer tsk-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 4096,
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum computing."}
    ]
  }'
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model alias (e.g. claude-sonnet-4-20250514) |
| messages | array | Yes | Array of message objects with role and content |
| max_tokens | integer | Yes | Maximum tokens to generate |
| stream | boolean | No | Enable SSE streaming (default: false) |
| temperature | float | No | Sampling temperature 0-1 (default: 1) |
| tools | array | No | Tool definitions for function calling |
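Since tools takes OpenAI-format function definitions, a request body using it might look like the sketch below; the get_weather tool is invented for illustration:

```python
import json

# Sketch of a chat completions body using the optional `tools` parameter.
# The get_weather schema is a made-up example of the OpenAI function format.
body = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
print(json.dumps(body, indent=2))
```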
Messages (Anthropic)
Native Anthropic Messages API endpoint. If you're already using the Anthropic SDK, just change the base_url.
POST /v1/messages
curl https://api.tokensea.dev/v1/messages \
  -H "x-api-key: tsk-your-key-here" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 4096,
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
List Models
Retrieve all available models and their capabilities.
GET /v1/models
curl https://api.tokensea.dev/v1/models \
  -H "Authorization: Bearer tsk-your-key-here"
Response:
{
  "object": "list",
  "data": [
    {
      "id": "claude-sonnet-4-20250514",
      "object": "model",
      "owned_by": "tokensea"
    }
  ]
}
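The response is plain JSON, so extracting model ids needs only the standard library. The snippet below parses the example response shown above:

```python
import json

# The example /v1/models response from this page, inlined as a string.
response_json = """
{
  "object": "list",
  "data": [
    {"id": "claude-sonnet-4-20250514", "object": "model", "owned_by": "tokensea"}
  ]
}
"""

# Pull out just the model ids from the "data" array.
models = [m["id"] for m in json.loads(response_json)["data"]]
print(models)  # ['claude-sonnet-4-20250514']
```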
Streaming
Both endpoints support Server-Sent Events (SSE) streaming. Set stream: true in your request.
stream = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    stream=True,
    messages=[{"role": "user", "content": "Write a poem."}],
)
for event in stream:
    # Only content_block_delta events carry text; other event types
    # (message_start, message_stop, ...) have no delta payload.
    if event.type == "content_block_delta":
        print(event.delta.text, end="", flush=True)
Protocol Conversion
TokenSea automatically converts between OpenAI and Anthropic API formats. This means you can:
- Use the OpenAI SDK with Anthropic models via /v1/chat/completions
- Use the Anthropic SDK with OpenAI models via /v1/messages
- Mix models from different providers in the same application
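As a stdlib-only sketch of the first case (an Anthropic model behind the OpenAI-format endpoint) — the build_chat_request helper is our own, not part of TokenSea:

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST to the OpenAI-format endpoint for any model TokenSea serves."""
    body = json.dumps({
        "model": model,
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://api.tokensea.dev/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# An Anthropic model name goes through the OpenAI-shaped request unchanged.
req = build_chat_request("tsk-your-key-here", "claude-sonnet-4-20250514", "Hi!")
# To actually send it: urllib.request.urlopen(req)
```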
Rate Limits
Rate limits are applied per API key based on your plan tier:
| Plan | QPS | RPM | TPM |
|---|---|---|---|
| Free | 2 | 20 | 40,000 |
| Starter | 10 | 120 | 200,000 |
| Pro | 30 | 600 | 2,000,000 |
| Enterprise | Custom | Custom | Custom |
When rate limits are exceeded, the API returns a 429 Too Many Requests response with a Retry-After header.
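A client-side retry loop honoring that Retry-After header might look like the following sketch; the send callable is a stand-in for whatever HTTP client you use, returning (status, headers, body):

```python
import time

def with_retries(send, max_attempts: int = 5):
    """Call send(); on a 429, wait per Retry-After (or exponential backoff) and retry."""
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, headers, body
        # Prefer the server's hint; fall back to exponential backoff.
        delay = float(headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError(f"rate limited after {max_attempts} attempts")

# Simulated demo: first response is a 429, the retry succeeds.
responses = iter([(429, {"Retry-After": "0"}, ""), (200, {}, "ok")])
status, _, body = with_retries(lambda: next(responses))
print(status, body)  # 200 ok
```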
Billing & Quota
TokenSea uses a quota-based billing system. Your quota is measured in cents (1/100 CNY) and deducted in real-time based on token usage.
- Input tokens are billed at the model's input rate
- Output tokens are billed at the model's output rate
- Quota is deducted after each request completes
- Streaming requests are billed once the stream ends
- Failed requests (5xx errors) are not billed
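As a worked example of the quota arithmetic above — the per-token rates below are invented for illustration; real rates vary by model and are listed in your dashboard:

```python
# Hypothetical rates, in cents per 1,000 tokens. Not real TokenSea pricing.
INPUT_RATE_CENTS_PER_1K = 0.3
OUTPUT_RATE_CENTS_PER_1K = 1.5

def request_cost_cents(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request in cents: input and output billed at their own rates."""
    return (input_tokens / 1000) * INPUT_RATE_CENTS_PER_1K \
         + (output_tokens / 1000) * OUTPUT_RATE_CENTS_PER_1K

# 2,000 input + 1,000 output tokens: 0.6 + 1.5 = 2.1 cents deducted.
print(request_cost_cents(2000, 1000))  # 2.1
```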
Check your quota and usage from the dashboard, or use the user API:
GET /api/user/self
Error Codes
| Code | HTTP | Description |
|---|---|---|
| INVALID_API_KEY | 401 | The API key is invalid or revoked |
| QUOTA_EXCEEDED | 402 | Insufficient quota to process request |
| FORBIDDEN | 403 | You don't have access to this resource |
| MODEL_NOT_FOUND | 404 | The requested model is not available |
| RATE_LIMITED | 429 | Rate limit exceeded, retry after the indicated time |
| UPSTREAM_ERROR | 502 | The upstream provider returned an error |
| NO_AVAILABLE_NODE | 503 | All upstream nodes are unavailable |
| INTERNAL_ERROR | 500 | An unexpected server error occurred |
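One common use of this table is deciding which failures are worth retrying. A minimal sketch — the transient set below is our judgment call based on the table, not a TokenSea guarantee:

```python
# Treat rate limiting and server-side failures as transient; auth, quota,
# and model errors will not succeed on retry.
TRANSIENT = {429, 500, 502, 503}  # RATE_LIMITED, INTERNAL_ERROR, UPSTREAM_ERROR, NO_AVAILABLE_NODE

def should_retry(http_status: int) -> bool:
    """Retry transient failures; surface everything else to the caller."""
    return http_status in TRANSIENT

print([code for code in (401, 402, 429, 502) if should_retry(code)])  # [429, 502]
```

Combine this with a backoff loop (see Rate Limits) and remember that failed 5xx requests are not billed.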
