One HTTP endpoint. Every frontier model. Billed in dollars.

Ship agents without juggling SDKs, usage tiers, or failover logic. WPIC routes each call to the best model for the task, surfaces transparent per-call cost, and hands you Anthropic-style errors.

curl
curl https://studio.wpic.ai/v1/generate/text \
  -H "Authorization: Bearer $WPIC_API_KEY" \
  -H "Idempotency-Key: $(uuidgen)" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Write a short, punchy tagline for a Berlin coffee roaster.",
    "options": { "maxTokens": 64 }
  }'

Scoped keys

Per-endpoint scopes (generate:text, generate:image, billing:read, ...). Revoke any time. SHA-256 hashed at rest.
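Client-side, a key's scopes are just strings it either carries or doesn't. A minimal sketch of the two ideas above — hash-at-rest and per-endpoint scope checks — in Python (storage details beyond SHA-256 are assumptions, and `allows` is a hypothetical helper, not WPIC's SDK):

```python
import hashlib

def key_fingerprint(api_key: str) -> str:
    """Keys are stored only as SHA-256 digests, so a leaked database
    never reveals plaintext keys. Compare digests, not raw keys."""
    return hashlib.sha256(api_key.encode()).hexdigest()

def allows(key_scopes: set[str], endpoint_scope: str) -> bool:
    """A key may call an endpoint only if its scope list covers it,
    e.g. generate:text for the text endpoint."""
    return endpoint_scope in key_scopes
```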

Idempotency built-in

Every mutating call takes an Idempotency-Key; responses cached 24h. Same key → same response, never double-charged.
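The retry pattern this enables can be sketched client-side. Generate the key once, reuse it on every attempt, and the server's 24h cache deduplicates any double-send (Python; the `send` callable and its failure mode are placeholders, not WPIC's SDK):

```python
import uuid

def call_with_retries(send, max_attempts=3):
    """Retry a mutating call, reusing one Idempotency-Key so a
    network blip can never produce a second charge."""
    key = str(uuid.uuid4())  # generated once, reused on every attempt
    for attempt in range(max_attempts):
        try:
            return send(idempotency_key=key)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
```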

Rate limits in headers

X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset on every response; a 429 carries Retry-After when you're blocked. Sliding window, scoped per API key.
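A minimal backoff helper over those headers might look like this (a sketch: it assumes X-RateLimit-Reset is a Unix timestamp and Retry-After is in seconds — confirm both against the WPIC reference):

```python
def backoff_seconds(status: int, headers: dict, now: float) -> float:
    """Decide how long to wait before the next request, driven
    entirely by the rate-limit headers on the last response."""
    if status == 429:
        # Blocked: the server tells us exactly how long to back off.
        return float(headers.get("Retry-After", 1))
    if int(headers.get("X-RateLimit-Remaining", 1)) == 0:
        # Budget exhausted: sleep until the window resets.
        return max(0.0, float(headers["X-RateLimit-Reset"]) - now)
    return 0.0  # budget left; proceed immediately
```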

Router, not proxy

Send a task; we pick Claude, GPT-5, FLUX, or a fallback. You don't chase deprecations.

Streaming text

Standard SSE. Billed on tokens actually delivered, not the maxTokens you reserved.
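Because the framing is standard SSE, consuming it needs nothing vendor-specific — `data:` fields, blank-line event boundaries. A minimal parser sketch (the payload inside each event is whatever the endpoint documents, which this sketch doesn't assume):

```python
def iter_sse_data(lines):
    """Yield the data payload of each SSE event from an iterable of
    text lines, per standard SSE framing: 'data:' fields accumulate,
    a blank line ends the event."""
    buf = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            buf.append(line[5:].lstrip())
        elif line == "" and buf:
            yield "\n".join(buf)  # multi-line data joins with \n
            buf = []
```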

Webhooks

generation.completed, balance.low, subscription.renewed. HMAC signed, dual-secret rotation supported.
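Dual-secret rotation means your verifier checks the signature against every currently active secret, so deliveries keep validating while you roll keys. A sketch assuming hex-encoded HMAC-SHA256 over the raw request body (check WPIC's webhook docs for the exact signing scheme and header name):

```python
import hashlib
import hmac

def verify_signature(payload: bytes, signature: str, secrets: list) -> bool:
    """Accept a webhook if its signature matches under ANY active
    secret; during rotation both old and new secrets are live."""
    for secret in secrets:
        expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
        # Constant-time comparison to avoid timing side channels.
        if hmac.compare_digest(expected, signature):
            return True
    return False
```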

Error shape

Anthropic-style JSON. Match on error.type; enumerated values documented per endpoint.

{
  "error": {
    "type": "insufficient_balance",
    "message": "Organization balance of $1.20 cannot cover estimated call cost of $2.50.",
    "param": null,
    "request_id": "req_abc123"
  }
}
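Matching on error.type then stays a flat dispatch. A sketch in Python — the retryable set here is illustrative, not WPIC's documented enumeration, which the per-endpoint docs define:

```python
import json

RETRYABLE = {"rate_limit_error", "overloaded_error"}  # illustrative set

def classify(body: str) -> str:
    """Map an error response body to a client action by matching
    on error.type, as the error shape above supports."""
    err = json.loads(body)["error"]
    if err["type"] in RETRYABLE:
        return "retry"
    if err["type"] == "insufficient_balance":
        return "top_up"
    return "fail"  # non-retryable: surface err["message"] and request_id
```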