Phase 1 — Foundation (3/4)
Create server-side API endpoints that proxy requests to AI providers. The server acts as a stateless proxy — API keys pass through but are never stored or logged.
Scope
POST /api/ai/validate
Validates an API key by making a lightweight test call to the provider.
Request:
{
  "provider": "openai",
  "apiKey": "sk-..."
}
Response:
{ "valid": true }
// or
{ "valid": false, "error": "Invalid API key" }
Validation methods per provider:
| Provider | Method | Endpoint |
| --- | --- | --- |
| OpenAI | GET models list | /v1/models |
| Anthropic | POST minimal message (max_tokens = 1) | /v1/messages |
| Google | GET models list | /v1beta/models?key={key} |
| Groq | GET models list | /openai/v1/models |
| OpenRouter | GET models list | /api/v1/models |
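The per-provider validation calls above could be built as follows. This is a sketch, not part of the spec: the base URLs are taken from each provider's public API, and the Anthropic model name and version header are assumptions.

```typescript
// Sketch: map each provider to the lightweight request used for key validation.
type Provider = "openai" | "anthropic" | "google" | "groq" | "openrouter";

interface ValidationRequest {
  method: "GET" | "POST";
  url: string;
  headers: Record<string, string>;
  body?: string;
}

function buildValidationRequest(provider: Provider, apiKey: string): ValidationRequest {
  switch (provider) {
    case "openai":
      return { method: "GET", url: "https://api.openai.com/v1/models",
               headers: { Authorization: `Bearer ${apiKey}` } };
    case "anthropic":
      // Anthropic has no cheap list endpoint, so send a 1-token message.
      // Model name and version header are assumptions.
      return { method: "POST", url: "https://api.anthropic.com/v1/messages",
               headers: { "x-api-key": apiKey, "anthropic-version": "2023-06-01",
                          "content-type": "application/json" },
               body: JSON.stringify({ model: "claude-3-5-haiku-latest", max_tokens: 1,
                                      messages: [{ role: "user", content: "hi" }] }) };
    case "google":
      // Google passes the key as a query parameter rather than a header.
      return { method: "GET",
               url: `https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`,
               headers: {} };
    case "groq":
      return { method: "GET", url: "https://api.groq.com/openai/v1/models",
               headers: { Authorization: `Bearer ${apiKey}` } };
    case "openrouter":
      return { method: "GET", url: "https://openrouter.ai/api/v1/models",
               headers: { Authorization: `Bearer ${apiKey}` } };
  }
}
```

A 2xx from the provider maps to `{ "valid": true }`; a 401/403 maps to `{ "valid": false, "error": "Invalid API key" }`.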
POST /api/ai/generate
Proxies a text generation request to the selected AI provider.
Request:
{
  "provider": "openai",
  "apiKey": "sk-...",
  "model": "gpt-4o-mini",
  "prompt": "Generate 3 OG image titles for...",
  "systemPrompt": "You are a copywriting assistant...",
  "maxTokens": 300,
  "temperature": 0.8
}
Response:
{
  "content": "...",
  "model": "gpt-4o-mini",
  "usage": { "inputTokens": 45, "outputTokens": 120 }
}
The endpoint must:
- Validate request body (provider, apiKey, model, prompt are required)
- Build the correct request format per provider (OpenAI-style vs Anthropic vs Google)
- Forward to the provider's API
- Normalize the response into a common format
- Return appropriate error codes (400 for bad request, 401 for invalid key, 502 for provider error)
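The normalization step can be sketched like this. Field names follow the public response shapes of the OpenAI chat, Anthropic Messages, and Gemini REST APIs; treat them as assumptions to verify against current provider docs.

```typescript
// Sketch: collapse provider-specific response shapes into the common format.
interface NormalizedResponse {
  content: string;
  model: string;
  usage: { inputTokens: number; outputTokens: number };
}

function normalizeResponse(provider: string, raw: any): NormalizedResponse {
  switch (provider) {
    case "anthropic":
      return {
        content: raw.content[0].text,
        model: raw.model,
        usage: { inputTokens: raw.usage.input_tokens,
                 outputTokens: raw.usage.output_tokens },
      };
    case "google":
      return {
        content: raw.candidates[0].content.parts[0].text,
        model: raw.modelVersion ?? "unknown",
        usage: { inputTokens: raw.usageMetadata?.promptTokenCount ?? 0,
                 outputTokens: raw.usageMetadata?.candidatesTokenCount ?? 0 },
      };
    default:
      // OpenAI, Groq, and OpenRouter all use the OpenAI chat completion format.
      return {
        content: raw.choices[0].message.content,
        model: raw.model,
        usage: { inputTokens: raw.usage.prompt_tokens,
                 outputTokens: raw.usage.completion_tokens },
      };
  }
}
```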
Security
- Never log or store the apiKey field
- Never send API keys to analytics/telemetry
- Keys exist only in memory during the request lifecycle
- CORS headers should match existing OGCOPS API config
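One way to enforce the logging rule is to strip the key from any request body before it reaches a logger. A minimal sketch (the helper name is hypothetical):

```typescript
// Sketch: redact apiKey so the raw body with the key never reaches a logger.
function redactForLogging(body: Record<string, unknown>): Record<string, unknown> {
  // Copy everything except apiKey; replace it with a placeholder if present.
  const { apiKey, ...rest } = body;
  return apiKey === undefined ? rest : { ...rest, apiKey: "[REDACTED]" };
}
```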
Error Handling
- Provider API down → 502 with { "error": "Provider unavailable" }
- Invalid API key → 401 with { "error": "Invalid API key", "valid": false }
- Rate limited → 429 with { "error": "Rate limited by provider" }
- Bad request body → 400 with validation error details
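The mapping above can be implemented as a small translation of the provider's HTTP status into the proxy's own status code. Treating 403 as an invalid key is an assumption; some providers use it for revoked or under-scoped keys.

```typescript
// Sketch: translate a provider's HTTP status into the proxy's response status.
function mapProviderStatus(providerStatus: number): number {
  if (providerStatus === 401 || providerStatus === 403) return 401; // invalid key
  if (providerStatus === 429) return 429; // rate limited by provider
  return 502; // anything else from the provider is a provider error
}
```

Bad request bodies never reach this path; they fail validation and return 400 before the provider is called.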
Acceptance Criteria
- /api/ai/validate correctly validates keys for all 5 providers
- /api/ai/generate correctly proxies requests to all 5 providers