Summary
Add an OpenAI decomposer to the LLM gateway alongside the existing Anthropic one. The gateway architecture already supports multiple providers — `internal/gateway/anthropic/` is self-contained with a clean decomposer pattern. This is implementation work, not design work.
Scope
- New package: `internal/gateway/openai/`
- Decompose OpenAI chat completion requests into Keep calls:
  - `llm.request` — model, token estimate, system message summary
  - `llm.tool_result` — tool role messages (function call results)
  - `llm.tool_use` — assistant tool_calls in responses
  - `llm.response` — completion summary (finish_reason, tool_call count)
- Support both regular and streaming (`stream: true` with SSE) responses
- Mutation patching for redact actions (same pattern as Anthropic)
- Gateway config: `provider: openai`
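To make the request-side mapping concrete, here is a minimal sketch of the decomposition. The struct shapes and the `decomposeRequest` helper are hypothetical simplifications — the real package in `internal/gateway/openai/` would emit actual Keep calls with payloads, and would also cover `llm.tool_use`/`llm.response` on the response side.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical, pared-down request shapes for illustration only.
type toolCall struct {
	ID   string `json:"id"`
	Type string `json:"type"`
}

type chatMessage struct {
	Role      string     `json:"role"`
	Content   string     `json:"content,omitempty"`
	ToolCalls []toolCall `json:"tool_calls,omitempty"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

// decomposeRequest returns the Keep call names a request body would
// produce: one llm.request for the call as a whole, plus one
// llm.tool_result per tool-role message.
func decomposeRequest(body []byte) ([]string, error) {
	var req chatRequest
	if err := json.Unmarshal(body, &req); err != nil {
		return nil, err
	}
	calls := []string{"llm.request"}
	for _, m := range req.Messages {
		if m.Role == "tool" {
			calls = append(calls, "llm.tool_result")
		}
	}
	return calls, nil
}

func main() {
	body := []byte(`{"model":"gpt-4o","messages":[
		{"role":"system","content":"be terse"},
		{"role":"user","content":"hi"},
		{"role":"tool","content":"{\"ok\":true}"}
	]}`)
	calls, err := decomposeRequest(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(calls) // [llm.request llm.tool_result]
}
```

The key difference from the Anthropic decomposer is that tool results arrive as `role: "tool"` messages rather than content blocks, so the walk is over `messages[]` roles, not block types.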
OpenAI-specific considerations
- Chat completions use `messages[]` with role-based content (system, user, assistant, tool)
- Tool calls are in `tool_calls[]` on assistant messages, not content blocks
- Streaming uses a `data: [DONE]` sentinel vs Anthropic's event types
- Function calling has both the legacy `function_call` and the modern `tool_calls` — support modern only
Out of scope
- OpenAI Responses API (different shape entirely — future issue if needed)
- Assistants API / threads
References
- Existing Anthropic decomposer: `internal/gateway/anthropic/`
- Gateway proxy: `internal/gateway/proxy.go`
- PRD R-3: "OpenAI-compatible as a fast follow"
- `gateway/config/config.go:54` — placeholder comment