Config Management: Models (OpenAI) #635

@AkhileshNegi

Description

Is your feature request related to a problem?

Currently, all OpenAI models are exposed to the user. Most are irrelevant, and each model family accepts different configuration parameters — reasoning models (the o-series and GPT-5) don't support temperature/top_p at all. Passing unsupported parameters causes 400 Bad Request errors, and there's no way for the UI to know which parameters are valid for a given model.

Describe the solution you'd like

Create a new database table model_config with:

  • model (string) — model ID (e.g., gpt-4o)
  • config (JSON) — accepted parameters with type, default, range, and description
  • provider (string) — the model provider (OpenAI for now)

The API returns only curated models + their config schema. The frontend dynamically renders input fields based on the config JSON — no hardcoded params in the UI.
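As a sketch of what the proposed table could look like — the table and column names come from the issue, but the raw SQLite here is purely illustrative (the real table would be created through the project's ORM/migrations):

```python
import json
import sqlite3

# In-memory DB for illustration only; a real deployment would use a migration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE model_config (
        id       INTEGER PRIMARY KEY,
        model    TEXT NOT NULL UNIQUE,  -- model ID, e.g. "gpt-4o"
        provider TEXT NOT NULL,         -- "OpenAI" for now
        config   TEXT NOT NULL          -- JSON: accepted params + metadata
    )
""")

# Seed one curated model with a minimal config schema.
gpt4o_config = {
    "temperature": {"type": "float", "default": 1.0, "min": 0.0, "max": 2.0},
}
conn.execute(
    "INSERT INTO model_config (model, provider, config) VALUES (?, ?, ?)",
    ("gpt-4o", "OpenAI", json.dumps(gpt4o_config)),
)

# The API layer would read this row and return the parsed config schema.
row = conn.execute(
    "SELECT config FROM model_config WHERE model = ?", ("gpt-4o",)
).fetchone()
print(json.loads(row[0])["temperature"]["max"])  # → 2.0
```

Storing the schema as JSON keeps new models a data change rather than a code change: adding a model is an INSERT, not a frontend release.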

Proposed schema for config JSON

{
  "temperature":        { "type": "float", "default": 1.0, "min": 0.0, "max": 2.0, "description": "Controls randomness. Lower = more deterministic." },
  "top_p":              { "type": "float", "default": 1.0, "min": 0.0, "max": 1.0, "description": "Nucleus sampling. Use either this or temperature, not both." },
  "frequency_penalty":  { "type": "float", "default": 0.0, "min": -2.0, "max": 2.0, "description": "Penalizes tokens proportional to their frequency." },
  "presence_penalty":   { "type": "float", "default": 0.0, "min": -2.0, "max": 2.0, "description": "Penalizes tokens that have appeared at all." },
  "max_completion_tokens": { "type": "int", "default": 4096, "min": 1, "max": 16384, "description": "Max tokens in the response." }
}
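With this schema in hand, the backend can validate a parameter payload from the UI before the request ever reaches OpenAI, turning would-be 400s into actionable messages. A hypothetical helper (not part of the issue) might look like:

```python
def validate_params(config: dict, params: dict) -> list[str]:
    """Check params against a model's config schema.

    Returns a list of error strings; an empty list means params are valid.
    """
    errors = []
    for name, value in params.items():
        spec = config.get(name)
        if spec is None:
            # Key absent from the schema => this model doesn't accept it.
            errors.append(f"unsupported parameter: {name}")
            continue
        if spec["type"] == "float" and not isinstance(value, (int, float)):
            errors.append(f"{name} must be a number")
            continue
        if spec["type"] == "int" and not isinstance(value, int):
            errors.append(f"{name} must be an integer")
            continue
        if "min" in spec and value < spec["min"]:
            errors.append(f"{name} below minimum {spec['min']}")
        if "max" in spec and value > spec["max"]:
            errors.append(f"{name} above maximum {spec['max']}")
    return errors

config = {"temperature": {"type": "float", "default": 1.0, "min": 0.0, "max": 2.0}}
print(validate_params(config, {"temperature": 3.5}))  # out of range
print(validate_params(config, {"top_p": 0.9}))        # not in this model's schema
```

The same function works for every model because the rules live in the data, not in per-model code paths.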

For reasoning models, config would look like:

{
  "reasoning_effort":      { "type": "enum", "default": "medium", "options": ["low", "medium", "high"], "description": "How long the model spends reasoning. Higher = better but slower." },
  "max_completion_tokens": { "type": "int", "default": 4096, "min": 1, "max": 100000, "description": "Max tokens in the response (includes reasoning tokens)." }
}
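Because the reasoning-model config simply omits temperature/top_p, a schema-driven validator rejects them with no model-specific branching; only the enum type needs an extra check. Again a hypothetical sketch:

```python
def check_enum(spec: dict, value) -> bool:
    """True if value is allowed by the spec (only enum specs restrict values here)."""
    return spec["type"] != "enum" or value in spec["options"]

reasoning_config = {
    "reasoning_effort": {
        "type": "enum", "default": "medium",
        "options": ["low", "medium", "high"],
    },
}
spec = reasoning_config["reasoning_effort"]
print(check_enum(spec, "high"))     # allowed option
print(check_enum(spec, "extreme"))  # rejected option
# "temperature" is simply absent from reasoning_config, so a schema-driven
# validator flags it as unsupported before any API call is made.
print("temperature" in reasoning_config)  # False
```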

Models: gpt-4o, gpt-4o-mini, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o4-mini, o3-mini, gpt-5, gpt-5-mini

Metadata

Labels

enhancement (New feature or request)

Status

Closed
