
Model Config: Add model configuration table and API endpoints#669

Merged
vprashrex merged 13 commits into main from feature/model-config
Apr 15, 2026

Conversation

@vprashrex
Collaborator

@vprashrex vprashrex commented Mar 12, 2026

Summary

Target issue is #635

Currently, there’s no structured way to expose model capabilities and provider support from the kaapi-backend. All models are shown to users even though different models support different parameters. This leads to passing unsupported params (e.g., temperature for GPT-5), causing 400 errors, and the frontend has no way to validate configs.
To fix this, a model_config table is introduced. Example structure:

provider: openai
model: gpt-4o-mini
config (JSON): parameter schema
input_modalities / output_modalities: supported I/O types (e.g., TEXT, IMAGE, AUDIO)
pricing (JSON):
{
  "response": {
    "input_token_cost": ...,
    "output_token_cost": ...
  },
  "batch": {
    "input_token_cost": ...,
    "output_token_cost": ...
  },
  "audio": {}
}
is_active

Now, the backend returns only curated models with their config schema.
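For illustration, the table described above can be sketched with stdlib sqlite3 (the PR itself uses SQLModel on Postgres; the column set follows the summary, and the pricing numbers are invented placeholders):

```python
import json
import sqlite3

# Minimal sketch of the model_config table; sqlite3 stands in for Postgres.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE model_config (
        id INTEGER PRIMARY KEY,
        provider TEXT NOT NULL,
        model_name TEXT NOT NULL,
        config TEXT NOT NULL,            -- JSON parameter schema
        input_modalities TEXT NOT NULL,  -- JSON list, e.g. ["TEXT", "IMAGE"]
        output_modalities TEXT NOT NULL,
        pricing TEXT NOT NULL,           -- JSON cost structure
        is_active INTEGER NOT NULL DEFAULT 1,
        UNIQUE (provider, model_name)
    )
    """
)
conn.execute(
    "INSERT INTO model_config (provider, model_name, config, input_modalities,"
    " output_modalities, pricing) VALUES (?, ?, ?, ?, ?, ?)",
    (
        "openai",
        "gpt-4o-mini",
        json.dumps({"temperature": {"type": "number", "max": 2}}),
        json.dumps(["TEXT", "IMAGE"]),
        json.dumps(["TEXT"]),
        # Placeholder costs, not real pricing.
        json.dumps({"response": {"input_token_cost": 0.15, "output_token_cost": 0.6}}),
    ),
)
row = conn.execute(
    "SELECT provider, model_name FROM model_config WHERE is_active = 1"
).fetchone()
print(row)  # ('openai', 'gpt-4o-mini')
```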

Checklist

Before submitting a pull request, please ensure that you complete these tasks.

  • Ran fastapi run --reload app/main.py or docker compose up in the repository root and tested.
  • If you've fixed a bug or added code, ensure it is tested and has test cases.

Notes

Please add here if any other information is required for the reviewer.

Summary by CodeRabbit

  • New Features

    • Added endpoints to list active model configurations (provider filter, pagination) and to retrieve detailed model configuration data (metadata, parameters, modalities, timestamps)
  • Documentation

    • Added API docs for both endpoints with parameters and example JSON responses
  • Tests

    • Added tests for listing/filtering, single-model retrieval, not-found handling, and default-model lookup
  • Chores

    • Added DB migration to create and seed the model configuration table

@coderabbitai

coderabbitai bot commented Mar 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds a model configuration subsystem: DB migration and SQLModel schemas, CRUD helpers, two authenticated FastAPI endpoints to list and fetch model configs (registered in main router), API documentation pages, and tests covering listing, filtering, retrieval, 404, and default-model lookup.

Changes

Cohort / File(s) Summary
Database Schema & Migration
backend/app/models/model_config.py, backend/app/alembic/versions/051_create_model_config_table.py, backend/app/models/__init__.py
New SQLModel types (ModelConfigBase, ModelConfig, ModelConfigPublic, ModelConfigListPublic) with timestamps and constraints; Alembic migration creates global.model_config table, seeds 19 rows, and updates identity sequence; models package exports added classes.
Data Access Layer
backend/app/crud/model_config.py
New CRUD functions: get_active_models() (paginated, optional provider filter), get_model_config() (single active model lookup), and get_default_model_for_type() (fetch default by completion type).
API Routes & Integration
backend/app/api/routes/model_config.py, backend/app/api/main.py
New APIRouter /models exposing GET /models/ and GET /models/{provider}/{model_name} using auth and session deps; routes included in main API router.
API Documentation
backend/app/api/docs/model_config/list_models.md, backend/app/api/docs/model_config/get_model.md
Added docs for list and get endpoints, parameters, response schemas, and example JSON responses.
Tests
backend/app/tests/api/routes/test_model_config.py
New tests for listing, provider filtering with limit, single-model retrieval, not-found 404, and unit test for get_default_model_for_type().
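The CRUD helpers summarized above can be sketched in pure Python, with in-memory records standing in for the SQLModel session (function names follow the PR summary; the record shape is an assumption):

```python
# Hedged sketch of the three CRUD helpers over plain dicts instead of a DB.
def get_active_models(rows, provider=None, skip=0, limit=100):
    # Paginated list of active models, optionally filtered by provider.
    matching = [
        r for r in rows
        if r["is_active"] and (provider is None or r["provider"] == provider)
    ]
    return matching[skip : skip + limit]

def get_model_config(rows, provider, model_name):
    # Single active-model lookup; the route layer turns None into a 404.
    for r in rows:
        if r["is_active"] and r["provider"] == provider and r["model_name"] == model_name:
            return r
    return None

def get_default_model_for_type(rows, completion_type):
    # Fetch the default model for a completion type (e.g. "text").
    for r in rows:
        if r["is_active"] and r.get("default_for") == completion_type:
            return r
    return None

rows = [
    {"provider": "openai", "model_name": "gpt-4o", "is_active": True, "default_for": "text"},
    {"provider": "google", "model_name": "gemini-2.0-flash", "is_active": False},
]
assert get_model_config(rows, "openai", "gpt-4o")["model_name"] == "gpt-4o"
assert get_model_config(rows, "google", "gemini-2.0-flash") is None  # inactive
assert get_default_model_for_type(rows, "text")["provider"] == "openai"
```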

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant API as "FastAPI /models"
  participant CRUD as "crud.model_config"
  participant DB as "Postgres (global.model_config)"

  Client->>API: GET /api/v1/models?provider=...&skip=&limit=
  API->>CRUD: get_active_models(provider, skip, limit)
  CRUD->>DB: SELECT ... FROM global.model_config WHERE is_active [AND provider=...]
  DB-->>CRUD: rows
  CRUD-->>API: list of ModelConfigPublic + count
  API-->>Client: 200 OK (APIResponse{data, count})

  Client->>API: GET /api/v1/models/{provider}/{model_name}
  API->>CRUD: get_model_config(provider, model_name)
  CRUD->>DB: SELECT ... WHERE provider=... AND model_name=... AND is_active
  DB-->>CRUD: row / empty
  CRUD-->>API: ModelConfigPublic / None
  API-->>Client: 200 OK (model) / 404 Not Found

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 Soft paws tap keys in moonlit code,
I plant configs down a tidy road,
Endpoints hop, docs gleam with light,
Tests thump softly through the night,
A happy burrow — functions all in row.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title Check ✅ Passed: the title 'Model Config: Add model configuration table and API endpoints' accurately summarizes the main changes, which include a new model_config table, CRUD operations, API endpoints, models, and comprehensive tests.
  • Description Check ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@vprashrex vprashrex self-assigned this Mar 12, 2026
@vprashrex vprashrex added the enhancement New feature or request label Mar 12, 2026
@vprashrex vprashrex linked an issue Mar 12, 2026 that may be closed by this pull request
@vprashrex vprashrex moved this to In Progress in Kaapi-dev Mar 12, 2026
@codecov

codecov bot commented Mar 12, 2026

Codecov Report

❌ Patch coverage is 99.53052% with 1 line in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
backend/app/crud/model_config.py 97.61% 1 Missing ⚠️

📢 Thoughts on this report? Let us know!


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
backend/app/crud/model_config.py (1)

14-20: Use SQLAlchemy boolean predicates instead of == True.

These filters work, but ModelConfig.is_active == True is the noisy/non-idiomatic form here and is already tripping Ruff. Prefer ModelConfig.is_active.is_(True) in all three queries.

♻️ Proposed fix
     statement = (
         select(ModelConfig)
         .where(
-            ModelConfig.is_active == True,
+            ModelConfig.is_active.is_(True),
             ModelConfig.default_for == completion_type,
         )
         .limit(1)
     )

-    statement = select(ModelConfig).where(ModelConfig.is_active == True)
+    statement = select(ModelConfig).where(ModelConfig.is_active.is_(True))

     statement = select(ModelConfig).where(
         ModelConfig.provider == provider,
         ModelConfig.model_name == model_name,
-        ModelConfig.is_active == True,
+        ModelConfig.is_active.is_(True),
     )

Also applies to: 32-32, 45-49

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/crud/model_config.py` around lines 14 - 20, Replace non-idiomatic
boolean comparisons using "== True" with SQLAlchemy boolean predicates: change
occurrences like ModelConfig.is_active == True to
ModelConfig.is_active.is_(True). Update this in all three query constructions
that reference ModelConfig.is_active (the select where block filtering by
ModelConfig.is_active and ModelConfig.default_for == completion_type and the
other two similar queries), ensuring you use .is_(True) for each boolean filter
while leaving other predicates (e.g., ModelConfig.default_for ==
completion_type) unchanged.
backend/app/api/routes/model_config.py (2)

19-25: Confirm the auth gate here is intentional.

Both handlers depend on AuthContextDep, and backend/app/api/deps.py:38-91 makes that a hard 401/403 gate. If the model catalog needs to be available before login or project selection, this will block it. If auth is intentional, rename the parameter to _auth_context (or suppress ARG001) so the dependency-only use is explicit.

Also applies to: 39-41

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/api/routes/model_config.py` around lines 19 - 25, The handler
list_models (and the other handler at lines 39-41) currently take AuthContextDep
as a required parameter which enforces the hard 401/403 gate from
backend/app/api/deps.py; decide whether the model catalog should be public or
protected and update accordingly: if it must be public, remove or make the auth
dependency optional (so the handlers don't require AuthContextDep), otherwise
keep the dependency but rename the parameter to _auth_context (or add an ARG001
suppression) to make the dependency-only usage explicit; reference the
AuthContextDep type and the list_models function (and the second handler) when
making the change.

26-30: Make count unambiguous for paginated responses.

After applying skip/limit, count=len(models) only reports the current page size. If clients use this field as pagination metadata, it will be misleading; either return the total matching rows or rename/drop the field.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/api/routes/model_config.py` around lines 26 - 30, The code
returns count=len(models) after applying skip/limit (in the route using
get_active_models, APIResponse.success_response, and ModelConfigListPublic),
which makes pagination metadata ambiguous; change the endpoint to return the
total number of matching rows (not just the page length) by adding a separate
count query (e.g., implement/get a get_active_models_count or modify
get_active_models to return (items, total)), then pass that total into
ModelConfigListPublic as count (or rename/drop the field if you prefer
API-breaking behavior); update the route to use the new total value instead of
len(models) so clients receive unambiguous pagination info.
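One way to implement the suggested fix, sketched over in-memory records (in the real code this would be a separate SELECT COUNT(*) query, returning (items, total) from the CRUD layer):

```python
# Return the total number of matching rows alongside the requested page,
# so `count` is unambiguous pagination metadata.
def get_active_models(rows, provider=None, skip=0, limit=100):
    matching = [
        r for r in rows
        if r["is_active"] and (provider is None or r["provider"] == provider)
    ]
    return matching[skip : skip + limit], len(matching)

rows = [{"provider": "openai", "model_name": f"m{i}", "is_active": True} for i in range(5)]
page, total = get_active_models(rows, skip=0, limit=2)
assert len(page) == 2 and total == 5  # count reflects all matches, not the page
```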

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 44029a83-9a0c-41d5-8d67-c62c7e008119

📥 Commits

Reviewing files that changed from the base of the PR and between 48b0a6b and df0e840.

📒 Files selected for processing (8)
  • backend/app/alembic/versions/050_create_model_config_table.py
  • backend/app/api/docs/model_config/get_model.md
  • backend/app/api/docs/model_config/list_models.md
  • backend/app/api/main.py
  • backend/app/api/routes/model_config.py
  • backend/app/crud/model_config.py
  • backend/app/models/__init__.py
  • backend/app/models/model_config.py

depends_on = None


def upgrade():


⚠️ Potential issue | 🟡 Minor

Add return annotations to the Alembic entrypoints.

upgrade and downgrade are new Python functions and currently miss return type hints.

✏️ Proposed fix
-def upgrade():
+def upgrade() -> None:
     op.create_table(
         "model_config",
         ...
     )

-def downgrade():
+def downgrade() -> None:
     op.drop_table("model_config", schema="global")
As per coding guidelines, `**/*.py`: Always add type hints to all function parameters and return values in Python code.

Also applies to: 131-131

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/alembic/versions/050_create_model_config_table.py` at line 20,
Add explicit return type hints to the Alembic entrypoint functions: update the
upgrade() and downgrade() function definitions to include return annotations
(e.g., def upgrade() -> None: and def downgrade() -> None:) so both functions
have explicit return types per the project typing guidelines; ensure the
annotations appear on the existing upgrade and downgrade functions in the
migration module.

Comment on lines +11 to +16
provider: Literal["openai", "google"] = Field(
    default="openai",
    sa_column=sa.Column(
        sa.String, nullable=False, comment="provider name (e.g. openai, google)"
    ),
)

⚠️ Potential issue | 🟠 Major

Encode the allowed domains in the schema, not just the type hints.

Literal only constrains Python-side validation; the backing columns are still plain VARCHAR. A row inserted outside the API can persist unsupported provider or default_for values and leave the API serving data its own model contract says is impossible. Add CheckConstraints here and mirror them in the migration.

🛡️ Proposed fix
 class ModelConfig(ModelConfigBase, table=True):
     __tablename__ = "model_config"
     __table_args__ = (
+        sa.CheckConstraint(
+            "provider IN ('openai', 'google')",
+            name="ck_model_config_provider",
+        ),
+        sa.CheckConstraint(
+            "default_for IS NULL OR default_for IN ('text', 'stt', 'tts')",
+            name="ck_model_config_default_for",
+        ),
         sa.UniqueConstraint("provider", "model_name"),
         {"schema": "global"},
     )

Also applies to: 53-65, 78-83

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/models/model_config.py` around lines 11 - 16, The model schema
uses Python-side Literals (e.g., the provider Field in model_config.py and the
other Literal-backed fields around lines 53-65 and 78-83) but does not enforce
those allowed values at the DB level; add SQL CHECK constraints to the
SQLAlchemy model (e.g., on the provider column and the default_for/other
Literal-backed columns) to restrict values to the same allowed set and update
the Alembic migration to create the matching CHECK constraints so values
inserted outside the application cannot violate the contract; ensure the
constraint names are unique and descriptive and that the model's sa_column
includes sa.CheckConstraint(...) matching the Literal choices.
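The DB-level guard this comment asks for can be demonstrated with stdlib sqlite3 standing in for Postgres (constraint and table names are illustrative):

```python
import sqlite3

# A CHECK constraint rejects provider values outside the allowed set,
# even for rows inserted outside the API.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE model_config (
        provider TEXT NOT NULL
            CONSTRAINT ck_model_config_provider
            CHECK (provider IN ('openai', 'google')),
        model_name TEXT NOT NULL
    )
    """
)
conn.execute("INSERT INTO model_config VALUES ('openai', 'gpt-4o')")  # accepted
try:
    conn.execute("INSERT INTO model_config VALUES ('acme', 'mystery-model')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # constraint blocks the unsupported provider
```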

@Prajna1999 Prajna1999 linked an issue Mar 13, 2026 that may be closed by this pull request
@Prajna1999 Prajna1999 removed this from Kaapi-dev Mar 13, 2026
@vprashrex vprashrex closed this Mar 18, 2026
@Ayush8923 Ayush8923 deleted the feature/model-config branch April 1, 2026 08:56
@vprashrex vprashrex restored the feature/model-config branch April 13, 2026 01:24
@vprashrex vprashrex reopened this Apr 13, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/app/alembic/versions/051_create_model_config_table.py`:
- Around line 1-17: The migration file has the wrong revision identifiers:
update the module-level variables so revision = "051" and down_revision = "050"
(and if present update any matching revision string in the file
header/docstring) to align the migration ordering; modify the revision and
down_revision symbols in the file to those exact values so Alembic picks up the
correct parent migration.
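The fix amounts to correcting the module-level identifiers in the migration file, e.g. (assuming this repo uses plain numeric revision ids, as the comment implies):

```python
# Corrected Alembic revision identifiers for
# 051_create_model_config_table.py (sketch; exact id format may differ).
revision = "051"
down_revision = "050"
branch_labels = None
depends_on = None
```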

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 27a6c79b-1439-45e3-a000-a189cfae5198

📥 Commits

Reviewing files that changed from the base of the PR and between 8a51cf5 and 2b9ffef.

📒 Files selected for processing (1)
  • backend/app/alembic/versions/051_create_model_config_table.py

Comment thread backend/app/alembic/versions/052_create_model_config_table.py

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/app/tests/api/routes/test_model_config.py`:
- Around line 16-20: The tests under
backend/app/tests/api/routes/test_model_config.py rely on seeded DB state;
replace those implicit assumptions by creating deterministic model records via
the project's factories (e.g., ModelFactory or equivalent) inside each test or a
fixture and then assert against those created instances: create at least one
active model before the count/active assertions (targeting the assertions that
currently check body["data"]["count"] and all(m["is_active"] ...)), create
specific model(s) named/typed to represent the openai/gpt-4o and default "text"
model before the tests that assert their presence, and update assertions to
check for the factory-created records rather than global seed data (use the
test's created model IDs/names in the response expectations and ensure cleanup
or transactional fixtures are used).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 29372d38-b949-4f10-aa44-fb8b6b662119

📥 Commits

Reviewing files that changed from the base of the PR and between 2b9ffef and 6e322c9.

📒 Files selected for processing (2)
  • backend/app/alembic/versions/051_create_model_config_table.py
  • backend/app/tests/api/routes/test_model_config.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/app/alembic/versions/051_create_model_config_table.py

Comment on lines +16 to +20
assert response.status_code == 200
body = response.json()
assert body["success"] is True
assert body["data"]["count"] > 0
assert all(m["is_active"] for m in body["data"]["data"])


⚠️ Potential issue | 🟠 Major

Make these tests deterministic with factory-created model records

Lines [19], [47]-[48], and [66]-[68] currently assume seeded data already exists (active models, openai/gpt-4o, and a default "text" model). This can make CI flaky across environments. Please create test data via factories in each test (or shared factory fixtures), then assert against those created records.

Suggested direction (example)
 def test_get_model(
-    client: TestClient, superuser_token_headers: dict[str, str]
+    client: TestClient,
+    superuser_token_headers: dict[str, str],
+    model_config_factory,
 ) -> None:
+    created = model_config_factory(
+        provider="openai",
+        model_name="test-model",
+        is_active=True,
+        default_for="text",
+    )
     response = client.get(
-        f"{settings.API_V1_STR}/models/openai/gpt-4o",
+        f"{settings.API_V1_STR}/models/{created.provider}/{created.model_name}",
         headers=superuser_token_headers,
     )

As per coding guidelines backend/app/tests/**/*.py: Use factory pattern for test fixtures in backend/app/tests/.

Also applies to: 31-34, 45-49, 66-68
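A minimal sketch of such a factory, using plain dicts rather than the project's real fixtures (`model_config_factory` here is hypothetical; a real version would wrap the DB session and live in a pytest fixture):

```python
import itertools

# Factory with sensible defaults and per-test overrides, so each test
# creates its own deterministic records instead of relying on seed data.
_ids = itertools.count(1)

def model_config_factory(**overrides):
    n = next(_ids)
    record = {
        "id": n,
        "provider": "openai",
        "model_name": f"test-model-{n}",
        "is_active": True,
        "default_for": None,
    }
    record.update(overrides)
    return record

created = model_config_factory(model_name="gpt-4o", default_for="text")
assert created["provider"] == "openai" and created["default_for"] == "text"
```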


Comment thread backend/app/crud/model_config.py Outdated
Comment thread backend/app/api/routes/model_config.py Outdated
Comment thread backend/app/models/model_config.py Outdated
Comment thread backend/app/api/routes/model_config.py Outdated
Comment thread backend/app/api/routes/model_config.py Outdated
Comment thread backend/app/crud/model_config.py Outdated
Comment thread backend/app/crud/model_config.py Outdated
},
"input_modalities": ["TEXT", "IMAGE"],
"output_modalities": ["TEXT"],
"default_for": "text",
Collaborator


I think this is removed, can you update the doc?

Collaborator Author


default_for is removed, but I didn't remove the modalities.

Comment thread backend/app/alembic/versions/052_create_model_config_table.py Outdated


def upgrade():
    op.create_table(
Collaborator


add inserted_at and updated_at

Collaborator Author


The table does have these two columns.

@vprashrex vprashrex moved this to In Review in Kaapi-dev Apr 15, 2026
@vprashrex vprashrex requested a review from Ayush8923 April 15, 2026 07:10
Comment thread backend/app/api/routes/model_config.py Outdated
Comment on lines +23 to +24
skip: int = 0,
limit: int = 100,
Collaborator


If skip and limit are being used here, then we should also include a has_more field in the response metadata; otherwise, pagination won’t work properly in this case.

If it’s required, we can keep it; otherwise, we can remove it if it’s not needed.
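The has_more flag described above can be computed from the total match count rather than the page length; a minimal sketch (field names are illustrative):

```python
# Pagination metadata with an explicit has_more flag, derived from the
# total number of matching rows rather than the returned page size.
def page_meta(total, skip, limit):
    return {
        "count": total,
        "skip": skip,
        "limit": limit,
        "has_more": skip + limit < total,
    }

assert page_meta(total=25, skip=0, limit=10)["has_more"] is True
assert page_meta(total=25, skip=20, limit=10)["has_more"] is False
```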

@vprashrex vprashrex merged commit 6cb55bf into main Apr 15, 2026
3 checks passed
@vprashrex vprashrex deleted the feature/model-config branch April 15, 2026 07:51
@github-project-automation github-project-automation bot moved this from In Review to Closed in Kaapi-dev Apr 15, 2026
@coderabbitai coderabbitai bot mentioned this pull request Apr 16, 2026
2 tasks

Labels

enhancement New feature or request

Projects

Status: Closed

Development

Successfully merging this pull request may close these issues.

Config Management: Models(OpenAI) TTS/STT: API for exposing provider/model and config

4 participants