Conversation
#33 In short, it'll:

1. [Frontend] Recognize that the user is trying to add a citation (trigger text is `\cite{`)
2. [Frontend] Temporarily suppress the default Overleaf dropdown suggestions
3. [Frontend] Grab the last sentence as context for the LLM
4. [Backend] Fetch the bibliography from `.bib` files as raw text, removing irrelevant fields to save tokens
5. [Backend] Call XtraMCP to get each paper's abstract, using the paper title as the key
6. [Backend] Query a fast LLM (hardcoded to `gpt-5.2` for now) for at most 3 citation keys
7. [Frontend] Suppress the default Overleaf tab-completion so users can accept citation suggestions
Co-authored-by: Junyi Hou <junyi@xtras3.tail08d22c.ts.net>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
### Key changes:

- Select models by ID instead of slug so users can have multiple API keys for the same slug
- Add loading spinner for save/edit actions
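The first key change could look something like the sketch below: resolving the selected model by its unique ID, since several saved entries may share one slug. The `CustomModel` shape and `findByID` helper are illustrative, not the PR's actual types.

```go
package main

import "fmt"

// CustomModel is a simplified stand-in for the PR's model record.
type CustomModel struct {
	ID   int
	Slug string
}

// findByID resolves a model by its unique ID rather than by slug,
// so two keys for the same slug remain distinguishable.
func findByID(models []CustomModel, id int) (CustomModel, bool) {
	for _, m := range models {
		if m.ID == id {
			return m, true
		}
	}
	return CustomModel{}, false
}

func main() {
	models := []CustomModel{
		{ID: 1, Slug: "gpt-5.2"}, // personal key
		{ID: 2, Slug: "gpt-5.2"}, // work key, same slug
	}
	if m, ok := findByID(models, 2); ok {
		fmt.Println("selected model", m.ID, "for slug", m.Slug)
	}
}
```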
@wjiayis Hi Jia Yi, I recall you had some great feedback to improve the user experience / make it more intuitive for new users. Could you kindly check whether your concerns have been addressed too? Thanks!
4ndrelim
left a comment
Clarification required.
Also, do we know what the issue with DeepSeek and GLM is? If not, no worries; we can proceed but raise an issue for those two first.
```go
if customModel != nil {
	params := openaiv3.ChatCompletionNewParams{
		Model:       customModel.Slug,
		Temperature: openaiv3.Float(float64(customModel.Temperature)),
```
Have you tested with varying temperatures across all the models? We might have to handle a corner case here. For example, I believe GPT-5.1 only allows a temperature setting of 1.0; any attempt to configure it otherwise leads to an error. I am unsure whether the other models have this peculiar behaviour too.
I haven't tested extensively yet, but I believe the range generally goes up to 2.0? Some models (GPT, as you mentioned) only go up to 1.0. Perhaps I could add an extra cautionary note about this in the tooltip?
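One way to handle the corner case discussed above would be to omit the temperature parameter entirely for models known to reject non-default values, and clamp it to a common range otherwise. This is a hypothetical sketch: the `fixedTemperature` set and the 0.0–2.0 clamp range are assumptions, not the PR's behaviour.

```go
package main

import "fmt"

// Models assumed to reject any temperature other than their default.
// Membership here is illustrative, not a verified list.
var fixedTemperature = map[string]bool{
	"gpt-5.1": true,
}

// effectiveTemperature returns the temperature to send and whether to send it.
// Returning ok=false means: leave the field unset so the provider applies its default.
func effectiveTemperature(slug string, requested float64) (float64, bool) {
	if fixedTemperature[slug] {
		return 0, false
	}
	// Clamp to the 0.0–2.0 range most OpenAI-style APIs accept.
	if requested < 0 {
		requested = 0
	}
	if requested > 2 {
		requested = 2
	}
	return requested, true
}

func main() {
	if t, ok := effectiveTemperature("deepseek-chat", 1.3); ok {
		fmt.Println("sending temperature", t)
	}
	if _, ok := effectiveTemperature("gpt-5.1", 0.7); !ok {
		fmt.Println("omitting temperature for gpt-5.1")
	}
}
```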
Also still unsure about the issue with DeepSeek and GLM
```go
	ModelName string
	Endpoint  string
	APIKey    string
	ModelName string
```
I recall from our last discussion that this field is meant to be unique so users can differentiate different API keys for the same slug. Is this still the case? If so, how do we ensure it is unique?
Yep, as mentioned in our PM, it's currently enforced on the frontend
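For reference, a backend-side guard for the uniqueness discussed above could look like the sketch below; frontend-only enforcement would not protect against direct API calls. The `CustomModel` type, `addModel` helper, and in-memory store are illustrative assumptions, not the PR's code.

```go
package main

import (
	"errors"
	"fmt"
)

// CustomModel is a simplified stand-in for the PR's model record.
type CustomModel struct {
	ID        int
	ModelName string
	Slug      string
}

var ErrDuplicateName = errors.New("model name already in use")

// addModel rejects a new model whose ModelName collides with an existing one,
// mirroring the uniqueness rule the frontend currently enforces.
func addModel(existing []CustomModel, m CustomModel) ([]CustomModel, error) {
	for _, e := range existing {
		if e.ModelName == m.ModelName {
			return existing, ErrDuplicateName
		}
	}
	return append(existing, m), nil
}

func main() {
	models := []CustomModel{{ID: 1, ModelName: "work-key", Slug: "gpt-5.2"}}
	if _, err := addModel(models, CustomModel{ID: 2, ModelName: "work-key", Slug: "gpt-5.2"}); err != nil {
		fmt.Println("rejected:", err)
	}
}
```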
I noticed in the screenshot here there are 3 custom models. May I verify the expected behaviour:
@4ndrelim only the models that have been added successfully will show up in the selection menu (models with missing API keys/endpoints are invalid and can't be added in the first place)


Summary
Adds BYOK features (including base URL, API key, and parameter configuration); also includes changes per the suggestions from #157.
Tested Providers
Stable: GPT, Claude, Gemini, MiniMax, OpenRouter
Unstable: DeepSeek, GLM
Screenshots
Closes #118 Closes #149 Closes #157