Multi-Provider LLM Support with Manual Analysis API #1971
Open
Labels
go, java, javascript
Description
Describe the feature
Enable the SOC-AI plugin to work with multiple LLM providers (OpenAI, Anthropic, Ollama, Azure, and custom endpoints) instead of being hardcoded to OpenAI. Additionally, expose an HTTP API endpoint for manual alert-analysis submissions, independent of the automatic gRPC pipeline.
Use Case
- Organizations using Anthropic Claude or self-hosted Ollama instead of OpenAI
- Security analysts who need to manually submit specific alerts for AI analysis
- Environments where automatic analysis is disabled but on-demand analysis is needed
- Teams requiring custom LLM endpoints with specific authentication headers
Proposed Solution
- Generic LLM configuration: URL, model, authType (custom-headers/none), maxTokens, customHeaders
- Auto-detect provider from URL (e.g., "anthropic.com" → Anthropic format)
- Support different request/response formats per provider
- Add an HTTP API server on port 8090:
  - POST /api/v1/analyze - Submit an alert for analysis (async)
  - GET /health - Health check
  - GET /api/v1/metrics - API metrics
- An AutoAnalyze config flag to enable or disable automatic processing (the manual API remains available either way)
Other Information
No response
Acknowledgements
- I may be able to implement this feature request
- This feature might incur a breaking change
Metadata
Status
🏗 In progress