Releases: RightNow-AI/openfang
v0.5.6
Critical Fix
- Version sync: the desktop app and workspace now correctly report v0.5.5+. Users stuck on v0.5.1 should be able to update; the Tauri config had been hardcoded at 0.1.0 since the initial commit.
New Features
- SSRF allowlist: Self-hosted/K8s users can now configure `ssrf_allowed_hosts` in `config.toml` to allow agents to reach internal services. Metadata endpoints (169.254.169.254, etc.) remain unconditionally blocked.

  ```toml
  [tools.web_fetch]
  ssrf_allowed_hosts = ["*.olares.com", "10.0.0.0/8"]
  ```
- Expanded embedding auto-detection: Now probes 6 API key providers (OpenAI, Groq, Mistral, Together, Fireworks, Cohere) before falling back to local providers (Ollama, vLLM, LM Studio). Clear warning when no embedding provider is available.
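The `ssrf_allowed_hosts` entries above mix wildcard hostnames and CIDR ranges. A minimal sketch of how such matching could work, with metadata endpoints kept unconditionally blocked (function and constant names here are illustrative, not openfang's actual implementation):

```python
import fnmatch
import ipaddress

# Link-local metadata endpoints stay blocked no matter what the allowlist says.
ALWAYS_BLOCKED = [ipaddress.ip_network("169.254.0.0/16")]

def is_allowed(host: str, allowlist: list[str]) -> bool:
    """Return True if `host` matches an allowlist entry and is not a metadata endpoint."""
    try:
        addr = ipaddress.ip_address(host)
        if any(addr in net for net in ALWAYS_BLOCKED):
            return False  # 169.254.169.254 etc. are rejected unconditionally
        # IP literals are checked against CIDR entries like "10.0.0.0/8".
        return any(
            "/" in entry and addr in ipaddress.ip_network(entry, strict=False)
            for entry in allowlist
        )
    except ValueError:
        # Not an IP literal: treat entries like "*.olares.com" as glob patterns.
        return any(fnmatch.fnmatch(host, entry) for entry in allowlist if "/" not in entry)
```

Note the ordering: the hard block list is consulted before the allowlist, so adding `169.254.0.0/16` to the allowlist still cannot open the metadata endpoint.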
Bug Fixes
- Ollama context window: Discovered models now default to 128K context / 16K output (was 32K/4K). Better reflects modern models like Qwen 3.5.
Full Changelog: v0.5.5...v0.5.6
v0.5.5
Bug Fixes
- #771 Qwen/OpenAI-compat `tool_calls` orphaned after context overflow; fixed with smart drain boundaries and streaming repair.
- #811 LINE webhook signature validation: raw bytes for HMAC, secret trimming, debug logging.
- #752 Local skill install: TUI parsing fix, hot reload via `/api/skills/reload`, ClawHub reload.
- #772 `exec_policy` mode=full now bypasses the approval gate for `shell_exec`.
- #661 Chat streaming interrupts (closed as resolved by v0.5.3 reactivity fixes).
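On the #811 fix: LINE signs each webhook body with HMAC-SHA256 over the raw request bytes, base64-encoded into the `X-Line-Signature` header, so validating against a re-serialized JSON string (or an untrimmed channel secret) fails. A minimal sketch of the corrected check, assuming the standard LINE scheme:

```python
import base64
import hashlib
import hmac

def verify_line_signature(channel_secret: str, raw_body: bytes, signature: str) -> bool:
    # Trim stray whitespace/newlines that often sneak into pasted secrets.
    secret = channel_secret.strip().encode("utf-8")
    # HMAC must run over the raw request bytes, never a re-serialized JSON string.
    digest = hmac.new(secret, raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode("ascii")
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```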
Full Changelog: v0.5.4...v0.5.5
v0.5.4
Bug Fixes
- #875 Install script now correctly fetches latest release version
- #872 Session endpoint returns full tool results (removed 2000-char truncation)
- #867 `agent_send`/`agent_spawn` timeout increased to 600s (was 120s)
- #824 Doctor correctly counts workspace skills that override bundled skills
- #833 Model switching respects the explicit provider via `find_model_for_provider()`
- #766 Closed as resolved by heartbeat fixes
Stats
- All tests passing
- Live tested with daemon
Full Changelog: v0.5.3...v0.5.4
v0.5.3 - 19 Bug Fixes (3 rounds)
What's Changed
This release resolves 19 bugs across runtime, kernel, CLI, Web UI, and hands, all verified with live daemon testing.
Runtime & Drivers
- #834 Remove 3 decommissioned Groq models (`gemma2-9b-it`, `llama-3.2-1b/3b-preview`)
- #805 Ollama streaming parser handles both `reasoning_content` and `reasoning` fields
- #845 Model fallback chain retries with `fallback_models` on ModelNotFound (404)
- #785 Gemini streaming SSE parser handles `\r\n` line endings, fixing an infinite empty retry loop
- #774 `tool_use.input` always normalized to JSON object, fixing Anthropic API "invalid dictionary" errors
- #856 Custom model names preserved: user-defined models take priority over builtins (vLLM, etc.)
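The #774 fix amounts to coercing whatever the model emitted for `tool_use.input` into a JSON object before it reaches the Anthropic API. A hedged sketch of that normalization (the handling of each shape is assumed, not openfang's exact code):

```python
import json
from typing import Any

def normalize_tool_input(raw: Any) -> dict:
    """Coerce a tool_use input into a JSON object; the API rejects anything else."""
    if isinstance(raw, dict):
        return raw
    if raw is None or raw == "":
        return {}  # parameterless calls become an empty object
    if isinstance(raw, str):
        try:
            parsed = json.loads(raw)
            if isinstance(parsed, dict):
                return parsed  # stringified JSON objects are unwrapped
        except json.JSONDecodeError:
            pass
        return {"input": raw}  # wrap non-JSON strings rather than dropping them
    return {"input": raw}
```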
Kernel & Heartbeat
- #844 Heartbeat skips idle agents that never received a message, so no more crash-recover loops
- #848 Hand continuous interval changed from 60s to 3600s, preventing credit waste
- #851/#808 Global `~/.openfang/skills/` loaded for all agents; workspace skills properly override globals
CLI
- #826 `openfang doctor` reports `all_ok=false` when a provider key is rejected (401/403)
- #823 `doctor --json` outputs clean JSON to stdout, tracing to stderr, BrokenPipe handled
- #825 Doctor surfaces blocked workspace skills count in injection scan (no more false "all clean")
- #828 `skill install` detects Git URLs (`https://`, `git@`) and clones before installing
Web Dashboard
- #767 Workflows page scrollable (flex layout fix)
- #802 Model dropdown handles object options, no more `[object Object]` for Ollama
- #816 Spawn wizard provider dropdown loads dynamically from `/api/providers` (43 providers)
- #770 Chat streaming renders in real-time (Alpine.js splice reactivity + stale WS guard)
WebSocket & API
- #836 Tool events include an `id` field for concurrent call correlation
Hands
- #820 Browser Hand checks for `python3` before `python`, so it works on modern Linux distros
Stats
- 2,186+ tests passing, zero clippy warnings
- All fixes verified with live daemon testing
Full Changelog: v0.5.1...v0.5.3
v0.5.2 - 12 Bug Fixes
What's Changed
Bug Fixes (12 issues resolved)
Runtime & Drivers
- #834 Remove 3 decommissioned Groq models (`gemma2-9b-it`, `llama-3.2-1b-preview`, `llama-3.2-3b-preview`)
- #805 Ollama streaming parser now handles both `reasoning_content` and `reasoning` fields for thinking models (Qwen 3.5, etc.)
- #845 Model fallback chain now retries with configured `fallback_models` on ModelNotFound (404) instead of panicking
Kernel & Heartbeat
- #844 Heartbeat monitor skips idle agents that never received a message, ending the infinite crash-recover loops
- #848 Hand continuous mode interval changed from 60s to 3600s to prevent credit waste on idle polling
CLI (Doctor)
- #826 `openfang doctor` now reports `all_ok=false` when a provider key is rejected (401/403)
- #823 `openfang doctor --json` outputs clean JSON to stdout (tracing goes to stderr), BrokenPipe handled gracefully
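The stdout/stderr split in #823 is what makes `doctor --json | jq` work; a sketch of the pattern, with hypothetical function and field names:

```python
import json
import sys

def emit_doctor_report(report: dict, out=sys.stdout, err=sys.stderr) -> None:
    """Machine-readable JSON goes to stdout; human tracing goes to stderr."""
    print("running checks...", file=err)  # tracing never pollutes the JSON stream
    try:
        print(json.dumps(report), file=out)
    except BrokenPipeError:
        # Downstream pipe (e.g. `| head`) closed early; exit quietly instead of panicking.
        pass
```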
Web Dashboard
- #767 Workflows list page is now scrollable (flex layout fix)
- #802 Model dropdown no longer renders `[object Object]` for Ollama models
- #816 Agent spawn wizard provider dropdown loads dynamically from `/api/providers` (43 providers, was hardcoded 18)
- #836 WebSocket tool events now include the tool call ID for correct concurrent call correlation
Hands
- #820 Browser Hand requirements check now tries `python3` before `python`, fixing detection on modern Linux distros
Stats
- All 829+ tests passing
- Zero clippy warnings
- Live tested with daemon
Full Changelog: v0.5.1...v0.5.2
v0.5.1 - Community Contributions
9 community PRs merged after strict review (24 PRs reviewed, 11 rejected, 4 closed).
Fixes
- Dashboard settings page loading state fix (#750)
- KaTeX loaded on demand to prevent first-paint blocking (#748)
- Provider model normalization: display names resolve through the catalog (#714)
- Invisible approval requests now visible with history, badge, and polling (#713)
- Matrix `auto_accept_invites` now configurable, defaults to false (security) (#711)
Dependencies
- docker/build-push-action 6 → 7 (#741)
- docker/setup-buildx-action 3 → 4 (#740)
- roxmltree 0.20 → 0.21 (#744)
- zip 2.4 → 4.6 (#742)
Full diff: v0.5.0...v0.5.1
v0.5.0 - Milestone Release
29 bugs fixed, 6 features shipped, 100+ PRs reviewed, 65+ issues resolved.
Features
- Image generation pipeline (DALL-E/GPT-Image)
- WeCom channel adapter
- Docker sandbox runtimes
- Shell skill runtime
- Slack unfurl links support
- Release-fast build profile
Improvements
- Channel agent re-resolution
- Stable hand agent IDs
- Async session save
- Vault wiring for credentials
- Telegram formatting improvements
- Mastodon polling fix
- Chromium no-sandbox root support
- Tool error guidance in agent loop
- Agent rename fix
- Codex id_token support
Community
- Community docs and fixes (multiple rounds)
- WhatsApp setup documentation
- CI action bumps
- Docker build args
- Lockfile sync
- Docs link fixes
Full diff: v0.4.3...v0.5.0
v0.4.9
Bug Fix
- Image pipeline (#686): REST API and WebSocket now pass image attachments as `content_blocks` directly to the LLM via `send_message_with_handle_and_blocks()`/`send_message_streaming()`. Previously images were injected as a separate session message and never reached vision models in the current turn. All 3 API entry points (REST, WebSocket, channels) now use the same flow.
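The shape of such a mixed text-plus-image turn, sketched as the content-blocks list most vision APIs expect (field names follow the common Anthropic-style block convention; openfang's internal types may differ):

```python
import base64

def build_content_blocks(text: str, image_bytes: bytes,
                         media_type: str = "image/png") -> list[dict]:
    """Put the image in the same turn as the text so vision models actually see it."""
    return [
        {"type": "text", "text": text},
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": media_type,
                # Raw bytes are base64-encoded for JSON transport.
                "data": base64.b64encode(image_bytes).decode("ascii"),
            },
        },
    ]
```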
Docs
- Added community troubleshooting FAQ: Docker setup, Caddy basicauth, embedding model config, email allowed_senders, Z.AI/GLM-5 config, Kimi 2.5, OpenRouter free models, Claude Code integration, trader hand permissions, multiple Telegram bots workaround.
Full changelog since v0.4.4
26 bugs fixed, 6 features shipped, 100+ PRs reviewed, 65+ issues resolved across v0.4.4βv0.4.9.
v0.4.8
Bugs Fixed
- Fix HandCategory TOML parse error: added Finance and a catch-all Other variant (#717)
- Fix LINE token detection heuristic: long tokens (>80 chars) recognized as direct values (#729)
- Fix General Assistant max_iterations too low: bumped from 50 to 100 (#719)
- Fix knowledge_query SQL parameter binding mismatch (#638)
- Fix WhatsApp Cloud API silently swallowing send errors (#707)
- Fix dashboard provider dropdown missing local providers (#683)
Previous (v0.4.5βv0.4.7)
- Fix Gemini infinite loop on Thinking-only responses (#704)
- Fix tool_blocklist not detected on daemon restart (#666)
- Fix MCP credentials from .env/vault (#660)
- Fix image base64 compaction storms (#648)
- Fix phantom action hallucination (#688)
- Fix desktop app .env loading (#687)
- Fix duplicate sessions (#651)
- Fix Anthropic null tool_use input (#636)
- Fix temperature for reasoning models (#640)
- Fix OpenRouter prefix on fallbacks (#630)
- Fix streaming metering persistence (#627)
- Fix MCP dash names (#616)
- Fix deepseek-reasoner multi-turn (#618)
- Fix NO_REPLY leak to channels (#614)
- Fix skill install button (#625)
- Fix cron delivery (#601)
Features
- Azure OpenAI provider (#631)
- LaTeX rendering in chat (#622)
- PWA support (#621)
- WeCom channel adapter (#629)
- Shell/Bash skill runtime (#624)
- DingTalk Stream adapter (#353)
- Feishu/Lark unified adapter (#329)
- Parakeet MLX speech-to-text (#607)
- Codex GPT-5.4 (#608)
- 100+ community PRs reviewed and merged
v0.4.7
Bugs fixed:
- Fix WhatsApp Cloud API silently swallowing errors on Image/File/Location sends (#707)
- Fix dashboard provider dropdown hardcoded: now includes all 14 cloud + 4 local providers (#683)
- Fix knowledge_query SQL parameter binding mismatch: queries now return matching entities (#638)
Previous (v0.4.6):
- Fix Gemini infinite loop on Thinking-only responses (#704)
- Fix tool_blocklist not detected on daemon restart (#666)
- Fix MCP servers not receiving credentials from .env/vault (#660)
- Fix image base64 causing compaction storms (#648)
- Fix phantom action hallucination (#688)
- Fix desktop app not loading .env files (#687)
- Fix duplicate sessions from session ID mismatch (#651)
- Fix Anthropic null tool_use input for parameterless calls (#636)
- Fix temperature rejection for reasoning models (#640)