This plan reflects the actual implementation status as of release 0.1.34.
| # | Goal | Priority | Status |
|---|---|---|---|
| 1 | Stable HA integration with addon auto-discovery | P0 | ✅ Done |
| 2 | Sensors + binary sensor for gateway visibility | P0 | ✅ Done |
| 3 | Native Assist conversation agent | P0 | ✅ Done |
| 4 | Lovelace chat card (text + voice) | P0 | ✅ Done |
| 5 | Voice provider choice (browser / assist_stt) | P0 | ✅ Done |
| 6 | Production polish and broad compatibility hardening | P1 | 🚧 In progress |
| 7 | Optional native media player/TTS routing entity | P2 | ⏳ Not started |
- Integration communicates with OpenClaw gateway over HTTP.
- Primary chat endpoint: `POST /v1/chat/completions` (OpenAI-compatible).
- Coordinator uses lightweight connectivity checks and model probing with graceful fallback.
- Services and conversation agent share response extraction logic for modern OpenAI-compatible payloads.
- Card sends chat requests via HA services/websocket (`openclaw.send_message`).
- Card receives replies through HA event subscription (`openclaw_message_received`).
- Backend keeps in-memory history and exposes websocket history sync (`openclaw/get_history`).
- Settings endpoint (`openclaw/get_settings`) provides integration-level card options.
`browser` provider:
- Uses `SpeechRecognition`/`webkitSpeechRecognition`.
- Supports manual mic and continuous voice mode (wake-word flow).

`assist_stt` provider:
- Captures local mic audio and sends it to the HA STT API (`/api/stt/<provider>`).
- Manual one-shot transcription flow.
- Provider metadata negotiation for language/sample rate/channels to reduce `415` failures.
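The metadata negotiation matters because HA's STT endpoint rejects unsupported audio descriptions with HTTP 415. A minimal sketch of assembling the `X-Speech-Content` header value sent with the audio POST (the helper name is hypothetical, and the defaults should come from the provider's advertised capabilities rather than being hard-coded):

```python
def speech_content_header(
    language: str,
    sample_rate: int = 16000,
    channels: int = 1,
    bit_rate: int = 16,
) -> str:
    """Build the metadata header value for a POST to /api/stt/<provider>.

    The STT provider rejects metadata it does not support with HTTP 415,
    so every field here should be negotiated from the provider's
    capabilities, not assumed.
    """
    return (
        f"format=wav; codec=pcm; sample_rate={sample_rate}; "
        f"bit_rate={bit_rate}; channel={channels}; language={language}"
    )
```

The card can then retry with a different negotiated sample rate or language if the first attempt still returns 415.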
```
OpenClawHomeAssistantIntegration/
├── IMPLEMENTATION_PLAN.md
├── CHANGELOG.md
├── README.md
├── hacs.json
├── custom_components/
│   └── openclaw/
│       ├── __init__.py
│       ├── api.py
│       ├── binary_sensor.py
│       ├── config_flow.py
│       ├── const.py
│       ├── conversation.py
│       ├── coordinator.py
│       ├── exposure.py
│       ├── manifest.json
│       ├── sensor.py
│       ├── services.yaml
│       ├── strings.json
│       ├── translations/en.json
│       └── www/openclaw-chat-card.js
└── www/openclaw-chat-card.js
```
- Config flow and options flow.
- Gateway API client and coordinator.
- Sensor + binary sensor platforms.
- HACS-ready packaging.
- Conversation agent integration.
- Services: `openclaw.send_message`, `openclaw.clear_history`.
- Event emission: `openclaw_message_received`.
- Compatibility fix for intent response handling across HA versions.
- Auto resource registration and cleanup of duplicate/legacy card resources.
- Versioned card URL strategy to avoid stale cache issues.
- Robust response parsing for multiple OpenAI-compatible response shapes.
- Backend chat history sync to restore UI state after navigation/reload.
- Wake word and always voice mode options.
- Brave guard with explicit allow override option.
- Language normalization and preferred Assist pipeline language handling.
- Improved TTS voice/language selection.
- Multi-pending response handling to prevent stuck typing state.
- `assist_stt` provider with negotiated STT metadata.
- `AudioWorkletNode` capture first, with `ScriptProcessorNode` fallback.
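The versioned card URL strategy in the list above can be sketched as a small helper that appends the integration version as a cache-busting query parameter (the function name and URL layout are illustrative):

```python
def versioned_card_url(base_path: str, version: str) -> str:
    """Append the integration version as a cache-busting query parameter.

    Browsers aggressively cache Lovelace resources; tying the URL to the
    version forces a fresh fetch of the card bundle after an upgrade.
    """
    separator = "&" if "?" in base_path else "?"
    return f"{base_path}{separator}v={version}"
```

For example, `versioned_card_url("/openclaw/openclaw-chat-card.js", "0.1.34")` yields a URL that changes on every release, so stale dashboards pick up the new card automatically.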
- Prompt/context behavior options.
- Tool-call execution toggle.
- Wake word + always voice mode.
- Brave web speech override.
- Voice provider selector (`browser` or `assist_stt`).
- Can consume integration settings from websocket settings endpoint.
- Optional card config overrides remain available.
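One way to combine the websocket-delivered integration settings with per-card overrides (field names here are illustrative, not the card's actual schema) is a simple precedence merge where explicit card config wins:

```python
from typing import Any


def effective_card_options(
    integration_settings: dict[str, Any],
    card_config: dict[str, Any],
) -> dict[str, Any]:
    """Merge integration-level settings with per-card overrides.

    Card config takes precedence, so a single dashboard card can still
    override what the integration's settings endpoint provides globally.
    """
    merged = dict(integration_settings)
    merged.update({k: v for k, v in card_config.items() if v is not None})
    return merged
```

This keeps the integration options as defaults while leaving dashboard-level customization intact.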
- Validate `assist_stt` against multiple HA STT providers/languages.
- Improve surfaced error details when `/api/stt/<provider>` returns non-200.
- Add optional manual recording duration setting for `assist_stt`.
- Add provider-specific status text for easier user troubleshooting.
- Continue hardening for older/newer HA Core API differences.
- Add broader runtime checks for changed Assist pipeline payload shapes.
- Expand fallback behavior when pipeline metadata is unavailable.
- Evaluate media-player based TTS routing entity.
- Explore optional continuous flow for HA STT/TTS pipeline mode.
- Add automated tests around settings websocket payload and card voice state transitions.
- Bump and sync versions in:
  - `custom_components/openclaw/manifest.json`
  - `custom_components/openclaw/__init__.py` (`_CARD_URL`)
  - `www/openclaw-chat-card.js` loader shim
- Restart HA and hard-refresh dashboard.
- Confirm browser console reports the expected card version.
- Validate both providers:
  - `browser`: manual + continuous mode
  - `assist_stt`: manual transcription flow
- Update `CHANGELOG.md` and `README.md` for any new option/behavior.
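The version locations in the checklist are easy to let drift apart. A small pre-release check could assert they agree; this sketch assumes `_CARD_URL` embeds the version as a `?v=` query parameter, which may not match the actual source:

```python
import json
import re


def versions_in_sync(manifest_json: str, init_py: str) -> bool:
    """Check that the manifest version matches the version embedded in the
    card URL (_CARD_URL) used for cache busting."""
    manifest_version = json.loads(manifest_json)["version"]
    match = re.search(r'_CARD_URL\s*=\s*".*?\bv=([\w.]+)"', init_py)
    return match is not None and match.group(1) == manifest_version
```

Running a check like this before tagging a release catches the stale-cache bugs the versioned URL strategy is meant to prevent.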
- Home Assistant Core: `2025.1.0+` (declared HACS minimum; chosen conservatively for broad compatibility).
- Browser: modern Chromium/Firefox/Safari for the card UI; voice capability depends on browser APIs and permissions.