diff --git a/.gitignore b/.gitignore
index 4e299d3..7c5ead3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -16,3 +16,4 @@ env/
# OS/editor noise
.DS_Store
Thumbs.db
+*.pyc
diff --git a/README.md b/README.md
index 3b7d950..9878959 100644
--- a/README.md
+++ b/README.md
@@ -1,223 +1,112 @@
-# OpenClaw Integration for Home Assistant
+# OpenClaw Integration for Home Assistant (Fork)
-## [Join our Discord Server!](https://discord.gg/xeHeKu9jYp)
-
-
-
-_If you want to install OpenClaw as Add-On/App directly on your Home Assistant instance take a look here:_ https://github.com/techartdev/OpenClawHomeAssistant
-
-
-OpenClaw is a Home Assistant custom integration that connects your HA instance to the OpenClaw assistant backend and provides:
-
-- A native conversation agent for Assist
-- A Lovelace chat card with session history
-- Service and event APIs for automations
-- Optional voice mode in the card
-
----
-
-## What it includes
-
-- **Conversation agent** (`openclaw`) in Assist / Voice Assistants
-- **Lovelace chat card** (`custom:openclaw-chat-card`) with:
- - message history restore,
- - typing indicator,
- - optional voice input,
- - wake-word handling for continuous mode
-- **Services**
- - `openclaw.send_message`
- - `openclaw.clear_history`
- - `openclaw.invoke_tool`
-- **Event**
- - `openclaw_message_received`
- - `openclaw_tool_invoked`
-- **Sensors / status entities** for model and connection state
- - Includes tool telemetry sensors (`Last Tool`, `Last Tool Status`, `Last Tool Duration`, `Last Tool Invoked`)
-
----
-
-## Requirements
-
-- Home Assistant Core `2025.1.0+` (declared minimum)
-- An **OpenClaw gateway** with `enable_openai_api` enabled — either:
- - The [OpenClaw Assistant addon](https://github.com/techartdev/OpenClawHomeAssistant) running on the same HA instance (auto-discovery supported), **or**
- - Any standalone [OpenClaw](https://github.com/openclaw/openclaw) installation reachable over the network (manual config)
-- Supervisor is optional (used only for addon auto-discovery)
-
-> **No addon required.** If you have OpenClaw running anywhere — on a separate server, a VPS, a Docker container, or even another machine on your LAN — this integration can connect to it via the manual configuration flow.
+> **Forked from [techartdev/OpenClawHomeAssistantIntegration](https://github.com/techartdev/OpenClawHomeAssistantIntegration)** with additional features for room-aware voice responses, improved entity context, and community PR merges.
---
-## Connection modes
+## Fork Changes
-The integration supports connecting to OpenClaw in several ways:
+This fork adds the following on top of the upstream integration:
-### Local addon (auto-discovery)
-
-If the OpenClaw Assistant addon is installed on the **same** Home Assistant instance, the integration auto-discovers it:
-- Reads token from the shared filesystem
-- Detects `access_mode` and chooses the correct port automatically
-- No manual config needed — just click **Submit** on the confirm step
+### Room-Aware Voice Responses
+- Resolves the originating voice satellite's area from the HA device/area registry
+- Injects `[Voice command from: <area name>]` into the system prompt so the agent knows which room you're in
+- Sends `x-openclaw-area` and `x-openclaw-device-id` headers for structured access
+- "Turn off the lights" and "what's the temperature in here?" target the correct room automatically
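+
+For example, a request from a voice satellite assigned to a "Kitchen" area would carry headers like the following (area and device-id values here are illustrative):
+
+```
+x-openclaw-area: Kitchen
+x-openclaw-device-id: 1a2b3c4d5e6f
+```
+
+and the system prompt gains the line `[Voice command from: Kitchen]`.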
-> **`lan_https` mode**: The integration automatically connects to the internal gateway port (plain HTTP on loopback), bypassing the HTTPS proxy entirely. No certificate setup required.
+### Richer Entity Context
+- Entity context now includes area assignments, useful state attributes (brightness, temperature, volume, media info), and current date/time
+- Significantly improves device control accuracy for LLM-based agents
-### Remote or standalone OpenClaw instance (manual config)
+### Merged Community PRs
+- **PR #9** (dalehamel) -- opt-in debug logging for API request tracing
+- **PR #10** (dalehamel) -- sticky sessions and agent routing fix (resolves upstream Issue #8)
+- **PR #11** (L0rz) -- `continue_conversation` for Voice PE follow-up dialog
-You can connect to **any reachable OpenClaw gateway** — whether it's the HA addon on another machine, a standalone `openclaw` install on a VPS, or a Docker container on your LAN. The integration doesn't care how OpenClaw is installed; it only needs the `/v1/chat/completions` endpoint.
-
-**Prerequisites on the OpenClaw instance:**
-
-1. The OpenAI-compatible API must be **enabled**:
- - **Addon users**: Set `enable_openai_api: true` in addon settings
- - **Standalone users**: Set `gateway.http.endpoints.chatCompletions.enabled: true` in `openclaw.json`, or run:
- ```sh
- openclaw config set gateway.http.endpoints.chatCompletions.enabled true
- ```
-2. The gateway must be **network-reachable** from your HA instance (not bound to loopback only)
-3. You need the **gateway auth token**:
- ```sh
- openclaw config get gateway.auth.token
- ```
-
-**Setup steps:**
-
-1. Go to **Settings → Devices & Services → Add Integration → OpenClaw**
-2. Auto-discovery will fail (no local addon) — you'll see the **Manual Configuration** form
-3. Fill in:
- - **Gateway Host**: IP or hostname of the remote machine (e.g. `192.168.1.50`)
- - **Gateway Port**: The gateway port (default `18789`)
- - **Gateway Token**: Auth token from the remote `openclaw.json`
- - **Use SSL (HTTPS)**: Check if connecting to an HTTPS endpoint
- - **Verify SSL certificate**: Uncheck for self-signed certificates (e.g. `lan_https` mode)
-
-### Common remote scenarios
-
-| Remote access mode | Host | Port | Use SSL | Verify SSL | Notes |
-|---|---|---|---|---|---|
-| Standalone OpenClaw (plain HTTP on LAN) | Remote IP | 18789 | ❌ | — | Default `openclaw gateway run` config |
-| `lan_https` (addon built-in HTTPS proxy) | Remote IP | 18789 | ✅ | ❌ | Self-signed cert; disable verification |
-| Behind reverse proxy (NPM/Caddy with Let's Encrypt) | Domain or IP | 443 | ✅ | ✅ | Trusted cert from a real CA |
-| Plain HTTP addon on LAN | Remote IP | 18789 | ❌ | — | Addon `bind_mode` must be `lan` |
-| Tailscale | Tailscale IP | 18789 | ❌ | — | Encrypted tunnel; plain HTTP is fine |
-
-> **Security note**: Avoid exposing plain HTTP gateways to the public internet. Use `lan_https`, a reverse proxy with TLS, or Tailscale for remote access.
+### Code Quality
+- Shared utility module (`utils.py`) -- extracted duplicated methods
+- Granular error codes -- `FAILED_TO_HANDLE` for connection/auth errors instead of `UNKNOWN`
+- API client retry logic for transient connection failures
+- Improved session management logging
---
-## Installation
+## Installation via HACS
-### Option A: HACS (recommended)
-
-1. Open **HACS → Integrations**
-2. Click the **3 dots (⋮)** menu in the top-right
-3. Select **Custom repositories**
-4. Add repository URL: `https://github.com/techartdev/OpenClawHomeAssistantIntegration`
-5. Category: **Integration**
-6. Click **Add**
-7. Go back to **Explore & Download Repositories**
-8. Search for **OpenClaw** and install
-9. Restart Home Assistant
-10. Open **Settings → Devices & Services → Add Integration**
-11. Add **OpenClaw**
-
-### Option B: Manual
-
-1. Copy `custom_components/openclaw` into your HA config directory:
-
- ```
- config/custom_components/openclaw
- ```
-
-2. Restart Home Assistant
-3. Add **OpenClaw** from **Settings → Devices & Services**
+1. Open **HACS -> Integrations**
+2. Click the **three-dot menu** -> **Custom repositories**
+3. Add repository URL: `https://github.com/DarrenBenson/OpenClawHomeAssistantIntegration`
+4. Category: **Integration**
+5. Click **Add**, then **Download**
+6. Restart Home Assistant
+7. Go to **Settings -> Devices & Services -> Add Integration -> OpenClaw**
---
-## Dashboard card
-
-The card is registered automatically by the integration.
+## What It Includes
-The card header shows live gateway state (`Online` / `Offline`) using existing OpenClaw status entities.
-
-```yaml
-type: custom:openclaw-chat-card
-title: OpenClaw Chat
-height: 500px
-show_timestamps: true
-show_voice_button: true
-show_clear_button: true
-session_id: default
-```
-
-Minimal config:
-
-```yaml
-type: custom:openclaw-chat-card
-```
+- **Conversation agent** (`openclaw`) in Assist / Voice Assistants
+- **Lovelace chat card** (`custom:openclaw-chat-card`) with message history, typing indicator, optional voice input, wake-word handling
+- **Services:** `openclaw.send_message`, `openclaw.clear_history`, `openclaw.invoke_tool`
+- **Events:** `openclaw_message_received`, `openclaw_tool_invoked`
+- **Sensors / status entities** for model and connection state, including tool telemetry
---
-## Assist entity exposure context
-
-OpenClaw can include Home Assistant entity context based on Assist exposure.
-
-Configure exposure in:
+## Requirements
-**Settings → Voice assistants → Expose**
+- Home Assistant Core `2025.1.0+`
+- An **OpenClaw gateway** with `enable_openai_api` enabled -- either:
+ - The [OpenClaw Assistant addon](https://github.com/techartdev/OpenClawHomeAssistant) running on the same HA instance, **or**
+ - Any standalone [OpenClaw](https://github.com/openclaw/openclaw) installation reachable over the network
-Only entities exposed there are included when this feature is enabled.
+> **No addon required.** If you have OpenClaw running anywhere -- on a separate server, a VPS, a Docker container, or another machine on your LAN -- this integration can connect to it via the manual configuration flow.
---
-## Integration options
-
-Open **Settings → Devices & Services → OpenClaw → Configure**.
-
-### Context options
+## Connection Modes
-- **Include exposed entities context**
-- **Max context characters**
-- **Context strategy**
- - `truncate`: keep the first part up to max length
- - `clear`: remove context when it exceeds max length
-
-### Tool call option
-
-- **Enable tool calls**
+### Local addon (auto-discovery)
-When enabled, OpenClaw tool-call responses can execute Home Assistant services.
+If the OpenClaw Assistant addon is installed on the **same** Home Assistant instance, the integration auto-discovers it -- no manual config needed.
-### Voice options
+### Remote or standalone OpenClaw instance (manual config)
-- **Wake word enabled**
-- **Wake word** (default: `hey openclaw`)
-- **Voice input provider** (`browser` or `assist_stt`)
+Connect to **any reachable OpenClaw gateway**. You need:
-### Voice provider usage
+1. `enable_openai_api` enabled on the OpenClaw instance
+2. Network reachability from HA
+3. The gateway auth token (`openclaw config get gateway.auth.token`)
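+
+To verify the gateway is reachable before adding the integration, enable the endpoint and probe it from the HA host. The `curl` call below is a sketch: the request body is a minimal OpenAI-style payload, and the `model` value and host IP are illustrative.
+
+```sh
+# Enable the OpenAI-compatible endpoint (standalone installs)
+openclaw config set gateway.http.endpoints.chatCompletions.enabled true
+
+# Read the gateway auth token
+TOKEN=$(openclaw config get gateway.auth.token)
+
+# Probe the chat completions endpoint from the HA host
+curl -s -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "openclaw", "messages": [{"role": "user", "content": "ping"}]}' \
+  http://192.168.1.50:18789/v1/chat/completions
+```
+
+A JSON response (rather than a connection error or 401) confirms the host, port, and token are correct.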
-- **`browser`**
- - Uses browser Web Speech recognition.
- - Supports manual mic and continuous voice mode (wake word flow).
- - Best when browser STT is stable in your environment.
+| Scenario | Host | Port | SSL | Verify SSL |
+|---|---|---|---|---|
+| Standalone (LAN) | Remote IP | 18789 | No | -- |
+| `lan_https` (addon HTTPS proxy) | Remote IP | 18789 | Yes | No |
+| Reverse proxy (Let's Encrypt) | Domain | 443 | Yes | Yes |
+| Tailscale | Tailscale IP | 18789 | No | -- |
-- **`assist_stt`**
- - Uses Home Assistant STT provider via `/api/stt/`.
- - Intended for manual mic input (press mic, speak, auto-stop, transcribe, send).
- - If continuous Voice Mode is enabled while this provider is selected, the card uses browser speech for continuous listening.
+---
-For `assist_stt`, make sure an STT engine is configured in **Settings → Voice assistants**.
+## Integration Options
----
+Open **Settings -> Devices & Services -> OpenClaw -> Configure**.
-## Browser voice note (important)
+### Context
+- **Include exposed entities context** -- sends entity states to the agent
+- **Max context characters** -- limit context size
+- **Context strategy** -- `truncate` or `clear` when exceeding max length
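+
+The two strategies behave roughly like this sketch. The real logic lives in the integration's `exposure` module; this is an illustration of the policy, not the integration's code:
+
+```python
+def apply_context_policy(context: str, max_chars: int, strategy: str) -> str:
+    """Illustrative sketch: cap entity context at max_chars."""
+    if len(context) <= max_chars:
+        return context  # fits: pass through unchanged
+    if strategy == "truncate":
+        return context[:max_chars]  # keep the first part
+    return ""  # "clear": drop the context entirely
+```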
-Card voice input uses browser speech recognition APIs (`SpeechRecognition` / `webkitSpeechRecognition`).
+### Agent Routing
+- **Agent ID** -- default OpenClaw agent (e.g. `main`)
+- **Voice agent ID** -- agent for voice pipeline requests (e.g. `voice`)
+- **Assist session ID override** -- fixed session key for voice (e.g. `ha-voice-assist`)
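+
+Per-call overrides are also possible: `openclaw.send_message` accepts an optional `agent_id` field, so a single automation can target a non-default agent (agent and session names below are illustrative):
+
+```yaml
+service: openclaw.send_message
+data:
+  message: "Summarise today's calendar"
+  session_id: morning-briefing
+  agent_id: voice
+```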
-- Behavior depends on browser support and provider availability
-- In Brave, repeated `network` errors can occur even with mic permission
-- The card now detects repeated backend failures and stops endless retries with a clear status message
+### Debug
+- **Debug logging** -- log agent ID, session ID, and area for each request
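+
+With debug logging enabled, each Assist request emits a routing line in the HA log like the following (values illustrative):
+
+```
+OpenClaw Assist routing: agent=voice session=ha-voice-assist area=Kitchen
+```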
-If voice is unreliable in Brave, use Chrome/Edge for card voice input or continue with typed chat.
+### Voice (Lovelace card)
+- **Wake word enabled/word** -- for continuous voice mode in the card
+- **Voice input provider** -- `browser` (Web Speech) or `assist_stt` (HA STT)
---
@@ -225,16 +114,6 @@ If voice is unreliable in Brave, use Chrome/Edge for card voice input or continu
### `openclaw.send_message`
-Send a message to OpenClaw.
-
-Fields:
-
-- `message` (required)
-- `session_id` (optional)
-- `attachments` (optional)
-
-Example:
-
```yaml
service: openclaw.send_message
data:
@@ -244,14 +123,6 @@ data:
### `openclaw.clear_history`
-Clear stored conversation history for a session.
-
-Fields:
-
-- `session_id` (optional; defaults to `default` session)
-
-Example:
-
```yaml
service: openclaw.clear_history
data:
@@ -260,20 +131,6 @@ data:
### `openclaw.invoke_tool`
-Invoke a single OpenClaw Gateway tool directly.
-
-Fields:
-
-- `tool` (required)
-- `action` (optional)
-- `args` (optional object)
-- `session_key` (optional)
-- `dry_run` (optional)
-- `message_channel` (optional)
-- `account_id` (optional)
-
-Example:
-
```yaml
service: openclaw.invoke_tool
data:
@@ -285,20 +142,10 @@ data:
---
-## Event
+## Events
### `openclaw_message_received`
-Fired when OpenClaw returns a response.
-
-Event data includes:
-
-- `message`
-- `session_id`
-- `timestamp`
-
-Automation example:
-
```yaml
trigger:
- platform: event
@@ -311,19 +158,6 @@ action:
### `openclaw_tool_invoked`
-Fired when `openclaw.invoke_tool` completes.
-
-Event data includes:
-
-- `tool`
-- `ok`
-- `result`
-- `error`
-- `duration_ms`
-- `timestamp`
-
-Automation example:
-
```yaml
trigger:
- platform: event
@@ -339,63 +173,35 @@ action:
---
-## Troubleshooting
+## Dashboard Card
-### Card does not appear
+Registered automatically by the integration.
-- Restart Home Assistant after updating
-- Hard refresh browser cache
-- Confirm Integration is loaded in **Settings → Devices & Services**
-
-### Voice button is active but no transcript is sent
-
-- Check browser mic permission for your HA URL
-- Confirm **Voice input provider** setting in integration options:
- - `browser` for Web Speech recognition
- - `assist_stt` for Home Assistant STT transcription
-- For `browser`: open browser console for `OpenClaw: Speech recognition error`; repeated `network` usually means browser speech backend failure
-- For `assist_stt`: check network calls to `/api/stt/` and verify Home Assistant Voice/STT provider is configured
-
-### Tool sensors show `Unknown`
-
-- `Last Tool*` sensors stay `Unknown` until at least one `openclaw.invoke_tool` service call has executed.
-- `Session Count` remains `0` if gateway policy blocks `sessions_list` for `/tools/invoke`.
-
-### Responses do not appear after sending
-
-- Verify `openclaw_message_received` is being fired in Developer Tools → Events
-- Confirm session IDs match between card and service calls
-
-### "400 Bad Request — plain HTTP request was sent to HTTPS port"
-
-- The gateway is running in `lan_https` mode (built-in HTTPS proxy)
-- **Local addon**: Remove and re-add the integration — auto-discovery now detects `lan_https` and uses the correct internal port automatically
-- **Remote connection**: Enable **Use SSL (HTTPS)** and disable **Verify SSL certificate** in the manual config
+```yaml
+type: custom:openclaw-chat-card
+title: OpenClaw Chat
+height: 500px
+show_timestamps: true
+show_voice_button: true
+show_clear_button: true
+session_id: default
+```
---
-## Development notes
-
-- Main card source is:
-
- ```
- custom_components/openclaw/www/openclaw-chat-card.js
- ```
+## Troubleshooting
-- Root `www/openclaw-chat-card.js` is a loader shim that imports the packaged card script.
+- **Card doesn't appear:** Restart HA, hard refresh browser cache
+- **Voice not working:** Check browser mic permissions and voice provider setting
+- **Tool sensors show Unknown:** Normal until first `openclaw.invoke_tool` call
+- **400 Bad Request (HTTPS):** Enable "Use SSL" and disable "Verify SSL" for `lan_https` mode
---
-## Star History
+## Upstream
-[](https://www.star-history.com/#techartdev/OpenClawHomeAssistantIntegration&type=date&legend=top-left)
+This fork tracks [techartdev/OpenClawHomeAssistantIntegration](https://github.com/techartdev/OpenClawHomeAssistantIntegration). Upstream PRs are merged when compatible.
-## License
+## Licence
MIT. See [LICENSE](LICENSE).
-
-## Support / Donations
-
-If you find this useful and you want to bring me a coffee to make more nice stuff, or support the project, use the link below:
-- https://revolut.me/vanyo6dhw
-
diff --git a/custom_components/openclaw/__init__.py b/custom_components/openclaw/__init__.py
index 7e4d656..bff38a7 100644
--- a/custom_components/openclaw/__init__.py
+++ b/custom_components/openclaw/__init__.py
@@ -29,6 +29,7 @@
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from .api import OpenClawApiClient, OpenClawApiError
+from .utils import extract_text_recursive, normalize_optional_text
from .const import (
ATTR_AGENT_ID,
ATTR_ATTACHMENTS,
@@ -66,6 +67,7 @@
CONF_BROWSER_VOICE_LANGUAGE,
CONF_VOICE_PROVIDER,
CONF_THINKING_TIMEOUT,
+ CONF_DEBUG_LOGGING,
CONTEXT_STRATEGY_TRUNCATE,
DEFAULT_AGENT_ID,
DEFAULT_VOICE_AGENT_ID,
@@ -79,6 +81,7 @@
DEFAULT_BROWSER_VOICE_LANGUAGE,
DEFAULT_VOICE_PROVIDER,
DEFAULT_THINKING_TIMEOUT,
+ DEFAULT_DEBUG_LOGGING,
DOMAIN,
EVENT_MESSAGE_RECEIVED,
EVENT_TOOL_INVOKED,
@@ -162,6 +165,7 @@ async def async_setup_entry(hass: HomeAssistant, entry: OpenClawConfigEntry) ->
verify_ssl=verify_ssl,
session=session,
agent_id=agent_id,
+ debug_logging=entry.options.get(CONF_DEBUG_LOGGING, DEFAULT_DEBUG_LOGGING),
)
coordinator = OpenClawCoordinator(hass, client)
@@ -399,18 +403,12 @@ async def _async_add_lovelace_resource(hass: HomeAssistant, url: str) -> bool:
def _async_register_services(hass: HomeAssistant) -> None:
"""Register openclaw.send_message and openclaw.clear_history services."""
- def _normalize_optional_text(value: Any) -> str | None:
- if not isinstance(value, str):
- return None
- cleaned = value.strip()
- return cleaned or None
-
async def handle_send_message(call: ServiceCall) -> None:
"""Handle the openclaw.send_message service call."""
message: str = call.data[ATTR_MESSAGE]
source: str | None = call.data.get(ATTR_SOURCE)
session_id: str = call.data.get(ATTR_SESSION_ID) or "default"
- call_agent_id = _normalize_optional_text(call.data.get(ATTR_AGENT_ID))
+ call_agent_id = normalize_optional_text(call.data.get(ATTR_AGENT_ID))
extra_headers = _VOICE_REQUEST_HEADERS if source == "voice" else None
entry_data = _get_first_entry_data(hass)
@@ -421,7 +419,7 @@ async def handle_send_message(call: ServiceCall) -> None:
client: OpenClawApiClient = entry_data["client"]
coordinator: OpenClawCoordinator = entry_data["coordinator"]
options = _get_entry_options(hass, entry_data)
- voice_agent_id = _normalize_optional_text(
+ voice_agent_id = normalize_optional_text(
options.get(CONF_VOICE_AGENT_ID, DEFAULT_VOICE_AGENT_ID)
)
resolved_agent_id = call_agent_id
@@ -635,53 +633,6 @@ def _get_entry_options(hass: HomeAssistant, entry_data: dict[str, Any]) -> dict[
return latest_entry.options if latest_entry else {}
-def _extract_text_recursive(value: Any, depth: int = 0) -> str | None:
- """Recursively extract assistant text from nested response payloads."""
- if depth > 8:
- return None
-
- if isinstance(value, str):
- text = value.strip()
- return text or None
-
- if isinstance(value, list):
- parts: list[str] = []
- for item in value:
- extracted = _extract_text_recursive(item, depth + 1)
- if extracted:
- parts.append(extracted)
- if parts:
- return "\n".join(parts)
- return None
-
- if isinstance(value, dict):
- priority_keys = (
- "output_text",
- "text",
- "content",
- "message",
- "response",
- "answer",
- "choices",
- "output",
- "delta",
- )
-
- for key in priority_keys:
- if key not in value:
- continue
- extracted = _extract_text_recursive(value.get(key), depth + 1)
- if extracted:
- return extracted
-
- for nested_value in value.values():
- extracted = _extract_text_recursive(nested_value, depth + 1)
- if extracted:
- return extracted
-
- return None
-
-
def _summarize_tool_result(value: Any, max_len: int = 240) -> str | None:
"""Return compact string preview of tool result payload."""
if value is None:
@@ -703,7 +654,7 @@ def _summarize_tool_result(value: Any, max_len: int = 240) -> str | None:
def _extract_assistant_message(response: dict[str, Any]) -> str | None:
"""Extract assistant text from modern/legacy OpenAI-compatible responses."""
- return _extract_text_recursive(response)
+ return extract_text_recursive(response)
def _extract_tool_calls(response: dict[str, Any]) -> list[dict[str, Any]]:
@@ -880,6 +831,10 @@ def websocket_get_settings(
CONF_THINKING_TIMEOUT,
DEFAULT_THINKING_TIMEOUT,
),
+ CONF_DEBUG_LOGGING: options.get(
+ CONF_DEBUG_LOGGING,
+ DEFAULT_DEBUG_LOGGING,
+ ),
"language": hass.config.language,
},
)
diff --git a/custom_components/openclaw/api.py b/custom_components/openclaw/api.py
index 65f4c69..ee8e5e8 100644
--- a/custom_components/openclaw/api.py
+++ b/custom_components/openclaw/api.py
@@ -23,6 +23,10 @@
# Timeout for streaming chat completions (long-running)
STREAM_TIMEOUT = aiohttp.ClientTimeout(total=300, sock_read=120)
+# Retry config for transient connection failures
+_MAX_RETRIES = 2
+_RETRY_DELAY = 1.0 # seconds
+
class OpenClawApiError(Exception):
"""Base exception for OpenClaw API errors."""
@@ -52,6 +56,7 @@ def __init__(
verify_ssl: bool = True,
session: aiohttp.ClientSession | None = None,
agent_id: str = "main",
+ debug_logging: bool = False,
) -> None:
"""Initialize the API client.
@@ -71,6 +76,7 @@ def __init__(
self._verify_ssl = verify_ssl
self._session = session
self._agent_id = agent_id
+ self._debug_logging = debug_logging
self._base_url = f"{'https' if use_ssl else 'http'}://{host}:{port}"
# ssl=False disables cert verification for self-signed certs;
# ssl=None uses default verification.
@@ -85,6 +91,20 @@ def update_token(self, token: str) -> None:
"""Update the authentication token (e.g., after addon restart)."""
self._token = token
+
+    def _log_request(
+        self,
+        label: str,
+        agent_id: str | None,
+        model: str | None,
+        session_id: str | None,
+        headers: dict[str, str],
+    ) -> None:
+ if not self._debug_logging:
+ return
+        safe_headers = {k: ("<redacted>" if k.lower() == "authorization" else v) for k, v in headers.items()}
+ _LOGGER.warning(
+ "OpenClaw API %s: agent=%s model=%s session=%s headers=%s",
+ label,
+ agent_id or self._agent_id or "main",
+ model,
+ session_id,
+ safe_headers,
+ )
+
def _headers(
self,
agent_id: str | None = None,
@@ -102,8 +122,21 @@ def _headers(
return headers
async def _get_session(self) -> aiohttp.ClientSession:
- """Get or create an aiohttp session."""
- if self._session is None or self._session.closed:
+ """Get the aiohttp session.
+
+ Prefers the HA-managed session passed in the constructor.
+ Falls back to creating a new session only if none was provided.
+ """
+ if self._session is not None and not self._session.closed:
+ return self._session
+ if self._session is None:
+ # No session was provided at init; create one as last resort
+ _LOGGER.debug("Creating fallback aiohttp session (no HA session provided)")
+ self._session = aiohttp.ClientSession()
+ else:
+ # Session was provided but is now closed; this shouldn't happen
+ # with HA-managed sessions, but handle gracefully
+ _LOGGER.warning("HA-managed aiohttp session was closed unexpectedly, creating replacement")
self._session = aiohttp.ClientSession()
return self._session
@@ -227,6 +260,8 @@ async def async_send_message(
payload["user"] = session_id
if model:
payload["model"] = model
+ elif agent_id:
+ payload["model"] = f"openclaw:{agent_id}"
# Pass session_id as a custom header or param if supported by gateway
headers = self._headers(agent_id=agent_id, extra_headers=extra_headers)
@@ -237,6 +272,8 @@ async def async_send_message(
session = await self._get_session()
url = f"{self._base_url}{API_CHAT_COMPLETIONS}"
+ self._log_request("chat", agent_id, payload.get("model"), session_id, headers)
+
try:
async with session.post(
url,
@@ -257,6 +294,21 @@ async def async_send_message(
f"Cannot connect to OpenClaw gateway: {err}"
) from err
+ async def async_send_message_with_retry(self, **kwargs: Any) -> dict[str, Any]:
+ """Send a message with automatic retry on transient connection failures."""
+ last_err: Exception | None = None
+ for attempt in range(_MAX_RETRIES + 1):
+ try:
+ return await self.async_send_message(**kwargs)
+ except OpenClawConnectionError as err:
+ last_err = err
+ if attempt < _MAX_RETRIES:
+ _LOGGER.debug("Connection failed (attempt %d/%d), retrying in %ss", attempt + 1, _MAX_RETRIES + 1, _RETRY_DELAY)
+ await asyncio.sleep(_RETRY_DELAY)
+ except OpenClawAuthError:
+ raise # Don't retry auth errors
+ raise last_err # type: ignore[misc]
+
async def async_stream_message(
self,
message: str,
@@ -294,6 +346,8 @@ async def async_stream_message(
payload["user"] = session_id
if model:
payload["model"] = model
+ elif agent_id:
+ payload["model"] = f"openclaw:{agent_id}"
headers = self._headers(agent_id=agent_id, extra_headers=extra_headers)
if session_id:
@@ -303,6 +357,8 @@ async def async_stream_message(
session = await self._get_session()
url = f"{self._base_url}{API_CHAT_COMPLETIONS}"
+ self._log_request("chat", agent_id, payload.get("model"), session_id, headers)
+
try:
async with session.post(
url,
diff --git a/custom_components/openclaw/config_flow.py b/custom_components/openclaw/config_flow.py
index 6cb7a3d..daa4ce6 100644
--- a/custom_components/openclaw/config_flow.py
+++ b/custom_components/openclaw/config_flow.py
@@ -37,7 +37,6 @@
ADDON_SLUG_FRAGMENTS,
CONF_ADDON_CONFIG_PATH,
CONF_AGENT_ID,
- CONF_ASSIST_SESSION_ID,
CONF_GATEWAY_HOST,
CONF_GATEWAY_PORT,
CONF_GATEWAY_TOKEN,
@@ -54,11 +53,11 @@
CONF_BROWSER_VOICE_LANGUAGE,
CONF_VOICE_PROVIDER,
CONF_THINKING_TIMEOUT,
+ CONF_DEBUG_LOGGING,
BROWSER_VOICE_LANGUAGES,
CONTEXT_STRATEGY_CLEAR,
CONTEXT_STRATEGY_TRUNCATE,
DEFAULT_AGENT_ID,
- DEFAULT_ASSIST_SESSION_ID,
DEFAULT_GATEWAY_HOST,
DEFAULT_GATEWAY_PORT,
DEFAULT_CONTEXT_MAX_CHARS,
@@ -71,6 +70,7 @@
DEFAULT_BROWSER_VOICE_LANGUAGE,
DEFAULT_VOICE_PROVIDER,
DEFAULT_THINKING_TIMEOUT,
+ DEFAULT_DEBUG_LOGGING,
DEFAULT_VOICE_AGENT_ID,
DOMAIN,
OPENCLAW_CONFIG_REL_PATH,
@@ -82,7 +82,9 @@
# ── Filesystem helpers ────────────────────────────────────────────────────────
def _find_addon_config_dir() -> Path | None:
- """Scan /addon_configs/ for the OpenClaw addon directory.
+ """Scan /addon_configs/ for the OpenClaw addon directory (blocking I/O).
+
+ Must be called via hass.async_add_executor_job() from async code.
The Supervisor prepends a repository-specific hash to the addon slug:
/addon_configs/_/
@@ -477,13 +479,6 @@ async def async_step_init(
DEFAULT_VOICE_AGENT_ID,
),
): str,
- vol.Optional(
- CONF_ASSIST_SESSION_ID,
- default=options.get(
- CONF_ASSIST_SESSION_ID,
- DEFAULT_ASSIST_SESSION_ID,
- ),
- ): str,
vol.Optional(
CONF_INCLUDE_EXPOSED_CONTEXT,
default=options.get(
@@ -537,6 +532,13 @@ async def async_step_init(
CONF_VOICE_PROVIDER,
default=selected_provider,
): vol.In(["browser", "assist_stt"]),
+ vol.Optional(
+ CONF_DEBUG_LOGGING,
+ default=options.get(
+ CONF_DEBUG_LOGGING,
+ DEFAULT_DEBUG_LOGGING,
+ ),
+ ): bool,
vol.Optional(
CONF_THINKING_TIMEOUT,
default=options.get(
diff --git a/custom_components/openclaw/const.py b/custom_components/openclaw/const.py
index b96a69c..bff29de 100644
--- a/custom_components/openclaw/const.py
+++ b/custom_components/openclaw/const.py
@@ -10,6 +10,7 @@
ADDON_CONFIGS_ROOT = "/addon_configs"
ADDON_SLUG_FRAGMENTS = ("openclaw_assistant", "openclaw")
OPENCLAW_CONFIG_REL_PATH = ".openclaw/openclaw.json"
+ASSIST_SESSION_STORE_KEY = "openclaw_assist_sessions"
# Defaults
DEFAULT_GATEWAY_HOST = "127.0.0.1"
@@ -38,6 +39,7 @@
CONF_VOICE_PROVIDER = "voice_provider"
CONF_BROWSER_VOICE_LANGUAGE = "browser_voice_language"
CONF_THINKING_TIMEOUT = "thinking_timeout"
+CONF_DEBUG_LOGGING = "debug_logging"
DEFAULT_AGENT_ID = "main"
DEFAULT_VOICE_AGENT_ID = ""
@@ -52,6 +54,7 @@
DEFAULT_VOICE_PROVIDER = "browser"
DEFAULT_BROWSER_VOICE_LANGUAGE = "auto"
DEFAULT_THINKING_TIMEOUT = 120
+DEFAULT_DEBUG_LOGGING = False
BROWSER_VOICE_LANGUAGES: tuple[str, ...] = (
"auto",
@@ -110,6 +113,8 @@
DATA_LAST_TOOL_INVOKED_AT = "last_tool_invoked_at"
DATA_LAST_TOOL_ERROR = "last_tool_error"
DATA_LAST_TOOL_RESULT_PREVIEW = "last_tool_result_preview"
+DATA_ASSIST_SESSIONS = "assist_sessions"
+DATA_ASSIST_SESSION_STORE = "assist_session_store"
# Platforms
PLATFORMS = ["sensor", "binary_sensor", "conversation", "event", "button", "select"]
diff --git a/custom_components/openclaw/conversation.py b/custom_components/openclaw/conversation.py
index 6baa134..b3c911d 100644
--- a/custom_components/openclaw/conversation.py
+++ b/custom_components/openclaw/conversation.py
@@ -7,36 +7,46 @@
from __future__ import annotations
from datetime import datetime, timezone
+from uuid import uuid4
import logging
+import re
from typing import Any
from homeassistant.components import conversation
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
+from homeassistant.helpers import area_registry as ar, device_registry as dr
from homeassistant.helpers.entity_platform import AddEntitiesCallback
+from homeassistant.helpers.storage import Store
from homeassistant.helpers import intent
-from .api import OpenClawApiClient, OpenClawApiError
+from .api import OpenClawApiClient, OpenClawApiError, OpenClawConnectionError, OpenClawAuthError
from .const import (
ATTR_MESSAGE,
ATTR_MODEL,
ATTR_SESSION_ID,
ATTR_TIMESTAMP,
- CONF_ASSIST_SESSION_ID,
+ CONF_AGENT_ID,
CONF_CONTEXT_MAX_CHARS,
CONF_CONTEXT_STRATEGY,
+ CONF_DEBUG_LOGGING,
CONF_INCLUDE_EXPOSED_CONTEXT,
CONF_VOICE_AGENT_ID,
- DEFAULT_ASSIST_SESSION_ID,
+ DEFAULT_AGENT_ID,
DEFAULT_CONTEXT_MAX_CHARS,
DEFAULT_CONTEXT_STRATEGY,
+ DEFAULT_DEBUG_LOGGING,
DEFAULT_INCLUDE_EXPOSED_CONTEXT,
DATA_MODEL,
DOMAIN,
EVENT_MESSAGE_RECEIVED,
+ DATA_ASSIST_SESSIONS,
+ DATA_ASSIST_SESSION_STORE,
+ ASSIST_SESSION_STORE_KEY,
)
from .coordinator import OpenClawCoordinator
from .exposure import apply_context_policy, build_exposed_entities_context
+from .utils import extract_text_recursive, normalize_optional_text
_LOGGER = logging.getLogger(__name__)
@@ -52,6 +62,13 @@ async def async_setup_entry(
async_add_entities: AddEntitiesCallback,
) -> None:
"""Set up the OpenClaw conversation agent."""
+ # Load persisted assist sessions
+ store = Store(hass, 1, ASSIST_SESSION_STORE_KEY)
+ stored = await store.async_load() or {}
+ hass.data.setdefault(DOMAIN, {})
+ hass.data[DOMAIN][DATA_ASSIST_SESSIONS] = stored
+ hass.data[DOMAIN][DATA_ASSIST_SESSION_STORE] = store
+
agent = OpenClawConversationAgent(hass, entry)
conversation.async_set_agent(hass, entry, agent)
@@ -115,12 +132,19 @@ async def async_process(
coordinator: OpenClawCoordinator = entry_data["coordinator"]
message = user_input.text
- conversation_id = self._resolve_conversation_id(user_input)
assistant_id = "conversation"
options = self.entry.options
- voice_agent_id = self._normalize_optional_text(
+ voice_agent_id = normalize_optional_text(
options.get(CONF_VOICE_AGENT_ID)
)
+ configured_agent_id = normalize_optional_text(
+ options.get(
+ CONF_AGENT_ID,
+ self.entry.data.get(CONF_AGENT_ID, DEFAULT_AGENT_ID),
+ )
+ )
+ resolved_agent_id = voice_agent_id or configured_agent_id
+ conversation_id = self._resolve_conversation_id(user_input, resolved_agent_id)
include_context = options.get(
CONF_INCLUDE_EXPOSED_CONTEXT,
DEFAULT_INCLUDE_EXPOSED_CONTEXT,
@@ -138,20 +162,48 @@ async def async_process(
)
exposed_context = apply_context_policy(raw_context, max_chars, strategy)
extra_system_prompt = getattr(user_input, "extra_system_prompt", None)
+
+ # Resolve the originating device's area for room-aware responses
+ device_area_context = self._resolve_device_area(user_input)
+
system_prompt = "\n\n".join(
- part for part in (exposed_context, extra_system_prompt) if part
+ part
+ for part in (device_area_context, exposed_context, extra_system_prompt)
+ if part
) or None
+ # Add device/area headers when available
+ device_id = getattr(user_input, "device_id", None)
+ if device_id or device_area_context:
+ voice_headers = dict(_VOICE_REQUEST_HEADERS)
+ if device_id:
+ voice_headers["x-openclaw-device-id"] = device_id
+ if device_area_context:
+ area_name = device_area_context.removeprefix("[Voice command from: ").removesuffix("]")
+ voice_headers["x-openclaw-area"] = area_name
+ else:
+ voice_headers = None
+
+ if options.get(CONF_DEBUG_LOGGING, DEFAULT_DEBUG_LOGGING):
+ _LOGGER.info(
+ "OpenClaw Assist routing: agent=%s session=%s area=%s",
+ resolved_agent_id or "main",
+ conversation_id,
+ voice_headers.get("x-openclaw-area", "unknown") if voice_headers else "none",
+ )
+
try:
full_response = await self._get_response(
client,
message,
conversation_id,
- voice_agent_id,
+ resolved_agent_id,
system_prompt,
+ voice_headers,
)
except OpenClawApiError as err:
_LOGGER.error("OpenClaw conversation error: %s", err)
+ error_code = self._map_error_code(err)
# Try token refresh if we have the capability
refresh_fn = entry_data.get("refresh_token")
@@ -163,23 +215,27 @@ async def async_process(
client,
message,
conversation_id,
- voice_agent_id,
+ resolved_agent_id,
system_prompt,
+ voice_headers,
)
except OpenClawApiError as retry_err:
return self._error_result(
user_input,
f"Error communicating with OpenClaw: {retry_err}",
+ self._map_error_code(retry_err),
)
else:
return self._error_result(
user_input,
f"Error communicating with OpenClaw: {err}",
+ error_code,
)
else:
return self._error_result(
user_input,
f"Error communicating with OpenClaw: {err}",
+ error_code,
)
# Fire event so automations can react to the response
@@ -197,42 +253,58 @@ async def async_process(
intent_response = intent.IntentResponse(language=user_input.language)
intent_response.async_set_speech(full_response)
+
return conversation.ConversationResult(
response=intent_response,
conversation_id=conversation_id,
+ continue_conversation=self._should_continue(full_response),
)
- def _resolve_conversation_id(self, user_input: conversation.ConversationInput) -> str:
- """Return conversation id from HA or a stable Assist fallback session key."""
- configured_session_id = self._normalize_optional_text(
- self.entry.options.get(
- CONF_ASSIST_SESSION_ID,
- DEFAULT_ASSIST_SESSION_ID,
- )
- )
- if configured_session_id:
- return configured_session_id
+ def _resolve_conversation_id(self, user_input: conversation.ConversationInput, agent_id: str | None) -> str:
+ """Return a stable, agent-scoped session key persisted across HA restarts."""
+ domain_store = self.hass.data.setdefault(DOMAIN, {})
+ session_cache = domain_store.setdefault(DATA_ASSIST_SESSIONS, {})
+ cache_key = agent_id or "main"
+ cached_session = session_cache.get(cache_key)
+ if cached_session:
+ return cached_session
- if user_input.conversation_id:
- return user_input.conversation_id
+ new_session = f"agent:{cache_key}:assist_{uuid4().hex[:12]}"
+ session_cache[cache_key] = new_session
- context = getattr(user_input, "context", None)
- user_id = getattr(context, "user_id", None)
- if user_id:
- return f"assist_user_{user_id}"
+ store = domain_store.get(DATA_ASSIST_SESSION_STORE)
+ if store:
+ self.hass.async_create_task(store.async_save(session_cache))
- device_id = getattr(user_input, "device_id", None)
- if device_id:
- return f"assist_device_{device_id}"
+ return new_session
- return "assist_default"
+ def _resolve_device_area(
+ self, user_input: conversation.ConversationInput
+ ) -> str | None:
+ """Resolve the area name for the device that initiated the conversation.
- def _normalize_optional_text(self, value: Any) -> str | None:
- """Return a stripped string or None for blank values."""
- if not isinstance(value, str):
+ Returns a short context string like '[Voice command from: Study]'
+ so the agent knows which room the user is in.
+ """
+ device_id = getattr(user_input, "device_id", None)
+ if not device_id:
+ return None
+
+ try:
+ dev_reg = dr.async_get(self.hass)
+ device_entry = dev_reg.async_get(device_id)
+ if not device_entry or not device_entry.area_id:
+ return None
+
+ area_reg = ar.async_get(self.hass)
+ area_entry = area_reg.async_get_area(device_entry.area_id)
+ if not area_entry:
+ return None
+
+ return f"[Voice command from: {area_entry.name}]"
+ except Exception:
+ _LOGGER.debug("Could not resolve area for device %s", device_id)
return None
- cleaned = value.strip()
- return cleaned or None
async def _get_response(
self,
@@ -241,16 +313,21 @@ async def _get_response(
conversation_id: str,
agent_id: str | None = None,
system_prompt: str | None = None,
+ extra_headers: dict[str, str] | None = None,
) -> str:
"""Get a response from OpenClaw, trying streaming first."""
+ headers = extra_headers or _VOICE_REQUEST_HEADERS
+ model_override = f"openclaw:{agent_id}" if agent_id else None
+
# Try streaming (lower TTFB for voice pipeline)
full_response = ""
async for chunk in client.async_stream_message(
message=message,
session_id=conversation_id,
+ model=model_override,
system_prompt=system_prompt,
agent_id=agent_id,
- extra_headers=_VOICE_REQUEST_HEADERS,
+ extra_headers=headers,
):
full_response += chunk
@@ -261,68 +338,78 @@ async def _get_response(
response = await client.async_send_message(
message=message,
session_id=conversation_id,
+ model=model_override,
system_prompt=system_prompt,
agent_id=agent_id,
- extra_headers=_VOICE_REQUEST_HEADERS,
+ extra_headers=headers,
)
- extracted = self._extract_text_recursive(response)
+ extracted = extract_text_recursive(response)
return extracted or ""
- def _extract_text_recursive(self, value: Any, depth: int = 0) -> str | None:
- """Recursively extract assistant text from nested response payloads."""
- if depth > 8:
- return None
-
- if isinstance(value, str):
- text = value.strip()
- return text or None
-
- if isinstance(value, list):
- parts: list[str] = []
- for item in value:
- extracted = self._extract_text_recursive(item, depth + 1)
- if extracted:
- parts.append(extracted)
- if parts:
- return "\n".join(parts)
- return None
+ @staticmethod
+ def _should_continue(response: str) -> bool:
+ """Determine if the conversation should continue after this response.
- if isinstance(value, dict):
- priority_keys = (
- "output_text",
- "text",
- "content",
- "message",
- "response",
- "answer",
- "choices",
- "output",
- "delta",
- )
+ Returns True when the assistant's reply ends with a question or
+ an explicit prompt for follow-up, so that Voice PE and other
+ satellites automatically re-listen without requiring a wake word.
- for key in priority_keys:
- if key not in value:
- continue
- extracted = self._extract_text_recursive(value.get(key), depth + 1)
- if extracted:
- return extracted
+ The heuristic checks for:
+ - Trailing question marks (including after closing quotes/parens)
+ - Common conversational follow-up patterns in English and German
+ """
+ if not response:
+ return False
+
+ text = response.strip()
+
+ # Check if the response ends with a question mark
+ # (allow trailing punctuation like quotes, parens, or emoji)
+ if re.search(r"\?\s*[\"'\u201c\u201d\u00bb)\]]*\s*$", text):
+ return True
+
+ # Common follow-up patterns (EN + DE)
+ lower = text.lower()
+ follow_up_patterns = (
+ "what do you think",
+ "would you like",
+ "do you want",
+ "shall i",
+ "should i",
+ "can i help",
+ "anything else",
+ "let me know",
+ "was meinst du",
+ "möchtest du",
+ "willst du",
+ "soll ich",
+ "kann ich",
+ "noch etwas",
+ "sonst noch",
+ )
+ for pattern in follow_up_patterns:
+ if pattern in lower:
+ return True
- for nested_value in value.values():
- extracted = self._extract_text_recursive(nested_value, depth + 1)
- if extracted:
- return extracted
+ return False
- return None
+ @staticmethod
+ def _map_error_code(err: OpenClawApiError) -> intent.IntentResponseErrorCode:
+ """Map OpenClaw exceptions to HA intent error codes."""
+ if isinstance(err, (OpenClawConnectionError, OpenClawAuthError)):
+ return intent.IntentResponseErrorCode.FAILED_TO_HANDLE
+ return intent.IntentResponseErrorCode.UNKNOWN
def _error_result(
self,
user_input: conversation.ConversationInput,
error_message: str,
+ error_code: intent.IntentResponseErrorCode = intent.IntentResponseErrorCode.UNKNOWN,
) -> conversation.ConversationResult:
"""Build an error ConversationResult."""
intent_response = intent.IntentResponse(language=user_input.language)
intent_response.async_set_error(
- intent.IntentResponseErrorCode.UNKNOWN,
+ error_code,
error_message,
)
return conversation.ConversationResult(
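The `_should_continue` heuristic added above can be sketched standalone as below. This is a minimal reimplementation for illustration, not the integration's actual module: the pattern list is abbreviated, and the function name mirrors the private method only for readability.

```python
import re

# Abbreviated follow-up phrase list; the integration's version also
# carries German patterns such as "soll ich" and "noch etwas".
FOLLOW_UP_PATTERNS = (
    "would you like",
    "anything else",
    "let me know",
    "soll ich",
    "noch etwas",
)


def should_continue(response: str) -> bool:
    """Return True when the reply invites a follow-up turn."""
    if not response:
        return False
    text = response.strip()
    # Trailing question mark, optionally followed by closing
    # quotes, guillemets, parens, or brackets.
    if re.search(r"\?\s*[\"'\u201c\u201d\u00bb)\]]*\s*$", text):
        return True
    lower = text.lower()
    return any(pattern in lower for pattern in FOLLOW_UP_PATTERNS)
```

When this returns True, the conversation result's `continue_conversation` flag tells Voice PE and other satellites to re-open the microphone without a fresh wake word.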
diff --git a/custom_components/openclaw/exposure.py b/custom_components/openclaw/exposure.py
index 8fd694e..ecc9505 100644
--- a/custom_components/openclaw/exposure.py
+++ b/custom_components/openclaw/exposure.py
@@ -3,9 +3,19 @@
from __future__ import annotations
from collections import Counter
from homeassistant.components.homeassistant import async_should_expose
from homeassistant.core import HomeAssistant
+from homeassistant.helpers import area_registry as ar, entity_registry as er
+from homeassistant.util import dt as dt_util
+
+# Attributes worth including in entity context (keeps prompt compact)
+_USEFUL_ATTRIBUTES = frozenset({
+ "brightness", "color_temp", "color_mode", "hvac_mode", "hvac_action",
+ "temperature", "current_temperature", "target_temp_high", "target_temp_low",
+ "battery_level", "battery", "media_title", "media_artist", "source",
+ "volume_level", "is_volume_muted", "preset_mode", "fan_mode",
+})
def build_exposed_entities_context(
@@ -16,6 +27,7 @@ def build_exposed_entities_context(
"""Build a compact prompt block of entities exposed to an assistant.
Uses Home Assistant's built-in expose rules (Settings -> Voice assistants -> Expose).
+ Includes area assignments and useful state attributes for richer LLM context.
"""
assistant_id = assistant or "conversation"
@@ -34,10 +46,28 @@ def _collect_for(assistant_value: str) -> list:
if not exposed_states:
return None
+ # Build area lookup
+ ent_reg = er.async_get(hass)
+ area_reg = ar.async_get(hass)
+ area_cache: dict[str | None, str] = {}
+
+ def _get_area_name(entity_id: str) -> str | None:
+ entry = ent_reg.async_get(entity_id)
+ if not entry:
+ return None
+ area_id = entry.area_id
+ if area_id and area_id not in area_cache:
+ area_entry = area_reg.async_get_area(area_id)
+ area_cache[area_id] = area_entry.name if area_entry else ""
+ return area_cache.get(area_id) or None
+
exposed_states.sort(key=lambda state: state.entity_id)
domain_counts = Counter(state.domain for state in exposed_states)
+ now = dt_util.now()
lines: list[str] = [
+ f"Current date and time: {now.strftime('%A %d %B %Y, %H:%M %Z')}",
+ "",
"Home Assistant live context (entities exposed to this assistant):",
f"- total_exposed_entities: {len(exposed_states)}",
"- domain_counts:",
@@ -47,9 +77,25 @@ def _collect_for(assistant_value: str) -> list:
for state in exposed_states[:max_entities]:
friendly_name = state.name or state.entity_id
- lines.append(
- f" - id: {state.entity_id}; name: {friendly_name}; state: {state.state}"
- )
+ area_name = _get_area_name(state.entity_id)
+ parts = [
+ f"id: {state.entity_id}",
+ f"name: {friendly_name}",
+ f"state: {state.state}",
+ ]
+ if area_name:
+ parts.append(f"area: {area_name}")
+
+ # Include useful attributes (skip empty/None)
+ useful_attrs = {
+ k: v for k, v in state.attributes.items()
+ if k in _USEFUL_ATTRIBUTES and v is not None
+ }
+ if useful_attrs:
+ attrs_str = ", ".join(f"{k}={v}" for k, v in useful_attrs.items())
+ parts.append(f"attrs: {attrs_str}")
+
+ lines.append(f" - {'; '.join(parts)}")
if len(exposed_states) > max_entities:
lines.append(
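The per-entity context line built by the new exposure.py code can be sketched as follows. This is an assumption-laden mock: `format_entity_line` and its keyword arguments are illustrative names, and the attribute whitelist is trimmed to three entries.

```python
# Trimmed stand-in for exposure.py's _USEFUL_ATTRIBUTES whitelist.
USEFUL_ATTRIBUTES = frozenset({"brightness", "battery_level", "volume_level"})


def format_entity_line(entity_id, name, state, area=None, attributes=None):
    """Build one compact '; '-joined context line for an entity."""
    parts = [f"id: {entity_id}", f"name: {name}", f"state: {state}"]
    if area:
        parts.append(f"area: {area}")
    # Only whitelisted, non-None attributes make it into the prompt.
    useful = {
        k: v
        for k, v in (attributes or {}).items()
        if k in USEFUL_ATTRIBUTES and v is not None
    }
    if useful:
        parts.append("attrs: " + ", ".join(f"{k}={v}" for k, v in useful.items()))
    return "  - " + "; ".join(parts)
```

Whitelisting attributes keeps the prompt compact: a light with twenty attributes contributes only the one or two the LLM can actually act on.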
diff --git a/custom_components/openclaw/strings.json b/custom_components/openclaw/strings.json
index 69909f9..f25fbfb 100644
--- a/custom_components/openclaw/strings.json
+++ b/custom_components/openclaw/strings.json
@@ -40,7 +40,6 @@
"data": {
"agent_id": "Agent ID (e.g. main)",
"voice_agent_id": "Voice agent ID (optional)",
- "assist_session_id": "Assist session ID override (optional)",
"include_exposed_context": "Include exposed entities context",
"context_max_chars": "Max context characters",
"context_strategy": "When context exceeds max",
diff --git a/custom_components/openclaw/translations/en.json b/custom_components/openclaw/translations/en.json
index b14609f..b247a3f 100644
--- a/custom_components/openclaw/translations/en.json
+++ b/custom_components/openclaw/translations/en.json
@@ -42,7 +42,6 @@
"data": {
"agent_id": "Agent ID (e.g. main)",
"voice_agent_id": "Voice agent ID (optional)",
- "assist_session_id": "Assist session ID override (optional)",
"include_exposed_context": "Include exposed entities context",
"context_max_chars": "Max context characters",
"context_strategy": "When context exceeds max",
@@ -52,7 +51,8 @@
"allow_brave_webspeech": "Allow Web Speech in Brave (experimental)",
"voice_provider": "Voice input provider (browser or HA STT)",
"browser_voice_language": "Browser voice language",
- "thinking_timeout": "Response timeout (seconds)"
+ "thinking_timeout": "Response timeout (seconds)",
+ "debug_logging": "Enable debug logging"
}
}
}
diff --git a/custom_components/openclaw/utils.py b/custom_components/openclaw/utils.py
new file mode 100644
index 0000000..57a1742
--- /dev/null
+++ b/custom_components/openclaw/utils.py
@@ -0,0 +1,60 @@
+"""Shared utility functions for the OpenClaw integration."""
+
+from __future__ import annotations
+
+from typing import Any
+
+
+def normalize_optional_text(value: Any) -> str | None:
+ """Return a stripped string or None for blank values."""
+ if not isinstance(value, str):
+ return None
+ cleaned = value.strip()
+ return cleaned or None
+
+
+def extract_text_recursive(value: Any, depth: int = 0) -> str | None:
+ """Recursively extract assistant text from nested response payloads."""
+ if depth > 8:
+ return None
+
+ if isinstance(value, str):
+ text = value.strip()
+ return text or None
+
+ if isinstance(value, list):
+ parts: list[str] = []
+ for item in value:
+ extracted = extract_text_recursive(item, depth + 1)
+ if extracted:
+ parts.append(extracted)
+ if parts:
+ return "\n".join(parts)
+ return None
+
+ if isinstance(value, dict):
+ priority_keys = (
+ "output_text",
+ "text",
+ "content",
+ "message",
+ "response",
+ "answer",
+ "choices",
+ "output",
+ "delta",
+ )
+
+ for key in priority_keys:
+ if key not in value:
+ continue
+ extracted = extract_text_recursive(value.get(key), depth + 1)
+ if extracted:
+ return extracted
+
+ for nested_value in value.values():
+ extracted = extract_text_recursive(nested_value, depth + 1)
+ if extracted:
+ return extracted
+
+ return None
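The two helpers extracted into utils.py behave as shown below. The function bodies are copied from the new module; the sample payload is an assumption about the gateway's OpenAI-style response shape, not a documented contract.

```python
from __future__ import annotations

from typing import Any


def normalize_optional_text(value: Any) -> str | None:
    """Return a stripped string or None for blank/non-string values."""
    if not isinstance(value, str):
        return None
    cleaned = value.strip()
    return cleaned or None


def extract_text_recursive(value: Any, depth: int = 0) -> str | None:
    """Recursively extract assistant text from nested response payloads."""
    if depth > 8:
        return None
    if isinstance(value, str):
        text = value.strip()
        return text or None
    if isinstance(value, list):
        parts = [t for t in (extract_text_recursive(i, depth + 1) for i in value) if t]
        return "\n".join(parts) if parts else None
    if isinstance(value, dict):
        # Try well-known payload keys first, then fall back to any value.
        priority_keys = (
            "output_text", "text", "content", "message",
            "response", "answer", "choices", "output", "delta",
        )
        for key in priority_keys:
            if key in value:
                extracted = extract_text_recursive(value[key], depth + 1)
                if extracted:
                    return extracted
        for nested_value in value.values():
            extracted = extract_text_recursive(nested_value, depth + 1)
            if extracted:
                return extracted
    return None


# Hypothetical gateway payload in OpenAI chat-completions shape.
payload = {"choices": [{"message": {"content": " Hello from OpenClaw "}}]}
```

The depth cap of 8 guards against pathological or self-referential payloads; the priority-key pass means an OpenAI-shaped response resolves in three hops (`choices` → `message` → `content`) without scanning unrelated keys.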