
API Reference

Base URL: http://localhost:6052

WebSocket API (/ws)

The primary API. A single multiplexed WebSocket handles all 44 commands.

Protocol

Connect: ws://localhost:6052/ws

On connect, the server sends a ServerInfoMessage:

{"server_version": "0.0.0", "esphome_version": "2026.3.1", "port": 6052, "ha_addon": false, "requires_auth": false}

Send a CommandMessage:

{"command": "devices/list", "message_id": "1", "args": {}}

Receive a ResultMessage:

{"message_id": "1", "result": { ... }}

Streaming output (EventMessage):

{"message_id": "1", "event": "output", "data": "Compiling...\n"}
{"message_id": "1", "event": "result", "data": {"success": true, "code": 0}}

Error (ErrorMessage):

{"message_id": "1", "error_code": "unknown_command", "details": "..."}

Error Codes (ErrorCode)

Code Description
invalid_message Malformed JSON or missing fields
unknown_command Command not found
invalid_args Missing or invalid arguments
not_found Resource not found
internal_error Server error
not_authenticated Connection has not authenticated; only auth/login is accepted
rate_limited Too many failed login attempts from this IP

Enums

Enum Values Description
DeviceState unknown, online, offline Device connectivity state (mDNS + ping)

Commands

Authentication

Controller: AuthController

When the dashboard is started with --username/--password (or $ESPHOME_USERNAME/$ESPHOME_PASSWORD env vars), every WebSocket connection on the public port must authenticate before any other command will be accepted.

The handshake:

  1. Server sends ServerInfoMessage with requires_auth: true.
  2. Client sends auth/login (or its alias auth) with either {username, password} or a previously issued {token}.
  3. Server replies with {token, expires_at}.
  4. Subsequent commands on the same connection are accepted normally.

Tokens are opaque random strings, persisted to <config>/.device-builder-sessions.json, and auto-refresh on each use (sliding 30-day window). Frontends should store the token in localStorage and reuse it on reconnect — only fall back to the password form on not_authenticated.

Connections that arrive on the trusted ingress site (HA add-on supervisor proxy) get requires_auth: false and skip the handshake entirely.

Command Args Response Description
auth/login (alias: auth) {username, password} or {token} {token, expires_at} Authenticate this connection
auth/logout {logged_out: true} Revoke the current token; closes the connection
auth/refresh {token, expires_at} Slide the expiry forward without making another API call

Bearer header (non-browser clients). Anything that can set HTTP headers — the HA esphome-dashboard-api client, CLI tools, scripts — may pass Authorization: Bearer <token> on the WS handshake or on a REST request. The server treats that as equivalent to a successful in-band auth/login {token} call.

Basic auth (REST only). Legacy REST endpoints also accept Authorization: Basic <base64(user:pass)>. WebSocket clients can't use this because browsers don't allow setting headers on new WebSocket(...).

Rate limiting. After 10 failed login attempts from one IP within a 5-minute window, that IP is locked out for 5 minutes. A successful login clears the failure history immediately. Token-based logins (replays) are exempt — brute-forcing 256 bits of token entropy is infeasible, and rate-limiting valid replays would lock legitimate clients out after a network blip.
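The lockout logic above can be sketched as a per-IP sliding window; the constants and class name are illustrative:

```python
from collections import defaultdict, deque

MAX_FAILURES = 10
WINDOW = 300.0   # 5-minute failure window
LOCKOUT = 300.0  # 5-minute lockout


class LoginRateLimiter:
    def __init__(self) -> None:
        self._failures: dict[str, deque[float]] = defaultdict(deque)
        self._locked_until: dict[str, float] = {}

    def is_locked(self, ip: str, now: float) -> bool:
        return self._locked_until.get(ip, 0.0) > now

    def record_failure(self, ip: str, now: float) -> None:
        q = self._failures[ip]
        q.append(now)
        while q and q[0] <= now - WINDOW:  # drop failures older than the window
            q.popleft()
        if len(q) >= MAX_FAILURES:
            self._locked_until[ip] = now + LOCKOUT

    def record_success(self, ip: str) -> None:
        # A successful login clears the failure history immediately.
        self._failures.pop(ip, None)
        self._locked_until.pop(ip, None)
```

Token replays would bypass record_failure entirely, matching the exemption described above.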

Devices

Models: Device, DevicesResponse

Controller: DevicesController

Command Args Response Description
devices/list DevicesResponse List configured + importable devices
devices/get_states dict Get device online/offline states
devices/create {name, board_id, config_type?, ssid?, psk?, file_content?} WizardResponse Create device from board definition
devices/update {name, friendly_name?, comment?, board_id?} UpdateDeviceResponse Update device metadata
devices/set_labels {configuration, label_ids: string[]} Device Replace this device's label assignments. Pass [] to clear. Unknown ids return INVALID_ARGS. Fires device_updated after the scanner reload.
devices/rename {configuration, new_name} Rename device via ESPHome CLI
devices/delete {configuration} Delete device and associated files
devices/delete_bulk {configurations: string[]} [{configuration, success, error?}] Delete multiple devices
devices/archive {configuration} Soft-delete: move YAML to <config_dir>/archive/, wipe build dir, wipe StorageJSON + device-metadata sidecars. Reversible via devices/unarchive (cached IP/version/hash refill from the next mDNS broadcast).
devices/archive_bulk {configurations: string[]} [{configuration, success, error?}] Archive multiple devices at once. Same per-item shape as devices/delete_bulk.
devices/unarchive {configuration} Move an archived YAML back into the active config directory. Errors with INVALID_ARGS if an active config with the same filename already exists.
devices/list_archived [{configuration, name, friendly_name, comment}] List archived devices for the dashboard's archived-devices dialog.
devices/delete_archived {configuration} Permanently delete an archived YAML and its sidecars. The companion to unarchive for "I really don't want this back".
devices/get_config {configuration} string Read device YAML config
devices/update_config {configuration, content} Write device YAML config
devices/add_component {configuration, component_id, fields?, sub_entities?} AddComponentResponse Add component to device config
devices/import {name, project_name?, package_import_url?, ...} dict Import/adopt discovered device
devices/ignore {name, ignore?} Toggle device visibility
devices/validate {configuration} Streaming Validate YAML config
devices/logs {configuration, port?: "OTA" | serial, no_states?: bool} Streaming Stream live device logs. port defaults to "OTA" (empty string is treated the same) — without a default, esphome logs falls into an interactive port-choice prompt when multiple targets are visible and the stdin-less subprocess crashes with EOFError. When port resolves to "OTA" the dashboard forwards its mDNS / DNS cache as --mdns-address-cache / --dns-address-cache so the CLI doesn't redo resolution the dashboard already has (legacy-dashboard parity with build_cache_arguments).

Device.state is a DeviceState: unknown, online, or offline (discovered via mDNS + ping). Device.has_pending_changes: true = config changed since last compile, false = up to date, null = never compiled. Device.update_available: true = device was compiled with a different ESPHome version than the server.

Firmware

Models: FirmwareJob, JobStatus, JobType

Controller: FirmwareController

Command Args Response Description
firmware/compile {configuration} FirmwareJob Queue compile job
firmware/upload {configuration, port?: ""} FirmwareJob Queue upload of existing binary. port defaults to "" (no --device arg — CLI auto-detects). Also accepts "OTA", a serial path (/dev/ttyUSB0, COM3), or an explicit IP / hostname for "install to a specific address" — the address-cache shortcut is bypassed when a target is named directly.
firmware/install {configuration, port?: "OTA" | serial | ip | hostname, force_local?: bool} FirmwareJob Queue compile + upload. port defaults to "OTA" (let the CLI resolve the configured host). Same port semantics as firmware/upload for non-default values. force_local defaults to false; when true the scheduler decision is bypassed and the install runs LOCAL regardless of paired build servers — used by the install dialog's "Build locally instead" override link to opt out of REMOTE routing for a single install.
firmware/clean {configuration} FirmwareJob Queue build clean for one device
firmware/reset_build_env FirmwareJob Queue full reset of .esphome/ build dirs and PIO cache
firmware/compile_bulk {configurations: string[]} [FirmwareJob] Queue multiple compiles
firmware/install_bulk {configurations: string[], port?: "OTA" | serial | ip | hostname} [FirmwareJob] Queue multiple installs. port defaults to "OTA" and is shared across every queued job — callers almost always want that default rather than a single explicit target across the fleet. Same port validation as firmware/install.
firmware/get_jobs {status?, configuration?} [FirmwareJob] List jobs with filters
firmware/get_job {job_id} FirmwareJob Get job with full output
firmware/follow_job {job_id} Streaming Historical output + live stream for one job
firmware/follow_jobs {snapshot?: true} Streaming All jobs' lifecycle + output + progress
firmware/get_binaries {configuration} [{title, file}] List compiled firmware files
firmware/download {configuration, file, compressed?} {filename, data, size} Download binary (base64)
firmware/cancel {job_id} Cancel queued or running job
firmware/clear {status?} Remove finished jobs

Job queue: one job runs at a time, others wait. Jobs persist across server restarts. Output buffered in FirmwareJob.output — clients can reconnect via firmware/follow_job.

One active job per device: queuing a new job for a device cancels any existing queued or running job with the same configuration first. The cancelled job fires JOB_CANCELLED as usual, then the new job fires JOB_QUEUED — frontends following lifecycle events stay consistent with the "show the latest result" UX. firmware/reset_build_env is global (empty configuration) and is exempt from this rule.

History retention: terminal compile/upload/install jobs are kept in a global pool capped at 50, deduplicated to one entry per configuration (newest wins). Terminal clean/reset_build_env jobs sit in a separate pool capped at 5 so they don't crowd device history. Active (queued/running) jobs are exempt from pruning. Each retained job's output is trimmed to the last 2000 lines on terminal transition; a synthetic first line ... [output trimmed: N earlier line(s) elided] indicates how many lines were dropped. firmware/clear still wipes terminal jobs on demand.
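The output trim can be sketched as a single pure function; MAX_LINES and the function name are illustrative:

```python
MAX_LINES = 2000


def trim_output(lines: list[str]) -> list[str]:
    """Keep the last MAX_LINES lines, prefixed with a synthetic elision marker."""
    if len(lines) <= MAX_LINES:
        return lines
    elided = len(lines) - MAX_LINES
    marker = f"... [output trimmed: {elided} earlier line(s) elided]\n"
    return [marker] + lines[-MAX_LINES:]
```

Running it once on terminal transition (rather than on every append) keeps the hot output path allocation-free.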

firmware/reset_build_env: wipes .esphome/build/, .esphome/external_components/, and .esphome/platformio_cache/ so the next compile re-fetches external components and re-downloads PlatformIO toolchains. Returns a FirmwareJob with empty configuration and job_type: "reset_build_env". Streams progress through the same JOB_OUTPUT event as compile jobs. Mid-run cancellation is honoured between the three target directories, not during a single removal.

Cancel semantics:

  • Queued jobs flip to cancelled immediately.
  • Running jobs receive SIGTERM, with SIGKILL escalation after a 3 s grace period. The job's status becomes cancelled (not failed) and JOB_CANCELLED fires.
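The escalation for running jobs can be sketched with the blocking subprocess API (the real runner is asyncio-based, so this is an approximation, not the actual implementation):

```python
import subprocess


def cancel(proc: subprocess.Popen, grace: float = 3.0) -> int:
    """SIGTERM first; SIGKILL if the process outlives the grace period."""
    proc.terminate()  # SIGTERM
    try:
        proc.wait(timeout=grace)
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL escalation
        proc.wait()
    return proc.returncode
```

Whichever signal ends the process, the job's status is recorded as cancelled, not failed.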

Progress: FirmwareJob.progress is an int | null 0–100 latched from the highest percentage seen in [ 17%] Compiling … (PlatformIO) or Writing at 0x… (45 %) (esptool) lines. null means the tooling hasn't emitted a percentage yet — most early compile output is opaque. The value is monotonically non-decreasing within a phase; at known phase seams (REMOTE install's compile → upload boundary) the runner explicitly resets to 0 and fires job_progress{progress: 0} so the next phase's percents aren't silently clamped against the previous phase's peak. Subscribers should render the bar from the latest event rather than asserting non-decreasing progress.

Job events (broadcast to all subscribed clients):

  • job_queued, job_started, job_output, job_progress, job_completed, job_failed, job_cancelled

firmware/follow_jobs stream events (per WebSocket subscription):

  • snapshot — initial replay of every retained job (one event per job, payload is the full FirmwareJob). Includes both active and the trimmed terminal history, so a client gets the complete picture from a single subscription with no extra firmware/get_jobs call. Skipped when snapshot: false.
  • job_queued / job_started / job_completed / job_failed / job_cancelled — full FirmwareJob payload.
  • job_output{job_id, line} (line keeps its \n or \r terminator).
  • job_progress{job_id, progress} (0–100 integer).

The subscription stays open for the connection's lifetime; closing the WebSocket cancels the stream.

Boards

Controller: BoardCatalog

Enums: Platform, Esp32Variant, BoardTag

Command Args Response Description
boards/get_boards {query?, platform?, variant?, tag?, offset?, limit?} PagedBoardsResponse Search/list boards
boards/get_board {board_id} BoardCatalogEntry Get board with pin map

BoardCatalogEntry carries two recommendation lists for the Add Component dialog:

  • featured_components: list[FeaturedComponent] — components recommended for this board, surfaced in the catalog API as featured.<board_id>.<local_id> under category featured. Each entry can override the catalog name/description and pre-fill any subset of the underlying component's config_entries via a fields map keyed by ConfigEntry.key. Three preset modes per field:
    • default: a primitive value the frontend pre-fills; user can change it.
    • locked: {value, locked: true} — frontend disables the input and devices/add_component rejects deviating user values.
    • suggestions: {suggestions: [...]} — frontend renders a picker, user must pick from the list.
  • featured_bundles: list[FeaturedBundle]{id, name, description, component_ids} groups of featured components (e.g. "Status LED" = output.gpio + light.binary). The frontend triggers sequential devices/add_component calls for each component_id when the user adds a bundle.
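The three preset modes reduce to one validation step on the backend. A sketch, assuming the field shapes from the bullets above (the function name is illustrative):

```python
def validate_field(preset, user_value):
    """Resolve a featured-component field preset against user input.

    Raises ValueError (surfaced as INVALID_ARGS) on a violation.
    """
    if isinstance(preset, dict) and preset.get("locked"):
        # locked: user input must not deviate from the pinned value
        if user_value is not None and user_value != preset["value"]:
            raise ValueError("invalid_args: locked field")
        return preset["value"]
    if isinstance(preset, dict) and "suggestions" in preset:
        # suggestions: the user must pick from the list
        if user_value not in preset["suggestions"]:
            raise ValueError("invalid_args: not in suggestions")
        return user_value
    # default: a primitive the frontend pre-fills; the user may override
    return user_value if user_value is not None else preset
```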

Components

Controller: ComponentCatalog

Enums: ComponentCategory, ConfigEntryType

Command Args Response Description
components/get_categories {board_id?} [{id, name, count}] List categories with counts
components/get_components {query?, category?, exclude_category?, platform?, board_id?, offset?, limit?} PagedComponentsResponse Search/list components
components/get_component {component_id, platform?, board_id?} ComponentCatalogEntry Get component with config entries

platform filters to components compatible with the given target platform; components with an empty supported_platforms list are platform-agnostic and always included. board_id is a convenience — the boards catalog resolves it to a platform; platform wins when both are passed. The platform is also used to materialise each entry's platform_defaults into default_value.

category / exclude_category accept either a single category or a list. Use exclude_category for the regular catalog selector to hide entries that belong to the dedicated "Add core configuration" dialog.

Featured components. The board catalog's featured_components are surfaced through this same API under the synthetic category featured and ID prefix featured.<board_id>.<local_id>. They are only returned when category explicitly includes featured and board_id is supplied — the regular catalog listing never mixes them in. get_categories adds a featured entry with the board's recommended-count when board_id is set. A featured ComponentCatalogEntry carries the board overrides baked into its config_entries: default_value reflects the preset, and the new locked: bool and suggestions: list[ConfigPrimitive] | None fields tell the frontend to disable the input or render a picker. devices/add_component recognises featured.* ids — the wire shape doesn't change, but the backend resolves the underlying component, validates user input against the locked/suggestion constraints, and merges presets before delegating to the regular merge logic.

Automations

Controller: AutomationsController

Models: AutomationTrigger, AutomationAction, AutomationCondition, LightEffect, AutomationTree, ActionNode, ConditionNode, ParsedAutomation, AutomationLocation, YamlDiff

The automations API is the structured editor's wire surface. The catalog (triggers / actions / conditions / light effects) ships pre-rendered as definitions/automations.json, emitted at sync time by script/sync_components.py. Parameter schemas come out in the same ConfigEntry[] shape the component form already speaks — the editor reuses one form pipeline.

Parsing and writing live on the backend: the frontend exchanges structured AutomationTree blobs through parse / upsert / delete, and the writer returns a YamlDiff = {fromLine, toLine, replacement} the device editor splices into the YAML pane through the existing optimistic-update path. The backend does not persist the YAML in upsert / delete — the device editor's config-write debounce handles that.

Lambda sentinel. Templatable field values can be either a literal of the field's declared type or {"_lambda": "<C++ source>"}. The writer emits the latter as a ruamel LiteralScalarString (|-style block scalar), the parser inverts.
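On the wire the sentinel is just a one-key dict. A sketch of the wrap/unwrap helpers a client might use (names are illustrative; the real writer additionally emits the source as a ruamel block scalar):

```python
def wrap_lambda(source: str) -> dict:
    """Client -> server direction: mark a value as C++ lambda source."""
    return {"_lambda": source}


def is_lambda(value) -> bool:
    return isinstance(value, dict) and set(value) == {"_lambda"}


def unwrap(value):
    """Server -> client direction: literal passes through, lambda unwraps."""
    return value["_lambda"] if is_lambda(value) else value
```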

Location discriminator. Every parsed automation carries a tagged-union AutomationLocation:

{kind: "script",        id: string}
{kind: "interval",      index: int}
{kind: "component_on",  component_id: string, trigger: string}
{kind: "device_on",     trigger: string}
{kind: "light_effect",  component_id: string, index: int}

upsert / delete consume the same shape so the writer knows the exact YAML range to splice.
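A consumer dispatching on the discriminator might derive a stable key per location, for example to diff parse results across edits. The key format below is illustrative, not a wire format:

```python
def location_key(loc: dict) -> str:
    """Stable identity string for an AutomationLocation tagged union."""
    kind = loc["kind"]
    if kind == "script":
        return f"script:{loc['id']}"
    if kind == "interval":
        return f"interval:{loc['index']}"
    if kind == "component_on":
        return f"{loc['component_id']}:{loc['trigger']}"
    if kind == "device_on":
        return f"esphome:{loc['trigger']}"
    if kind == "light_effect":
        return f"{loc['component_id']}:effects[{loc['index']}]"
    raise ValueError(f"unknown location kind: {kind}")
```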

Command Args Response Description
automations/get_triggers {platform?, board_id?} [AutomationTrigger] Full trigger catalog. platform / board_id are reserved for future platform gating and ignored today.
automations/get_actions {platform?, board_id?} [AutomationAction] Full action catalog (includes core control-flow: if, while, repeat, wait_until, delay, lambda).
automations/get_conditions {platform?, board_id?} [AutomationCondition] Full condition catalog (includes core combinators: and, or, all, any, not, xor, for, lambda).
automations/get_light_effects {platform?, board_id?} [LightEffect] Full light-effects catalog.
automations/get_available {configuration} {triggers, actions, conditions, scripts, devices} Scoped catalog for a single device. triggers filtered to component types present in the YAML + device-level; actions / conditions returned in full (id-pickers filter on the frontend). scripts lists declared script: ids with their parameters: map. devices lists every configured component instance with its id / name for id-picker dropdowns.
automations/parse {configuration} [ParsedAutomation] Walk the device YAML and return every recognised automation (top-level script: / interval:, device-level esphome.on_*, inline component on_*:, light effects: entries). Unknown action / condition ids raise INVALID_ARGS rather than best-effort rebuilding.
automations/upsert {configuration, automation, location} {yaml_diff: YamlDiff} Insert or replace one automation at location. Returns the splice the frontend applies in place.
automations/delete {configuration, location} {yaml_diff: YamlDiff} Remove the automation at location.

Config

Controller: ConfigController

Models: UserPreferences

Command Args Response Description
config/version {server_version, esphome_version} Get versions
config/serial_ports [{port, desc}] List serial ports
config/get_preferences UserPreferences Get user preferences
config/set_preferences {theme?, dashboard_view?, ...} UserPreferences Update preferences (partial)
config/get_secrets [string] List secret key names

Onboarding

Controller: OnboardingController

Models: OnboardingState, OnboardingStep, OnboardingStepId, OnboardingStepStatus

First-run setup tracking. Each step's status is computed from live data on every get_state call (never persisted), so the frontend's "needs attention" indicators clear the moment the user fixes the underlying state — even via a manual secrets.yaml edit. completed_version is the last onboarding-flow version the user has explicitly acknowledged; bumping ONBOARDING_VERSION (server-side constant) re-prompts users at lower versions when new steps are added.

Command Args Response Description
onboarding/get_state OnboardingState Snapshot of current vs acknowledged version + per-step pending / done status. Currently one step (wifi_credentials) — pending when secrets.yaml's wifi_ssid is missing, empty, whitespace-only, or matches the bootstrap placeholder.
onboarding/set_wifi_credentials {ssid, password?} OnboardingState Update wifi_ssid / wifi_password in secrets.yaml via a line-based rewrite that preserves standalone and inline trailing comments and other secrets. Validates against ESPHome's own length limits (32 char SSID, 64 char password) plus a control-character check; empty / whitespace-only SSID, oversize values, and control characters (other than \t) raise INVALID_ARGS. password is optional and defaults to the empty string for open networks.
onboarding/mark_acknowledged OnboardingState Record that the user has finished the current onboarding flow (sets onboarding_completed_version to ONBOARDING_VERSION). Idempotent and monotonic — never downgrades a higher stored value. Use this on save AND on explicit decline ("I don't use Wi-Fi") so the wizard stops re-popping; the per-step pending status stays accurate so the dedicated Set up Wi-Fi… kebab entry still surfaces the re-entry path until the underlying data is set.
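The validation for set_wifi_credentials can be sketched directly from the rules above (32-char SSID, 64-char password, no control characters other than \t, no empty or whitespace-only SSID); the function name is illustrative:

```python
MAX_SSID = 32  # ESPHome SSID length limit
MAX_PSK = 64   # ESPHome password length limit


def validate_wifi(ssid: str, password: str = "") -> None:
    """Raise ValueError (surfaced as INVALID_ARGS) on any violation."""
    if not ssid.strip():
        raise ValueError("empty or whitespace-only SSID")
    if len(ssid) > MAX_SSID or len(password) > MAX_PSK:
        raise ValueError("value exceeds ESPHome length limit")
    for value in (ssid, password):
        if any(ord(ch) < 0x20 and ch != "\t" for ch in value):
            raise ValueError("control characters not allowed")
```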

Labels

Models: Label

Controller: LabelsController

User-defined chips (name + optional #rrggbb color) that can be assigned to devices via devices/set_labels. The catalog is global; assignments live on each device's Device.labels field as a list of label ids.

Command Args Response Description
labels/list [Label] Return every label in the global catalog
labels/create {name, color?} Label Create a label. name 1-50 chars, unique case-insensitive. color is #rrggbb (lowercased on save) or null. Server generates id. Fires label_created.
labels/update {label_id, name?, color?} Label Rename and / or recolor. Pass color: null to clear; omit color to leave it unchanged. Fires label_updated.
labels/delete {label_id} {deleted: true} Delete a label and cascade — every device entry with this id has it removed in the same transaction, then each affected device fires device_updated; finally label_deleted fires.

Renaming or recoloring a label leaves device assignments untouched — devices reference labels by id, not by name. The frontend is expected to subscribe to subscribe_events, fetch the catalog once via labels/list, then resolve ids → name + color at render time.
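The delete cascade described above can be sketched as one function over in-memory state; the data shapes are illustrative:

```python
def delete_label(label_id: str, catalog: dict, devices: dict) -> list[str]:
    """Remove a label and cascade to device assignments.

    Returns the configurations whose assignments changed (each of which
    fires device_updated); the catalog entry is dropped last
    (label_deleted fires after the per-device events).
    """
    if label_id not in catalog:
        raise KeyError("not_found")
    touched = []
    for configuration, label_ids in devices.items():
        if label_id in label_ids:
            label_ids.remove(label_id)
            touched.append(configuration)
    del catalog[label_id]
    return touched
```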

Remote Build

Controller: RemoteBuildController

Models: RemoteBuildSettingsView, RemoteBuildPeer, PeerSummary, IdentityView

Receiver-side surface for the remote-build offload feature (issue #106). Discovers peer dashboards via mDNS (_esphomebuilder._tcp.local.) and pairs with offloaders over the peer-link Noise WS (/remote-build/peer-link, default port 6055). Cross-subnet pair flows skip the discovery surface entirely — the pair dialog accepts a typed hostname / port directly and request_pair either succeeds or fails; there's no intermediate "save this host so I can pair it later" step. Receiver-side state lives across two files: the master enabled toggle in .device-builder.json under _remote_build, and APPROVED StoredPeer rows in their own sibling file <config_dir>/.receiver_peers.json (per-file helpers.storage.Store with debounced writes — atomic per-domain, no lock contention against unrelated metadata writers). Offloader-side pairings follow the same shape at <config_dir>/.offloader_pairings.json.

Surface map: which commands run on which side

A single device-builder process can be a receiver (accepts Noise WS connections from offloaders, lets a human admin pair them) and an offloader (initiates Noise WS connections to receivers it has pinned) at the same time. Each WS command targets one role. The frontend surfaces them on different Settings screens — "Build server" (receiver role) vs "Send builds" (offloader role). All commands run over the dashboard's main /ws endpoint and inherit whatever auth that endpoint enforces (today: none — the dashboard /ws trusts any local connection); none of these commands run over the peer-link Noise WS, which carries only intent=... frames between dashboards, never WS commands.

Command Side Notes
get_settings / set_settings receiver Master toggle for whether this dashboard accepts incoming offloader connections. Off-default; toggling requires a restart.
approve_peer / remove_peer receiver Admin manages incoming pairings. The peer list itself is delivered via the subscribe_events initial-state push and mutated locally on the frontend from remote_build_pair_request_received / remote_build_pair_status_changed events — no separate list_peers command.
(no command) both mDNS-discovered peer dashboards reach the frontend the same way: subscribe_events initial-state push under hosts, plus remote_build_host_added / remote_build_host_removed events fired from the receiver controller's mDNS browser callbacks. Cross-subnet pair flows bypass discovery — the pair dialog accepts a typed hostname / port and request_pair either succeeds or fails.
set_pairing_window receiver Frontend-driven; the Pairing requests screen calls open=true on mount + extend ticks, open=false on unmount.
get_identity / rotate_identity receiver Surfaces / rotates the dashboard's identity for OOB pin verification.
preview_pair offloader Open a brief Noise WS to capture a receiver's pin for OOB display.
request_pair offloader Send intent=pair_request. Both PENDING and APPROVED rows live in the controller's unified _pairings dict; the per-file Store debounce-saves APPROVED rows to <config_dir>/.offloader_pairings.json (PENDING is filtered out at serialise time). An APPROVED result spawns no listener; a PENDING result spawns a _pair_status_listener task that flips the row's status when the receiver reports its decision.
unpair offloader Drop the row from the unified _pairings dict and schedule the debounced save. Cancels the row's listener task if any. Idempotent. Auto-clears any pending pin_mismatch / peer_revoked alert for the same (hostname, port).
Command Args Response Description
remote_build/get_settings RemoteBuildSettingsView Read the receiver-side settings (enabled, peers).
remote_build/set_settings {enabled} RemoteBuildSettingsView Persist the master switch. Strict-bool; rejects truthy strings.
remote_build/approve_peer {dashboard_id} RemoteBuildSettingsView Promote a PENDING row to APPROVED. Mutates the RAM-canonical _approved_peers dict and schedules a debounced write to <config_dir>/.receiver_peers.json via the per-file Store. Fires remote_build_pair_status_changed.
remote_build/remove_peer {dashboard_id} RemoteBuildSettingsView Drop a peer row. PENDING entries live in the controller's _pending_peers dict; APPROVED entries live in _approved_peers and are debounce-saved to .receiver_peers.json. Fires remote_build_pair_status_changed with status="removed" for either case (the event wakes any in-flight pair_status long-poll, which is needed for the PENDING case to drop the offloader's local state). not_found when neither dict has a matching row.
remote_build/set_pairing_window {open} Open / close the pairing window for the calling WS client. The window gates whether intent="pair_request" Noise frames are accepted at all; refcounted across clients with auto-close timeout. Fires remote_build_pairing_window_changed on transitions.
remote_build/get_identity IdentityView Read the receiver's stable identity: {dashboard_id, pin_sha256, server_version, esphome_version, listener_bound}. pin_sha256 is the lowercase-hex SHA-256 of the X25519 peer-link public key — the same fingerprint advertised in mDNS TXT and the value peers OOB-verify during pairing. listener_bound reports whether the peer-link Noise WS listener is currently serving traffic. Idempotent (no rotation triggered).
remote_build/rotate_identity IdentityView Mint a fresh X25519 peer-link keypair, replacing whatever's on disk. The dashboard_id is preserved across rotations so the receiver-side audit trail stays readable. Every paired peer that pinned the old pin_sha256 will see a fingerprint mismatch on the next Noise handshake and need to re-pair. The peer-link listener is torn down + rebuilt as a side effect so the new key is in service immediately.
remote_build/preview_pair {hostname, port} {pin_sha256} Open a brief Noise XX WS to a receiver, capture the static pubkey, return the lowercase-hex SHA-256 for OOB display. No state mutated on either side. unavailable on transport / handshake failure.
remote_build/request_pair {hostname, port, pin_sha256, receiver_label, offloader_label} PairingSummary Re-handshake (defends against TOCTOU between preview and confirm), send intent="pair_request" carrying {label: offloader_label, dashboard_id} in encrypted msg3. The unified _pairings dict holds both PENDING and APPROVED rows; APPROVED rows debounce-save to <config_dir>/.offloader_pairings.json via the per-file Store, and PENDING rows are filtered out at serialise time so a malicious LAN scanner can't bloat the file. PENDING result spawns a pair-status listener task that flips the row's status in place + schedules a save when the receiver reports the eventual flip; APPROVED result short-circuits the inbox dance. PENDING rows don't survive a controller restart — any in-flight pair attempt has to be re-issued. precondition_failed on pin mismatch; no_pairing_window when the receiver's window is closed; unavailable on transport failure; internal_error on an unexpected receiver intent_response.
remote_build/unpair {pin_sha256} {removed: bool} Pop the row from the unified _pairings dict (keyed on the pin) and schedule the debounced save. Idempotent — removed=false when no row matched. Cancels the row's pair-status listener task and any long-lived peer-link client. Auto-clears any pending offloader alert (pin_mismatch / peer_revoked) for the same pin. The receiver-side StoredPeer is not notified; that's the receiver admin's concern (a future intent="peer_link" from this offloader will be rejected because the local row is gone).
remote_build/submit_job {pin_sha256, configuration, target} {job_id, accepted, reason?} Offloader-side: bundle configuration and dispatch a build to the receiver behind pin_sha256. Validates path-traversal via rel_path, looks up the live PeerLinkClient, spawns esphome bundle <yaml> -o <tmp.tar.gz> (subprocess, 60s timeout — see helpers.config_bundle.build_yaml_bundle), streams the gzipped tarball as submit_job_chunk frames over the open peer-link, and awaits the receiver's submit_job_ack. target is one of compile / upload. Live job lifecycle + output flow asynchronously through offloader_job_state_changed / offloader_job_output events on the global subscribe_events stream (no separate subscription channel). invalid_args on bad input or non-zero bundle exit (CLI stdout inlined); not_found on missing pairing / YAML; precondition_failed on PENDING / disconnected peer-link; unavailable on ack timeout / session loss mid-flow.
remote_build/cancel_job {pin_sha256, job_id} {sent} Offloader-side: cooperative cancel for a previously-submitted remote job. job_id is the offloader-local id returned by remote_build/submit_job. Fire-and-forget — sends a cancel_job frame over the open peer-link and returns {sent: true} if the frame made it to the wire. The receiver resolves the offloader-local id back to its FirmwareJob via the JobFanout correlation cache and routes through FirmwareController.cancel, same primitive as a local operator-driven cancel. The next offloader_job_state_changed with status="cancelled" is the confirmation — no separate ack frame. invalid_args on bad pin / empty job_id; not_found on missing pairing; precondition_failed on PENDING / disconnected peer-link.
remote_build/edit_pairing_endpoint {pin_sha256, hostname, port} PairingSummary Offloader-side: user-driven manual rebind of an existing APPROVED pairing onto new (hostname, port) coords. Fallback for the cross-subnet / no-mDNS cases the mDNS auto-rebind path can't catch. Same trust model as the auto-rebind: a one-shot peer_link_preview_pair probe verifies the new endpoint is reachable AND answers with the same pin the row was paired against. Identity-mismatch refuses the edit and leaves the stored pairing untouched (the user's existing trust is keyed on the original pin; substituting a fresh pubkey is what the re-auth wizard exists to gate). Match path mutates StoredPairing.receiver_hostname / .receiver_port in place, schedules the debounced save, cancels + respawns the PeerLinkClient against the new coords, and fires offloader_pair_endpoint_rebound. Same probe + commit primitives the auto-rebind path uses. invalid_args on bad pin / hostname / port; not_found on missing pairing or pairing replaced mid-probe (concurrent unpair / re-pair); precondition_failed on non-APPROVED status, no-op edit (new coords match current), missing offloader identity, or pin mismatch at the new endpoint; unavailable on probe transport / handshake failure.
remote_build/download_artifacts {pin_sha256, job_id} {job_id, idedata, images, total_bytes} Offloader-side: fetch the build's flash-artifact set for a previously-completed remote job. Sends download_artifacts{job_id} over the open peer-link, parks on the assembled-bytes future, then unpacks the receiver's gzipped tarball off the event loop. images is [{name, offset, size, data_b64}]: firmware.bin first (offset taken from the artifacts_start header — receiver-resolved from StorageJSON.target_platform so the offloader doesn't duplicate platform-detection logic), then idedata.extra.flash_images in declared order. idedata is the parsed manifest with extra.flash_images[].path rewritten from absolute receiver-side paths to bare basenames matching the entries in images. total_bytes is the sum of every image's size (frontend progress UI). The downstream install paths (Web Serial / network OTA / download-to-disk) consume this shape directly. invalid_args on bad pin / empty job_id / malformed tarball from the receiver; not_found on missing pairing or receiver-reported unknown_job / build_dir_missing; precondition_failed on PENDING / disconnected peer-link or receiver-reported job_not_completed / duplicate_download; unavailable on session loss mid-download or receiver-reported pack_failed.
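
The unpack step described above can be sketched in Python. This is illustrative only: build_images_payload and its files argument (basename to raw bytes, as pulled from the tarball) are hypothetical names, but the firmware-first ordering, the path-to-basename rewrite, and the total_bytes sum follow the shape documented here.

```python
import base64
import json
import posixpath

def build_images_payload(idedata_raw: str, files: dict[str, bytes],
                         firmware_offset: str) -> dict:
    """Hypothetical sketch of the offloader's unpack step: rewrite the
    receiver-absolute flash-image paths in idedata to bare basenames and
    emit the images list with firmware.bin first."""
    idedata = json.loads(idedata_raw)
    images = [{
        "name": "firmware.bin",
        "offset": firmware_offset,  # from the artifacts_start header
        "size": len(files["firmware.bin"]),
        "data_b64": base64.b64encode(files["firmware.bin"]).decode(),
    }]
    for entry in idedata.get("extra", {}).get("flash_images", []):
        name = posixpath.basename(entry["path"])
        entry["path"] = name  # absolute receiver-side path -> bare basename
        data = files[name]
        images.append({
            "name": name,
            "offset": entry["offset"],
            "size": len(data),
            "data_b64": base64.b64encode(data).decode(),
        })
    return {
        "idedata": idedata,
        "images": images,
        "total_bytes": sum(img["size"] for img in images),
    }
```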

Peer-link Noise WS receiver site

A separate aiohttp web.Application binds on the dashboard's --remote-build-port (default 6055) and serves /remote-build/peer-link — a Noise_XX_25519_ChaChaPoly_SHA256 WebSocket endpoint. Default-off; binds only when RemoteBuildSettings.enabled is true. Toggling enabled requires a dashboard restart for the listener to follow; set_settings persists the new value but doesn't live-bind / unbind.

The Noise XX handshake exchanges static X25519 pubkeys mutually; the offloader checks the receiver's pubkey hash against its stored pin (out-of-band verified via intent="preview") and the receiver looks up the offloader's pin against its peers list. Post-handshake, a single transport frame carries {intent_response: ...}. One-shot intents (preview / pair_request / pair_status) close the WS after that frame; intent="peer_link" on a successful auth keeps the WS open for application messages.

Peer-link application messages ride over the established Noise session post-handshake, one JSON-encoded transport frame per WS message. The complete set declared by AppMessageType in controllers/remote_build/peer_link.py:

| type | Direction | Payload | Description |
| --- | --- | --- | --- |
| `ping` | both | `{nonce}` | Encrypted heartbeat probe. Each side fires every HEARTBEAT_INTERVAL_SECONDS and expects the matching pong within HEARTBEAT_DEAD_AFTER_SECONDS. Three consecutive misses close the session with reason heartbeat_timeout. |
| `pong` | both | `{nonce}` | Response to a ping. The receiver bumps its last_pong_at shared-state field; the heartbeat task watches that timestamp to decide when to call _on_dead. |
| `terminate` | both | `{reason}` | Structured close frame. reason is one of superseded / server_shutting_down / heartbeat_timeout / malformed_frame / client_stopped. Sent before the WS close so the peer logs the cause; the matching close-reason flows out via RECEIVER_PEER_LINK_SESSION_CLOSED / OFFLOADER_PEER_LINK_CLOSED. |
| `queue_status` | receiver → offloader | `{idle, running, queue_depth}` | Receiver pushes a fresh firmware-queue snapshot whenever the queue transitions (JOB_QUEUED / JOB_STARTED / terminal events). Fan-out covers every paired offloader's open session. The offloader's PeerLinkClient receive loop validates the wire shape, then fires OFFLOADER_QUEUE_STATUS_CHANGED. |
| `submit_job` | offloader → receiver | `{job_id, configuration_filename, target, total_bundle_bytes, num_chunks, bundle_sha256}` | Header announcing a build before the bundle bytes start. target is compile / upload. The receiver pre-sizes its BundleAssembler against total_bundle_bytes + num_chunks and rejects a mismatched stream cleanly. bundle_sha256 is the lowercase-hex digest of the full bundle bytes — cheap end-to-end integrity check on top of per-frame Noise AEAD. |
| `submit_job_chunk` | offloader → receiver | `{job_id, chunk_index, data_b64, is_last}` | One slice of the gzipped tarball, base64-encoded inside the JSON envelope so frames stay JSON-shaped (the ~33 % base64 overhead is deliberate — it keeps the dispatch seam uniform). Chunks must arrive in monotonic order; the assembler rejects out-of-order, duplicate, or post-completion frames with a structured error that triggers terminate{reason: malformed_frame}. |
| `submit_job_ack` | receiver → offloader | `{job_id, accepted, reason?}` | Receiver's response after the bundle stream completes and the SHA-256 matches. accepted=False carries a structured reason (bundle_hash_mismatch, manifest_unsupported, queue_full, etc.); reason is omitted on accept. The offloader's submit handler waits _SUBMIT_JOB_ACK_TIMEOUT_SECONDS (60 s); a missing ack raises SubmitJobTimeoutError (maps to WS unavailable). No mid-session retry — the receiver may already have queued the job. |
| `job_state_changed` | receiver → offloader | `{job_id, status, error_message}` | Receiver-pushed lifecycle transitions: queued / running / completed / failed / cancelled. Fans out from the firmware controller's existing JOB_* bus events via JobFanout on the receiver, filtered to jobs whose remote_peer matches an active peer-link session. The offloader fires OFFLOADER_JOB_STATE_CHANGED and maintains the _offloader_remote_jobs RAM cache (terminal entries drop on transition). |
| `job_output` | receiver → offloader | `{job_id, stream, line}` | High-rate during an active build (one frame per line of compiler / linker output). stream is stdout / stderr; line keeps its trailing terminator (\n / \r / \r\n) — carriage-return-only chunks are esptool / PlatformIO progress overwrites and stripping them would lose the renderer's append-vs-overwrite signal. The offloader fires OFFLOADER_JOB_OUTPUT per frame; no cache (live stream only). |
| `cancel_job` | offloader → receiver | `{job_id}` | Cooperative cancel for a previously-submitted job. job_id is the offloader-supplied id from the original submit_job header — i.e. the value the offloader generated and the receiver stashed as FirmwareJob.remote_job_id. Receiver resolves the offloader-side id back to its receiver-local FirmwareJob via JobFanout.resolve_firmware_job_id (reverse scan over the _remote_jobs cache) and routes through FirmwareController.cancel — same primitive as a local operator-driven cancel. No ack frame: the resulting JOB_CANCELLED bus event fans out a job_state_changed{status: cancelled} which the offloader already plumbs through OFFLOADER_JOB_STATE_CHANGED. Silent drops at the receiver (debug-logged) on: malformed shape (off-contract peer), unknown correlation (race with a terminal transition that already evicted the cache entry), CommandError from the firmware queue (already-terminal job — the cancel intent has already been satisfied by the natural exit). |
| `download_artifacts` | offloader → receiver | `{job_id}` | Request the build's flash-artifact set for a previously-completed remote job. job_id is the offloader-supplied id from the original submit_job header (same id-space as cancel_job). Receiver resolves the id back to its FirmwareJob (linear scan over firmware._jobs; cardinality bounded by the queue's retention), refuses the request with a structured reason if the job is unknown / not COMPLETED / already has a download in flight on the same session, otherwise reads idedata.json + every flash image from the build dir and streams the bytes back as artifacts_start → artifacts_chunk → artifacts_end. Single-flight per session — a second download_artifacts while the first is still streaming gets duplicate_download. |
| `artifacts_start` | receiver → offloader | `{job_id, total_bytes, num_chunks, artifacts_sha256, firmware_offset}` | Header announcing the gzipped-tar stream that follows. The offloader pre-sizes its BundleAssembler against total_bytes + num_chunks (capped at FIRMWARE_MAX_TOTAL_BYTES = 16 MiB), validates each subsequent chunk against these bounds, and recomputes artifacts_sha256 after assembly to catch chunk-reordering bugs in our own framing (per-frame Noise AEAD already covers wire confidentiality + authentication). firmware_offset is the lowercase-hex flash offset for the firmware.bin partition (e.g. "0x10000" on ESP32, "0x0" on ESP8266 / libretiny / RP2040), resolved on the receiver from StorageJSON.target_platform so the offloader doesn't duplicate platform-detection logic. The remaining flash-image offsets ride inside idedata.json in the tarball. |
| `artifacts_chunk` | receiver → offloader | `{job_id, chunk_index, data_b64, is_last}` | One slice of the gzipped tarball, base64-encoded inside the JSON envelope (same shape as submit_job_chunk — keeps the dispatch seam uniform; BundleAssembler is reused with max_total_bytes=FIRMWARE_MAX_TOTAL_BYTES). Chunks must arrive in monotonic order; the assembler rejects out-of-order / duplicate / post-completion frames with a structured DownloadArtifactsError that resolves the offloader's parked future. |
| `artifacts_end` | receiver → offloader | `{job_id, accepted, reason?}` | Stream terminator. accepted=true: assembler finalises (validates count + SHA-256), the parked download_artifacts() future resolves to DownloadArtifactsResult(tarball, firmware_offset). accepted=false: carries a structured reason (unknown_job, job_not_completed, duplicate_download, build_dir_missing, pack_failed); the offloader's WS layer maps these to not_found / precondition_failed / unavailable CommandError codes. (Protocol violations like a malformed download_artifacts frame skip the soft-reject path entirely and terminate the session with MALFORMED_FRAME.) Tarball layout: idedata.json first (the upstream-canonical flash-image manifest, with extra.flash_images[].path carrying receiver-absolute paths the offloader rewrites to basenames at unpack time), then firmware.bin, then every extra.flash_images entry flattened to its basename. |
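
Both bundle directions reuse the same reassembly rules: pre-size against the header, enforce monotonic chunk order, and verify the end-to-end SHA-256 on top of the per-frame AEAD. A minimal sketch under a simplified API; the real BundleAssembler's interface may differ.

```python
import base64
import hashlib

class BundleAssemblyError(Exception):
    """Stands in for the structured error that triggers
    terminate{reason: malformed_frame} in the real protocol."""

class BundleAssembler:
    """Sketch of the chunk-reassembly rules described above."""
    def __init__(self, total_bytes: int, num_chunks: int, sha256: str,
                 max_total_bytes: int = 16 * 1024 * 1024):
        if total_bytes > max_total_bytes:
            raise BundleAssemblyError("bundle exceeds size cap")
        self._expected = (total_bytes, num_chunks, sha256)
        self._parts: list[bytes] = []
        self._done = False

    def add_chunk(self, chunk_index: int, data_b64: str, is_last: bool) -> None:
        # Chunks must arrive in monotonic order; duplicates and
        # post-completion frames are protocol violations.
        if self._done or chunk_index != len(self._parts):
            raise BundleAssemblyError("out-of-order, duplicate or late chunk")
        self._parts.append(base64.b64decode(data_b64))
        if is_last:
            self._done = True

    def finalize(self) -> bytes:
        total_bytes, num_chunks, sha256 = self._expected
        blob = b"".join(self._parts)
        # Count, size, and digest must all match the announcing header.
        if (not self._done or len(self._parts) != num_chunks
                or len(blob) != total_bytes
                or hashlib.sha256(blob).hexdigest() != sha256):
            raise BundleAssemblyError("bundle does not match its header")
        return blob
```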

All application messages flow over the same Noise_XX_25519_ChaChaPoly_SHA256 transport — the Noise cipher state is stateful per direction, so a per-channel send_lock (in PeerLinkChannel.send_frame) serialises concurrent encrypts. The dispatch loop on each side branches on the type discriminator; unknown types are debug-logged and dropped (forward-compatibility).
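
Why the lock matters: each direction's cipher state advances with every frame, so encrypt order and wire order must never diverge. A sketch with a plain counter standing in for the Noise cipher state, assuming nothing about the real PeerLinkChannel beyond what's described above:

```python
import asyncio

class PeerLinkChannelSketch:
    """Illustrative only: serialises encrypt+send so the stateful cipher
    nonce and the wire order can never disagree (the real
    PeerLinkChannel.send_frame does the equivalent with the Noise cipher)."""
    def __init__(self, send):
        self._send = send            # async fn taking (nonce, payload)
        self._nonce = 0              # stand-in for the Noise cipher counter
        self._send_lock = asyncio.Lock()

    async def send_frame(self, payload: bytes) -> None:
        async with self._send_lock:
            nonce = self._nonce      # "encrypt" under the current nonce...
            self._nonce += 1
            await self._send(nonce, payload)  # ...and send before releasing

async def main():
    wire = []
    async def send(nonce, payload):
        await asyncio.sleep(0)       # yield point where interleaving could occur
        wire.append(nonce)
    ch = PeerLinkChannelSketch(send)
    await asyncio.gather(*(ch.send_frame(b"x") for _ in range(20)))
    assert wire == sorted(wire)      # nonces reach the wire strictly in order
```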

The receiver advertises the peer-link listener over mDNS via these TXT properties:

| TXT property | Value | When present |
| --- | --- | --- |
| `server_version` | `"1.2.3"` | always |
| `esphome_version` | `"2026.5.0"` | always |
| `pin_sha256` | lowercase-hex SHA-256 of the X25519 peer-link pubkey | when the peer-link listener is bound |
| `remote_build_port` | stringified int (e.g. `"6055"`) | when the peer-link listener is bound (same condition as pin_sha256) |

Same-subnet peers read remote_build_port from TXT so a --remote-build-port override is auto-discovered. Cross-subnet peers type the port into the pair dialog (it's an arg on request_pair).

Utility

| Command | Args | Response | Description |
| --- | --- | --- | --- |
| `ping` | | `{pong: true}` | Health check |
| `subscribe_events` | | Streaming | Subscribe to real-time events |

subscribe_events initial state:

Right after a client subscribes (and before any live events arrive), the server pushes one initial_state event carrying a snapshot of state accumulated server-side via background activity (mDNS browser, completed pair flows, etc.) so the frontend can paint the first frame without follow-up reads. Shape: {devices?: [...], importable?: [...], pairings?: [PairingSummary], peers?: [PeerSummary], hosts?: [RemoteBuildPeer], offloader_alerts?: [OffloaderAlertSnapshotEntry], peer_queue_status?: [PeerQueueStatusSnapshotEntry], remote_jobs?: [OffloaderRemoteJobSnapshotEntry]}. Each field is present only when the corresponding controller is up. All are sync reads: no executor hop, no disk I/O.

  • pairings carries both PENDING and APPROVED offloader-side rows from the _pairings dict.
  • peers carries both PENDING (_pending_peers) and APPROVED (_approved_peers) receiver-side rows.
  • hosts carries the receiver controller's mDNS-discovered peer dashboards (self._peers, RAM-only — never persisted).
  • offloader_alerts carries the offloader-side pair alerts dict (_offloader_alerts, RAM-only) so a tab subscribing AFTER a pin_mismatch / peer_revoked event fired still renders the alert it would have missed on the live stream. The alert only clears via re-pair or unpair, never by an operator-driven dismiss, because the underlying state (a broken pairing) doesn't fix itself.
  • peer_queue_status carries the most recent queue_status snapshot per paired receiver so a late tab paints the per-peer queue depth without waiting for the next event.
  • remote_jobs carries every offloader-submitted job that's still in flight (terminal entries drop on the matching job_state_changed event) so the UI can render running builds on page load.

The PeerSummary projection persists peer_ip (the source IP observed at pair_request time) on StoredPeer, so a snapshot-loaded inbox row carries the same IP the live remote_build_pair_request_received event would carry; that's what the receiver Settings UI renders alongside the pin as a clone-risk sanity check. It's an empty string for legacy on-disk rows from receivers that pre-date the field. Live updates that arrive after the initial state mutate against this seed via the events below.

subscribe_events events:

  • device_added, device_removed, device_updated, device_state_changed
  • importable_device_added, importable_device_removed
  • label_created, label_updated, label_deleted
  • job_queued, job_started, job_output, job_completed, job_failed
  • remote_build_pair_request_received{dashboard_id, pin_sha256, label, peer_ip, paired_at} — fires when an offloader's intent="pair_request" Noise frame lands a new PENDING row inside the receiver's open pairing window. The Settings UI surfaces the row in the inbox with the offending dashboard_id, the peer-link pin_sha256 (X25519 pubkey hash), the offloader's claimed label, peer_ip for sanity-checking, and paired_at (receiver-clock unix timestamp at row creation; matches the value the subscribe_events snapshot would show, so a subscriber building the inbox row from the event stream can sort by it directly).
  • remote_build_pair_status_changed{dashboard_id, status} (status: "approved" | "removed") — fires from three paths: (a) approve_peer promoting a _pending_peers entry to _approved_peers (status="approved"), (b) remove_peer dropping either a _pending_peers entry or an _approved_peers entry (status="removed"); APPROVED removal also schedules a debounced write to .receiver_peers.json, (c) pairing-window-close clearing _pending_peers (status="removed" per cleared entry). The "removed" event is what wakes any in-flight intent="pair_status" long-poll on a paired offloader so its listener task drops the offloader's local state. Subscribers refresh the paired-peers list without polling.
  • remote_build_pairing_window_changed{open, expires_in_seconds} — fires when the pairing window opens / closes (refcount transitions, auto-close timeout, idle ageing). expires_in_seconds is null when open is false; otherwise it's the float remaining lifetime against the latest user-activity extend. Subscribers render the "Pairing window: X seconds remaining" countdown from this value (and tick locally between events).
  • remote_build_identity_rotated{dashboard_id, pin_sha256} — fires when the operator triggers remote_build/rotate_identity. Subscribers refresh their cached pin without polling get_identity. Only fires when the on-disk rotation succeeds; the listener rebuild may still fail-soft, in which case the rotator's IdentityView response carries listener_bound=false while this event reflects only that the persistent key on disk changed.
  • remote_build_host_added{name, hostname, port, source, addresses, server_version, esphome_version} — fires whenever the receiver controller's mDNS browse callback or the async resolve-success path upserts a row in self._peers. Upsert semantics — the frontend keys its discovered-hosts list on name (the leftmost service-instance label) and replaces an existing row with the same key. The subscribe_events initial-state push carries the full current set under hosts, so a fresh tab paints without a round-trip; this event is the live-update channel that keeps the list current as dashboards come online (or refresh their TXT mid-session).
  • remote_build_host_removed{name} — fires when zeroconf delivers a Removed callback (TTL expiry without renewal, or an explicit goodbye). name matches the corresponding remote_build_host_added event's name field, so subscribers drop the row by key.
  • offloader_pair_status_changed{receiver_hostname, receiver_port, status: "approved" | "removed"} — offloader-side counterpart to remote_build_pair_status_changed. Fired by the per-row pair-status listener task (_await_pair_status_flip → _apply_pair_status_result → _fire_offloader_pair_status_changed) when its intent="pair_status" round-trip resolves: APPROVED + matching pin → status="approved"; APPROVED + drifted pin → status="removed" (treat receiver-side identity rotation as peer-revoked); REJECTED → status="removed". Also fired by remote_build/unpair when the user removes a row. Keys on (hostname, port) because the offloader's StoredPairing keys on the receiver coordinates the user dialled, not on a receiver-side identifier the offloader doesn't track. Delivered to clients via the existing global subscribe_events stream — no separate subscription channel.
  • offloader_pair_pin_mismatch{receiver_hostname, receiver_port, receiver_label, expected_pin, observed_pin} — fires alongside offloader_pair_status_changed status="removed" when the pair-status listener observes APPROVED + drifted pin (the receiver's static X25519 pubkey hash differs from StoredPairing.pin_sha256 recorded at pair time). Carries the diagnostic detail the status-changed event doesn't, plus the offloader-side receiver_label so the alert can name the row even after the pairings list has dropped it. Frontend's 4b-4 alert plumbing reshape uses the distinct event to surface a "re-pair to confirm the new identity" CTA, separate from the peer-revocation case.
  • offloader_pair_peer_revoked{receiver_hostname, receiver_port, receiver_label} — fires alongside offloader_pair_status_changed status="removed" when the pair-status listener gets IntentResponse.REJECTED (admin clicked Reject, pairing window closed, offloader's identity rotated, or row never existed on the receiver). Frontend uses this for the "the receiver removed us; reach out if this was a mistake" alert distinct from a pin-mismatch alert (different operator response — pin-mismatch can be re-paired right away, peer-revoked needs receiver-side admin coordination).
  • offloader_pair_alert_dismissed{receiver_hostname, receiver_port} — fires when an entry leaves the controller's RAM-only _offloader_alerts dict via one of the two resolution paths: a successful request_pair against the same (hostname, port) (re-pair fixed the broken state), or unpair removed the row outright. There is no operator-driven dismiss surface; clicking "OK got it" without acting would just hide a broken pairing the next peer-link session would still fail against, so the only ways out are re-pair or unpair. The event lets other tabs / clients on the global subscribe_events stream sync their local alerts list without re-fetching the snapshot.
  • offloader_queue_status_changed{receiver_hostname, receiver_port, pin_sha256, idle, running, queue_depth} — offloader-side cache update: a paired receiver pushed a fresh queue_status snapshot over the peer-link session. Fired from the offloader's PeerLinkClient receive loop after parsing the wire frame. The remote-build controller listens, updates its RAM-only _peer_queue_status cache (keyed on pin_sha256), and the global subscribe_events stream re-broadcasts to frontend clients so the per-peer queue-depth indicator renders live without polling. subscribe_events.initial_state.peer_queue_status carries the latest cached value per pin so late tabs paint without waiting for the next event.
  • offloader_job_state_changed{receiver_hostname, receiver_port, pin_sha256, job_id, status, error_message} — offloader-side: a paired receiver pushed a job_state_changed frame for a job we submitted. status is one of queued / running / completed / failed / cancelled (mirrors the wire frame literal). The controller mirrors the event into its RAM-only _offloader_remote_jobs cache (keyed on offloader-local job_id); terminal entries drop on the matching event so the snapshot only ever carries actively-running rows. Seeded into subscribe_events.initial_state.remote_jobs. Distinct from the local job_* family because remote-driven jobs don't have a corresponding FirmwareJob row on the offloader — the receiver owns the queue state and we only see the wire reflection.
  • offloader_job_output{receiver_hostname, receiver_port, pin_sha256, job_id, stream, line} — offloader-side: a paired receiver pushed a job_output frame for a job we submitted. stream is stdout / stderr; line preserves its trailing terminator (\n / \r / \r\n — carriage-return-only chunks are esptool / PlatformIO progress overwrites, same contract the receiver-side job_output event holds). High-rate path during an active build (one frame per line of compiler / linker output); subscribers should debounce / batch downstream rendering rather than re-render per event. No RAM cache — the offloader's job snapshot tracks lifecycle state only; output bytes belong in the live event stream.

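A client consuming these events might fold the initial_state.remote_jobs seed and subsequent offloader_job_state_changed events into one cache, mirroring the server's drop-on-terminal rule. A hedged Python sketch (event field names as documented above; the function names and cache keying are assumptions):

```python
TERMINAL = {"completed", "failed", "cancelled"}

def seed_remote_jobs(initial_state: dict) -> dict[str, dict]:
    """Seed the cache from subscribe_events' initial_state snapshot,
    keyed on the offloader-local job_id."""
    return {job["job_id"]: job for job in initial_state.get("remote_jobs", [])}

def apply_job_state_changed(cache: dict[str, dict], event: dict) -> None:
    """Mirror the server-side rule: terminal entries drop on the matching
    event, so the cache only ever holds in-flight jobs."""
    if event["status"] in TERMINAL:
        cache.pop(event["job_id"], None)
    else:
        cache[event["job_id"]] = event
```

The same shape generalises to the other snapshot-plus-events pairs (peer_queue_status / offloader_queue_status_changed, pairings / offloader_pair_status_changed).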
Legacy REST Endpoints (Deprecated)

For Home Assistant ESPHome integration backward compat only.

| Endpoint | Description |
| --- | --- |
| `GET /devices` | List devices |
| `GET /json-config?configuration=...` | Get parsed YAML as JSON |
| `GET /compile` (WebSocket) | Compile via spawn protocol |
| `GET /upload` (WebSocket) | Upload via spawn protocol |