Add Gen3 (MAIN 40 / MLO 48) gRPC support #169
Griswoldlabs wants to merge 4 commits into SpanPanel:main
Conversation
Adds local gRPC-based support for Gen3 Span panels alongside existing Gen2 REST support. Gen2 code is completely untouched — Gen3 activates only via auto-detection in the config flow.

Architecture:
- New gen3/ subdirectory with isolated Gen3 code path
- SpanGrpcClient: connects to port 50065, raw protobuf parsing
- SpanGen3Coordinator: wraps push-based streaming in DataUpdateCoordinator
- Config flow auto-detects Gen2 vs Gen3 (REST → gRPC fallback)
- Gen3 panels require no authentication

Entities:
- Main feed: power, voltage, current, frequency sensors
- Per-circuit: power, voltage, current sensors + breaker binary sensor
- Device hierarchy: panel → circuit sub-devices

Not included (future PRs):
- Circuit relay control via gRPC UpdateState RPC
- Energy accumulation (Gen3 gRPC doesn't provide this yet)
- Solar sensor combining

Closes SpanPanel#96
Relates to SpanPanel#98

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The METRIC_IID_OFFSET was hardcoded to 27, which only worked for panels where trait 26 (metrics) IIDs start at 28. On panels with different numbering, this caused names to pair with wrong power readings. Now dynamically discovers both trait 16 (name) and trait 26 (metric) instance IDs during setup and pairs them by sorted position, making the mapping work regardless of the panel's IID numbering scheme. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Thanks for testing this @cecilkootz! You're right — there was a bug in the circuit mapping.

Root cause: The mapping between circuit names (trait 16) and power metrics (trait 26) used a hardcoded offset of 27 between the two instance ID spaces. This was reverse-engineered from one specific MAIN 40 panel where trait 16 IIDs are 1-25 and trait 26 IIDs are 28-52. On your panel the numbering is likely different, so names were getting paired with the wrong power readings.

Fix (just pushed): Instead of assuming a fixed offset, the integration now discovers both trait 16 and trait 26 instance IDs during setup, sorts them, and pairs them by position. This should work correctly regardless of how your panel numbers its instances.

Could you try the updated branch and let me know if the mapping is correct now? If you're still seeing mismatches, enabling debug logging for
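For reference, a minimal sketch of the pairing-by-position approach described above; the trait numbers come straight from this thread, but the helper name and data shapes are hypothetical rather than the actual code in the branch:

```python
# Sketch only: pair trait 16 (name) IIDs with trait 26 (metric) IIDs by sorted
# position instead of assuming a fixed offset. Helper name is hypothetical.
def pair_names_with_metrics(name_iids: list[int], metric_iids: list[int]) -> dict[int, int]:
    """Return {name_iid: metric_iid}, pairing the two IID spaces by sorted position."""
    names, metrics = sorted(name_iids), sorted(metric_iids)
    if len(names) != len(metrics):
        # Counts can differ (e.g. 31 names vs 36 metrics on an MLO 48), which is
        # exactly where positional pairing breaks down, as later comments show.
        raise ValueError(f"IID count mismatch: {len(names)} names vs {len(metrics)} metrics")
    return dict(zip(names, metrics))
```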
Yeah. Let me get the update pushed and try again. Sorry for the original comment removal. I have an MLO 48 and wanted to debug further before raising an issue.
This PR worked for me, except I have two Span panels. After using AI coding tools I was able to create a workaround, but when you get a chance it'd be great to add support for more than one Gen3 panel.
@Griswoldlabs I have taken the liberty of refactoring to accommodate your gRPC support in both the span-panel-api and the span repo. I can only confirm that Gen2 is not broken, but hopefully your PR is faithfully represented as well. See the handoff and let me know how I can facilitate any further integration. There are planning docs in docs/dev for each repo that explain what changed and why.

Once you are able to test and are satisfied with your panel, we can publish a beta and invite others to use it. An org invite will be forthcoming. The credit is all yours. I have yet to update the READMEs, but the changelogs are updated. Simulation mode only works for Gen2 at present, but no big deal. Thanks on behalf of the community.
Support for reading and exposing the physical breaker slot position from Gen3 panels. It introduces a new "Panel Position" sensor that displays the breaker position (1-48) for each circuit, and refactors the circuit discovery logic to properly resolve breaker group information.
This is incredible — thank you for taking this on so quickly. The architecture looks exactly right: gRPC transport in the library, capabilities-based entity loading, push vs poll auto-selection. I'll pull both branches and test against my MAIN 40 this week. The main thing I want to verify is that the refactored decoders produce the same readings I validated against the Span app.

The big open item is the name/metric IID count mismatch on @cecilkootz's MLO 48 (31 names vs 36 metrics). My best theory is that 240V dual-phase circuits report two metric IIDs (one per breaker position) but share a single trait 16 name. Trait 15 (Breaker Groups) likely holds the mapping between physical breaker positions and named circuits — I'll look into using that to properly correlate them. Unfortunately I can only test single-phase dedup logic against my MAIN 40, so @cecilkootz's help will be essential for validating the MLO 48 fix.

Looking forward to the org invite. Happy to own the gRPC side going forward.
Glad it's working! Multi-panel support should be doable — the config flow already creates a separate config entry per host, so in theory each panel gets its own coordinator and entity set. The issue is likely unique_id collisions or the config flow not allowing a second Gen3 entry. Can you share what specifically broke with two panels? (e.g., did the second panel fail to add, or did entities from both panels merge together?)
I pushed up a PR to your branch with a working version. I did some inspection and think the discovery works (at least on the MLO 48). I jumped in a bit while other changes were going on, so it may need more work. Thank you for working through this and getting a working solution for Gen3.
The _parse_breaker_group method already distinguishes field 11 (single-pole) from field 13 (dual-pole) but wasn't propagating that info. Now returns is_dual_phase and sets it on CircuitInfo. Tested on MAIN 40: correctly identifies Furnace, Electric dryer, Water heater, and Electric range as 240V dual-phase circuits. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
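Illustratively, the propagation described in this commit boils down to something like the sketch below; the field numbers (11 single-pole, 13 dual-pole) come from the commit message, while the decoded-field shape and the CircuitInfo fields shown are assumptions for the example, not the integration's exact types:

```python
from dataclasses import dataclass, field

@dataclass
class CircuitInfo:
    """Illustrative stand-in for the integration's circuit model."""
    name: str = ""
    positions: list[int] = field(default_factory=list)
    is_dual_phase: bool = False

def parse_breaker_group(decoded: dict[int, list[int]]) -> tuple[list[int], bool]:
    """Sketch: extract breaker positions and a dual-phase flag from one Breaker Group.

    `decoded` is assumed to be the already-parsed protobuf field map for a single
    trait 15 instance: {field_number: [breaker positions]}.
    """
    if 13 in decoded:                      # field 13: dual-pole (240V) group
        return decoded[13], True
    return decoded.get(11, []), False      # field 11: single-pole (120V) group

# Propagating the flag onto the circuit, per the follow-up commit:
# positions, is_dual = parse_breaker_group(decoded_fields)
# circuit = CircuitInfo(name=circuit_name, positions=positions, is_dual_phase=is_dual)
```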
Breaker Group Mapping — Validated on Both MAIN 40 and MLO 48 ✓

Great work @cecilkootz! Merged your PR and pushed one small follow-up commit to also propagate is_dual_phase.

What Changed

The original positional pairing approach (sorting trait 16 name IIDs and trait 26 metric IIDs, pairing by index) was fundamentally wrong: 240V dual-phase circuits report two metric IIDs but share a single trait 16 name, so the counts don't line up (31 names vs 36 metrics on the MLO 48).

Fix: Use Trait 15 (Breaker Groups) as the authoritative mapping source. Each BG instance ties a circuit name to its physical breaker position(s) and the corresponding metric instance(s), with dual-pole groups carrying both halves of a 240V circuit.

This eliminates the hardcoded offset AND the positional pairing — both were fragile.

Results

MAIN 40 (my panel): 25 circuits, 4 dual-phase, all names and positions correct. MLO 48 (@cecilkootz's panel): 31 circuits, 10 dual-phase, all correct.

cecilkootz also added a Panel Position sensor showing the physical breaker slot number for each circuit, which is great for verification and labeling.

For @cayossarian

I've synced the same BG-based mapping fix into the refactored `span-panel-api` library. Happy to push that to the `span-panel-api` branch as well.
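To make the BG-based approach concrete, here is a rough sketch of the mapping step. The exact trait 15 field layout isn't spelled out in this thread, so the BreakerGroup shape below (one name IID, one or two metric IIDs, plus physical positions) is an assumption used only for illustration:

```python
from dataclasses import dataclass

@dataclass
class BreakerGroup:
    """Illustrative shape for one decoded trait 15 (Breaker Group) instance."""
    name_iid: int            # trait 16 instance holding the circuit name
    metric_iids: list[int]   # one trait 26 IID for 120V, two for 240V dual-phase
    positions: list[int]     # physical breaker slot(s), 1-48

def build_circuit_map(groups: list[BreakerGroup]) -> dict[int, dict]:
    """Sketch: derive circuits from Breaker Groups instead of positional pairing."""
    circuits: dict[int, dict] = {}
    for bg in groups:
        circuits[bg.name_iid] = {
            "metric_iids": bg.metric_iids,            # combined for 240V circuits
            "positions": bg.positions,                # feeds the Panel Position sensor
            "is_dual_phase": len(bg.metric_iids) > 1,
        }
    return circuits
```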
@Griswoldlabs the span-panel-api branch is for all intents and purposes yours, so you can submit the PR from it. I'll do another test, and once we are comfortable that span-panel-api is working as desired we can publish a span version (which also pushes to PyPI). At that point we can publish a beta off the span branch; no need to merge that first, since there might be feedback we want to get into the branch before merging to main, which would update docs prematurely. Anybody at that point can install the beta from HACS directly.

The org invite is nearly secondary, as membership simply allows you to approve other folks' PRs and merge. The repo rules require a contributor approval prior to merge, and that rule is necessary now more than ever to test multiple panel versions.
Library Update: BG mapping fix pushed to span-panel-api

Just pushed the Breaker Group mapping fix to the `span-panel-api` library.

What changed (library side)
Integration side

The same fix was already merged into the integration fork via @cecilkootz's PR plus the dual-phase follow-up commit.

Deployed & validated

Deployed to a live MAIN 40 panel running Home Assistant — 25 circuits discovered, 4 dual-phase correctly identified, all names and power readings match the Span app. Panel position sensors showing physical slot numbers. Also validated by @cecilkootz on MLO 48 (31 circuits, 10 dual-phase) — all correct.
During some more testing tonight I got an alert that HA disk/CPU had an abnormal increase over a sustained period of time. Thinking it was perhaps the unary stream from the channel updating 1/s, I re-used the scan_interval from Gen2 to "throttle" updates. Now pushes to HA are configured with the scan_interval to avoid excessive DB writes. I don't believe data loss will occur, as the gRPC client always holds the latest readings. Data loss may occur on disconnect or restart, but it's limited to whatever was received in the last interval period. In my case I left the default of 15s. After doing this, CPU dropped back to normal, as did disk I/O.

This may be a paranoid overreaction. If this was useful, any thoughts on whether it should be a new config workflow for Gen3? As an aside, the spikes I was randomly seeing have all but disappeared. Going to let this run overnight and see if anything strange shows up.
@cecilkootz Good catch on the CPU/disk spike from the 1/s push stream. That's definitely something we need to address before beta. The gRPC stream updates roughly once per second, which is far more than HA's recorder needs. I think this should be a config flow option for Gen3 with a sensible default (15s to match Gen2). The coordinator can buffer the latest readings from the stream and only push to HA on the interval tick. That way the gRPC client always has fresh data internally (for instant response to manual polls) but HA's recorder isn't overwhelmed. I'll work this into the library-side coordinator when I test @cayossarian's refactored branches this week.

@haggerty23 Re: multi-panel — the config flow already creates a separate config entry per host, so each panel should get its own coordinator and entity set. The issue is likely unique_id collisions or the config flow blocking a second Gen3 entry. Can you share what specifically broke? (Did the second panel fail during setup, or did entities from both panels merge/conflict?)
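A minimal sketch of the buffer-and-tick approach described above: a background task consumes the 1/s stream and keeps the client's cache fresh, while HA only sees the latest snapshot on each scan_interval tick. The class name and the client's `latest_snapshot` attribute are made up for the example, not necessarily what the branch uses:

```python
from datetime import timedelta
import logging

from homeassistant.core import HomeAssistant
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator

_LOGGER = logging.getLogger(__name__)


class ThrottledGen3Coordinator(DataUpdateCoordinator[dict]):
    """Sketch: hand HA the latest cached readings on scan_interval ticks,
    instead of pushing every 1/s stream message into the recorder."""

    def __init__(self, hass: HomeAssistant, client, scan_interval: int) -> None:
        super().__init__(
            hass,
            _LOGGER,
            name="span_gen3",
            update_interval=timedelta(seconds=scan_interval),  # default 15s, like Gen2
        )
        self._client = client  # a background stream task keeps this fresh at ~1/s

    async def _async_update_data(self) -> dict:
        # The stream has already parsed the newest readings; just return the
        # latest snapshot once per interval tick. Attribute name is illustrative.
        return self._client.latest_snapshot
```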

Gen3 (MAIN 40 / MLO 48) gRPC Support
Adds local gRPC-based support for Gen3 Span panels alongside existing Gen2 REST support. Gen2 code is completely untouched — Gen3 activates only via auto-detection in the config flow.
What this does
Architecture
- `gen3/` subdirectory with isolated Gen3 code path
- `SpanGrpcClient`: raw protobuf parsing on port 50065 (no generated stubs needed)
- `SpanGen3Coordinator`: wraps push-based streaming in HA's `DataUpdateCoordinator` pattern
- Could be extracted into a `span-panel-grpc` package later

Config Flow Detection Logic
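As a rough illustration of the REST → gRPC fallback, the detection step looks something like the sketch below. `detect_panel_generation` and the Gen2 probe callback are illustrative names for this example; only `_test_gen3_connection()` appears in the file list further down:

```python
import asyncio

import grpc  # grpcio>=1.60.0, the dependency added to manifest.json

GEN3_GRPC_PORT = 50065


class CannotConnect(Exception):
    """Neither the Gen2 REST API nor the Gen3 gRPC port answered."""


async def _test_gen3_connection(host: str, timeout: float = 5.0) -> bool:
    """Sketch: treat the panel as Gen3 if a gRPC channel to :50065 becomes ready."""
    channel = grpc.aio.insecure_channel(f"{host}:{GEN3_GRPC_PORT}")
    try:
        await asyncio.wait_for(channel.channel_ready(), timeout)
        return True
    except (asyncio.TimeoutError, grpc.aio.AioRpcError):
        return False
    finally:
        await channel.close()


async def detect_panel_generation(host: str, test_gen2_rest) -> str:
    """REST first (Gen2), then gRPC fallback (Gen3); Gen3 needs no authentication."""
    if await test_gen2_rest(host):          # existing Gen2 REST probe, unchanged
        return "gen2"
    if await _test_gen3_connection(host):
        return "gen3"
    raise CannotConnect(f"No Span panel reachable at {host}")
```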
What's included
What's NOT included (future PRs)
- Circuit relay control via the `UpdateState` RPC
- Energy accumulation (Gen3 gRPC doesn't provide this yet)
- Solar sensor combining

Modified files (minimal, surgical changes)
- `const.py` — new `CONF_PANEL_GEN` constant
- `manifest.json` — `grpcio>=1.60.0` dependency, bump to v1.4.0
- `config_flow.py` — `_test_gen3_connection()` helper
- `__init__.py`
- `sensor.py`
- `binary_sensor.py`

New files (all under `gen3/`)

- `__init__.py`
- `const.py`
- `span_grpc_client.py`
- `coordinator.py` — `DataUpdateCoordinator` wrapper for push-based streaming
- `sensors.py`
- `binary_sensors.py`
- `span.proto` set

Testing
Related

- Closes SpanPanel#96
- Relates to SpanPanel#98
Background
We (GriswoldLabs) reverse-engineered the Gen3 gRPC protocol and built a standalone integration that's been running successfully. Per Discussion #168, @cayossarian offered org admin access for a PR with a pluggable architecture. This PR implements that as an additive, non-breaking change with Gen3 code fully isolated in its own subdirectory.
🤖 Generated with Claude Code