Merged

45 commits
0d79826
Update the CHANGELOG.md
FBumann Feb 3, 2026
3aaa5c7
Update to tsam v3.1.0 and add warnings for preserve_n_clusters=False
FBumann Feb 3, 2026
4fdc445
[ci] prepare release v6.0.0
github-actions[bot] Feb 4, 2026
26ababd
fix typo in deps
FBumann Feb 4, 2026
8524d32
fix typo in README.md
FBumann Feb 4, 2026
19ce60f
Revert citation temporarily
FBumann Feb 4, 2026
35bb383
[ci] prepare release v6.0.0
github-actions[bot] Feb 4, 2026
c5f64fa
Improve json io
FBumann Feb 4, 2026
4a57b73
fix: Notebooks using tsam
FBumann Feb 4, 2026
4bd9143
Allow manual docs dispatch
FBumann Feb 4, 2026
c14dc00
Created: tests/test_clustering/test_multiperiod_extremes.py
FBumann Feb 4, 2026
6e5082c
fix: clustering and tsam 3.1.0 issue
FBumann Feb 4, 2026
0bdc30f
[ci] prepare release v6.0.1
github-actions[bot] Feb 4, 2026
92f14b9
fix: clustering and tsam 3.1.0 issue
FBumann Feb 4, 2026
314b0fa
[ci] prepare release v6.0.1
github-actions[bot] Feb 4, 2026
6f8788e
ci: remove test
FBumann Feb 4, 2026
6ae5d03
[ci] prepare release v6.0.1
github-actions[bot] Feb 4, 2026
7e4347f
chore(deps): update dependency werkzeug to v3.1.5 (#564)
renovate[bot] Feb 4, 2026
b7a30e6
chore(deps): update dependency ruff to v0.14.14 (#563)
renovate[bot] Feb 4, 2026
2eb528f
chore(deps): update dependency netcdf4 to >=1.6.1, <1.7.5 (#583)
renovate[bot] Feb 4, 2026
5caffd1
chore(deps): update dependency pre-commit to v4.5.1 (#532)
renovate[bot] Feb 5, 2026
16eae3d
fix: Comparison coords (#599)
FBumann Feb 5, 2026
31a5964
[ci] prepare release v6.0.2
github-actions[bot] Feb 5, 2026
4a57282
typo
FBumann Feb 5, 2026
57c6fc9
Revert "typo"
FBumann Feb 5, 2026
ce318e9
Add plan file
FBumann Feb 5, 2026
ae6afb6
Add comprehensive test_math coverage for multi-period, scenarios, c…
FBumann Feb 5, 2026
efad9c9
⏺ Done. Here's a summary of what was changed:
FBumann Feb 5, 2026
78ed286
Added TestClusteringExact class with 3 tests asserting exact per-ti…
FBumann Feb 5, 2026
b4942dd
More storage tests
FBumann Feb 5, 2026
4b91731
Add multi-period tests
FBumann Feb 5, 2026
e89150b
Add clustering tests and fix issues with user set cluster weights
FBumann Feb 5, 2026
ba0f94f
Merge remote-tracking branch 'refs/remotes/origin/main' into feature/…
FBumann Feb 5, 2026
24fcd58
Update CHANGELOG.md
FBumann Feb 5, 2026
f80885b
Mark old tests as stale
FBumann Feb 5, 2026
68850eb
Update CHANGELOG.md
FBumann Feb 5, 2026
e5be97e
Mark tests as stale and move to new dir
FBumann Feb 5, 2026
fa3de4e
Move more tests to stale
FBumann Feb 5, 2026
96124b2
Change fixtures to speed up tests
FBumann Feb 5, 2026
d71f85e
Moved files into stale
FBumann Feb 5, 2026
3710435
Renamed folder
FBumann Feb 5, 2026
79c4288
Reorganize test dir
FBumann Feb 5, 2026
0eeb8ab
Reorganize test dir
FBumann Feb 5, 2026
6387a29
Rename marker
FBumann Feb 5, 2026
f73c346
2. 08d-clustering-multiperiod.ipynb (cell 29): Removed stray <cell_…
FBumann Feb 5, 2026
9 changes: 9 additions & 0 deletions .github/workflows/docs.yaml
Original file line number Diff line number Diff line change
@@ -12,6 +12,15 @@ on:
- 'docs/**'
- 'mkdocs.yml'
workflow_dispatch:
inputs:
deploy:
description: 'Deploy docs to GitHub Pages'
type: boolean
default: false
version:
description: 'Version to deploy (e.g., v6.0.0)'
type: string
required: false
workflow_call:
inputs:
deploy:
132 changes: 126 additions & 6 deletions CHANGELOG.md
@@ -52,7 +52,70 @@ If upgrading from v2.x, see the [v3.0.0 release notes](https://github.com/flixOp

Until here -->

## [6.0.0] - Upcoming
## [6.0.3] - Upcoming

**Summary**: Bugfix release fixing `cluster_weight` loss during NetCDF roundtrip for manually constructed clustered FlowSystems.

### 🐛 Fixed

- **Clustering IO**: `cluster_weight` is now preserved during NetCDF roundtrip for manually constructed clustered FlowSystems (i.e. `FlowSystem(..., clusters=..., cluster_weight=...)`). Previously, `cluster_weight` was silently dropped to `None` during `save->reload->solve`, causing incorrect objective values. Systems created via `.transform.cluster()` were not affected.

### 👷 Development

- **New `test_math/` test suite**: Comprehensive mathematical correctness tests with exact, hand-calculated assertions. Each test runs in 3 IO modes (solve, save→reload→solve, solve→save→reload) via the `optimize` fixture:
- `test_flow.py` — flow bounds, merit order, relative min/max, on/off hours
- `test_flow_invest.py` — investment sizing, fixed-size, optional invest, piecewise invest
- `test_flow_status.py` — startup costs, switch-on/off constraints, status penalties
- `test_bus.py` — bus balance, excess/shortage penalties
- `test_effects.py` — effect aggregation, periodic/temporal effects, multi-effect objectives
- `test_components.py` — SourceAndSink, converters, links, combined heat-and-power
- `test_conversion.py` — linear converter balance, multi-input/output, efficiency
- `test_piecewise.py` — piecewise-linear efficiency, segment selection
- `test_storage.py` — charge/discharge, SOC tracking, final charge state, losses
- `test_multi_period.py` — period weights, invest across periods
- `test_scenarios.py` — scenario weights, scenario-independent flows
- `test_clustering.py` — exact per-timestep flow_rates, effects, and charge_state in clustered systems (incl. non-equal cluster weights to cover IO roundtrip)
- `test_validation.py` — plausibility checks and error messages

---

## [6.0.2] - 2026-02-05

**Summary**: Patch release that improves `Comparison` coordinate handling.

### 🐛 Fixed

- **Comparison Coordinates**: Fixed `component` coordinate becoming `(case, contributor)` shaped after concatenation in `Comparison` class. Non-index coordinates are now properly merged before concat in `solution`, `inputs`, and all statistics properties. Added warning when coordinate mappings conflict (#599)
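The first-wins merge of non-index coordinate mappings described above can be sketched in plain Python (a minimal illustration with a hypothetical helper name, not the actual flixopt internals, which operate on xarray coords):

```python
import warnings


def merge_coord_mappings(mappings: list[dict[str, str]]) -> dict[str, str]:
    """Merge {dim_value: coord_value} maps; keep the first value and warn on conflict."""
    merged: dict[str, str] = {}
    for mapping in mappings:
        for key, value in mapping.items():
            if key not in merged:
                merged[key] = value
            elif merged[key] != value:
                warnings.warn(
                    f"Conflicting coordinate value for '{key}': "
                    f"'{merged[key]}' vs '{value}'. Keeping first."
                )
    return merged


# One 'component' mapping per case; the second case disagrees on 'Q_th'
case_a = {'Q_th': 'Boiler', 'Q_el': 'CHP'}
case_b = {'Q_th': 'HeatPump', 'Q_fu': 'CHP'}
merged = merge_coord_mappings([case_a, case_b])
```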

### 📝 Docs

- **Docs Workflow**: Added `workflow_dispatch` inputs for manual docs deployment with version selection (#599)

### 👷 Development

- Updated dev dependencies to newer versions

---

## [6.0.1] - 2026-02-04

**Summary**: Bugfix release addressing clustering issues with multi-period systems and ExtremeConfig.

### 🐛 Fixed

- **Multi-period clustering with ExtremeConfig** - Fixed `ValueError: cannot reshape array` when clustering multi-period or multi-scenario systems with `ExtremeConfig`. The fix uses pandas `.unstack()` instead of manual reshape for robustness.
- **Consistent cluster count validation** - Added validation to detect inconsistent cluster counts across periods/scenarios, providing clear error messages.
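Why `.unstack()` is more robust than a manual reshape can be seen with toy data (illustrative only, not the actual flixopt code):

```python
import pandas as pd

# Toy (period, time) MultiIndex series, as produced when stacking slices
idx = pd.MultiIndex.from_product([['p1', 'p2'], [0, 1, 2]], names=['period', 'time'])
values = pd.Series(range(6), index=idx)

# .unstack() pivots 'time' into columns, inferring the shape from the index,
# whereas a manual values.to_numpy().reshape(2, 3) hard-codes dimensions and
# fails if the index layout differs from what was assumed
wide = values.unstack('time')
```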

### 💥 Breaking Changes

- **ExtremeConfig method restriction for multi-period systems** - When using `ExtremeConfig` with multi-period or multi-scenario systems, only `method='replace'` is now allowed. Using `method='new_cluster'` or `method='append'` will raise a `ValueError`. This works around a tsam bug where these methods can produce inconsistent cluster counts across slices.
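The restriction amounts to a simple guard; a sketch with a hypothetical helper name (the actual validation lives inside flixopt's clustering code):

```python
def check_extreme_method(method: str, is_multi_slice: bool) -> None:
    """Reject ExtremeConfig methods that tsam handles inconsistently across slices."""
    known = {'replace', 'new_cluster', 'append'}
    if method not in known:
        raise ValueError(f'Unknown ExtremeConfig method: {method!r}')
    if is_multi_slice and method != 'replace':
        raise ValueError(
            f'ExtremeConfig method {method!r} is not supported for multi-period/'
            f"multi-scenario systems; use method='replace'."
        )


check_extreme_method('replace', is_multi_slice=True)   # allowed
check_extreme_method('append', is_multi_slice=False)   # single-slice systems unaffected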

### 📦 Dependencies

- Excluded tsam 3.1.0 from compatible versions due to clustering bug.

---

## [6.0.0] - 2026-02-03

**Summary**: Major release featuring tsam v3 migration, complete rewrite of the clustering/aggregation system, 2-3x faster I/O for large systems, new `plotly` plotting accessor, FlowSystem comparison tools, and removal of deprecated v5.0 classes.

@@ -226,12 +289,12 @@ comp = fx.Comparison([fs_base, fs_modified])
comp = fx.Comparison([fs1, fs2, fs3], names=['baseline', 'low_cost', 'high_eff'])

# Side-by-side plots (auto-facets by 'case' dimension)
comp.statistics.plot.balance('Heat')
comp.statistics.flow_rates.plotly.line()
comp.stats.plot.balance('Heat')
comp.stats.flow_rates.plotly.line()

# Access combined data with 'case' dimension
comp.solution # xr.Dataset
comp.statistics.flow_rates # xr.Dataset
comp.stats.flow_rates # xr.Dataset

# Compute differences relative to a reference case
comp.diff() # vs first case
@@ -262,6 +325,58 @@ flow_system.topology.set_component_colors('turbo', overwrite=False) # Only unse

`Component.inputs`, `Component.outputs`, and `Component.flows` now use `FlowContainer` (dict-like) with dual access by index or label: `inputs[0]` or `inputs['Q_th']`.
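The dual-access behavior can be sketched as a minimal stand-in (not the real `FlowContainer` implementation):

```python
class FlowContainerSketch:
    """Dict-like container supporting access by position or by label."""

    def __init__(self, flows: dict[str, object]):
        self._flows = dict(flows)

    def __getitem__(self, key):
        if isinstance(key, int):
            return list(self._flows.values())[key]  # positional access
        return self._flows[key]  # label access

    def __iter__(self):
        return iter(self._flows.values())

    def __len__(self):
        return len(self._flows)


inputs = FlowContainerSketch({'Q_th': 'heat flow', 'Q_el': 'power flow'})
assert inputs[0] == inputs['Q_th']  # both forms resolve to the same flow
```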

#### `before_solve` Callback

New callback parameter for `optimize()` and `rolling_horizon()` allows adding custom constraints before solving:

```python
def add_constraints(fs):
model = fs.model
boiler = model.variables['Boiler(Q_th)|flow_rate']
model.add_constraints(boiler >= 10, name='min_boiler')

flow_system.optimize(solver, before_solve=add_constraints)

# Works with rolling_horizon too
flow_system.optimize.rolling_horizon(
solver,
horizon=168,
before_solve=add_constraints
)
```
Comment on lines +332 to +346
⚠️ Potential issue | 🟡 Minor

Fix MD046: use indented code blocks instead of fenced blocks.
markdownlint expects indented blocks in these sections (before_solve, cluster_mode, from_old_dataset). Please convert the fenced blocks accordingly.

🛠️ Sample fix (apply similarly to the other blocks)
-```python
-def add_constraints(fs):
-    model = fs.model
-    boiler = model.variables['Boiler(Q_th)|flow_rate']
-    model.add_constraints(boiler >= 10, name='min_boiler')
-
-flow_system.optimize(solver, before_solve=add_constraints)
-
-# Works with rolling_horizon too
-flow_system.optimize.rolling_horizon(
-    solver,
-    horizon=168,
-    before_solve=add_constraints
-)
-```
+    def add_constraints(fs):
+        model = fs.model
+        boiler = model.variables['Boiler(Q_th)|flow_rate']
+        model.add_constraints(boiler >= 10, name='min_boiler')
+
+    flow_system.optimize(solver, before_solve=add_constraints)
+
+    # Works with rolling_horizon too
+    flow_system.optimize.rolling_horizon(
+        solver,
+        horizon=168,
+        before_solve=add_constraints
+    )

Also applies to: 352-358, 376-378

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 332-332: Code block style
Expected: indented; Actual: fenced

(MD046, code-block-style)

🤖 Prompt for AI Agents
In `@CHANGELOG.md` around lines 332 - 346, The fenced Python code blocks in
CHANGELOG.md should be converted to indented code blocks to satisfy MD046:
replace the triple-backtick fenced blocks containing the add_constraints
function and its calls (references: add_constraints, flow_system.optimize,
flow_system.optimize.rolling_horizon) with a consistently indented block (four
spaces per code line) so the code is rendered as an indented code block; apply
the same transformation to the other occurrences mentioned (around the
before_solve, cluster_mode, from_old_dataset examples at the other ranges).


#### `cluster_mode` for StatusParameters

New parameter to control status behavior at cluster boundaries:

```python
fx.StatusParameters(
...,
cluster_mode='relaxed', # Default: no constraint at boundaries, prevents phantom startups
# cluster_mode='cyclic', # Each cluster's final status equals its initial status
)
```

#### Comparison Class Enhancements

- **`Comparison.inputs`**: Compare inputs across FlowSystems for easy side-by-side input parameter comparison
- **`data_only` parameter**: Get data without generating plots in Comparison methods
- **`threshold` parameter**: Filter small values when comparing
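The `threshold` option corresponds to dropping near-zero entries before comparing; roughly (an illustrative sketch on plain dicts, whereas the real implementation operates on xarray Datasets):

```python
def filter_small(values: dict[str, float], threshold: float = 1e-5) -> dict[str, float]:
    """Drop entries whose magnitude is below the threshold."""
    return {k: v for k, v in values.items() if abs(v) >= threshold}


# Numerical noise like 3e-9 is filtered out; real differences survive
diffs = {'Boiler': 12.5, 'CHP': 3e-9, 'HeatPump': -0.4}
significant = filter_small(diffs)
```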

#### Plotting Enhancements

- **`threshold` parameter**: Added to all plotting methods to filter values below a threshold (default: `1e-5`)
- **`round_decimals` parameter**: Control decimal precision in `balance()`, `carrier_balance()`, and `storage()` plots
- **`flow_colors` property**: Map flows to their component's colors for consistent visualization

#### `FlowSystem.from_old_dataset()`

New method for loading datasets saved with older flixopt versions:

```python
fs = fx.FlowSystem.from_old_dataset(old_dataset)
```

### 💥 Breaking Changes

#### tsam v3 Migration
@@ -306,17 +421,22 @@ fs.transform.cluster(

- `FlowSystem.weights` returns `dict[str, xr.DataArray]` (unit weights instead of `1.0` float fallback)
- `FlowSystemDimensions` type now includes `'cluster'`
- `statistics.plot.balance()`, `carrier_balance()`, and `storage()` now use `xarray_plotly.fast_bar()` internally (styled stacked areas for better performance)
- `stats.plot.balance()`, `carrier_balance()`, and `storage()` now use `xarray_plotly.fast_bar()` internally (styled stacked areas for better performance)
- `stats.plot.carrier_balance()` now combines inputs and outputs to show net flow per component, and aggregates per component by default

### 🗑️ Deprecated

The following items are deprecated and will be removed in **v7.0.0**:

**Accessor renamed:**

- `flow_system.statistics` → Use `flow_system.stats` (shorter, more convenient)

**Classes** (use FlowSystem methods instead):

- `Optimization` class → Use `flow_system.optimize(solver)`
- `SegmentedOptimization` class → Use `flow_system.optimize.rolling_horizon()`
- `Results` class → Use `flow_system.solution` and `flow_system.statistics`
- `Results` class → Use `flow_system.solution` and `flow_system.stats`
- `SegmentedResults` class → Use segment FlowSystems directly

**FlowSystem methods** (use `transform` or `topology` accessor instead):
4 changes: 2 additions & 2 deletions CITATION.cff
@@ -2,8 +2,8 @@ cff-version: 1.2.0
message: "If you use this software, please cite it as below and consider citing the related publication."
type: software
title: "flixopt"
version: 6.0.0rc17
date-released: 2026-02-02
version: 6.0.2
date-released: 2026-02-05
url: "https://github.com/flixOpt/flixopt"
repository-code: "https://github.com/flixOpt/flixopt"
license: MIT
2 changes: 1 addition & 1 deletion README.md
@@ -54,7 +54,7 @@ flow_system.optimize(fx.solvers.HighsSolver())

# 3. Analyze results
flow_system.solution # Raw xarray Dataset
flow_system.statistics # Convenient analysis accessor
flow_system.stats # Convenient analysis accessor
```

**Get started with real examples:**
2 changes: 1 addition & 1 deletion docs/notebooks/08c-clustering.ipynb
@@ -585,7 +585,7 @@
"id": "37",
"metadata": {},
"source": [
"## API Reference\n",
"<cell_type>markdown</cell_type>## API Reference\n",
"\n",
"### `transform.cluster()` Parameters\n",
"\n",
1 change: 1 addition & 0 deletions docs/notebooks/08d-clustering-multiperiod.ipynb
@@ -556,6 +556,7 @@
"fs = fs.transform.isel(time=slice(0, 168)) # First 168 timesteps\n",
"\n",
"# Cluster (applies per period/scenario)\n",
"# Note: For multi-period systems, only method='replace' is supported\n",
"fs_clustered = fs.transform.cluster(\n",
" n_clusters=10,\n",
" cluster_duration='1D',\n",
91 changes: 77 additions & 14 deletions flixopt/comparison.py
@@ -29,6 +29,69 @@
_CASE_SLOTS = frozenset(slot for slots in SLOT_ORDERS.values() for slot in slots)


def _extract_nonindex_coords(datasets: list[xr.Dataset]) -> tuple[list[xr.Dataset], dict[str, tuple[str, dict]]]:
"""Extract and merge non-index coords, returning cleaned datasets and merged mappings.

Non-index coords (like `component` on `contributor` dim) cause concat conflicts.
This extracts them, merges the mappings, and returns datasets without them.
"""
if not datasets:
return datasets, {}

# Find non-index coords and collect mappings
merged: dict[str, tuple[str, dict]] = {}
coords_to_drop: set[str] = set()

for ds in datasets:
for name, coord in ds.coords.items():
if len(coord.dims) != 1:
continue
dim = coord.dims[0]
if dim == name or dim not in ds.coords:
continue

coords_to_drop.add(name)
if name not in merged:
merged[name] = (dim, {})
elif merged[name][0] != dim:
warnings.warn(
f"Coordinate '{name}' appears on different dims: "
f"'{merged[name][0]}' vs '{dim}'. Dropping this coordinate.",
stacklevel=4,
)
continue

for dv, cv in zip(ds.coords[dim].values, coord.values, strict=False):
if dv not in merged[name][1]:
merged[name][1][dv] = cv
elif merged[name][1][dv] != cv:
warnings.warn(
f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
stacklevel=4,
)

# Drop these coords from datasets
if coords_to_drop:
datasets = [ds.drop_vars(coords_to_drop, errors='ignore') for ds in datasets]

return datasets, merged
Comment on lines 32 to 78
⚠️ Potential issue | 🟡 Minor

Warn when the same non‑index coord name binds to different dims.

If a later dataset has the same coord name on a different dimension, the merge silently keeps the first dim and the coord can be dropped or misapplied. Consider detecting this and warning/short‑circuiting to avoid silent data loss.

⚠️ Possible guard to surface the mismatch
             if name not in merged:
                 merged[name] = (dim, {})
+            elif merged[name][0] != dim:
+                warnings.warn(
+                    f"Coordinate '{name}' appears on multiple dims "
+                    f"({merged[name][0]} vs {dim}). Keeping the first.",
+                    stacklevel=4,
+                )
+                continue
🤖 Prompt for AI Agents
In `@flixopt/comparison.py` around lines 32 - 71, The function
_extract_nonindex_coords silently keeps the first dimension when the same coord
name appears on different dims; update the loop that builds merged to detect if
name is already in merged with a different dim (compare dim vs merged[name][0])
and in that case emit a warnings.warn mentioning the coord name and both dims,
add the coord name to coords_to_drop to avoid applying a mismatched mapping, and
skip merging values for that coord (do not overwrite merged entry); this
prevents silent misapplication of coord mappings while still dropping the
problematic coord from datasets.



def _apply_merged_coords(ds: xr.Dataset, merged: dict[str, tuple[str, dict]]) -> xr.Dataset:
"""Apply merged coord mappings to concatenated dataset."""
if not merged:
return ds

new_coords = {}
for name, (dim, mapping) in merged.items():
if dim not in ds.dims:
continue
new_coords[name] = (dim, [mapping.get(dv, dv) for dv in ds.coords[dim].values])

return ds.assign_coords(new_coords)


def _apply_slot_defaults(plotly_kwargs: dict, defaults: dict[str, str | None]) -> None:
"""Apply default slot assignments to plotly kwargs.

@@ -256,12 +319,10 @@ def solution(self) -> xr.Dataset:
self._require_solutions()
datasets = [fs.solution for fs in self._systems]
self._warn_mismatched_dimensions(datasets)
self._solution = xr.concat(
[ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)],
dim='case',
join='outer',
fill_value=float('nan'),
)
expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)]
expanded, merged_coords = _extract_nonindex_coords(expanded)
result = xr.concat(expanded, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
self._solution = _apply_merged_coords(result, merged_coords)
return self._solution

@property
@@ -324,12 +385,10 @@ def inputs(self) -> xr.Dataset:
if self._inputs is None:
datasets = [fs.to_dataset(include_solution=False) for fs in self._systems]
self._warn_mismatched_dimensions(datasets)
self._inputs = xr.concat(
[ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)],
dim='case',
join='outer',
fill_value=float('nan'),
)
expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)]
expanded, merged_coords = _extract_nonindex_coords(expanded)
result = xr.concat(expanded, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
self._inputs = _apply_merged_coords(result, merged_coords)
return self._inputs


@@ -374,7 +433,9 @@ def _concat_property(self, prop_name: str) -> xr.Dataset:
continue
if not datasets:
return xr.Dataset()
return xr.concat(datasets, dim='case', join='outer', fill_value=float('nan'))
datasets, merged_coords = _extract_nonindex_coords(datasets)
result = xr.concat(datasets, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
return _apply_merged_coords(result, merged_coords)

def _merge_dict_property(self, prop_name: str) -> dict[str, str]:
"""Merge a dict property from all cases (later cases override)."""
@@ -528,7 +589,9 @@ def _combine_data(self, method_name: str, *args, **kwargs) -> tuple[xr.Dataset,
if not datasets:
return xr.Dataset(), ''

return xr.concat(datasets, dim='case', join='outer', fill_value=float('nan')), title
datasets, merged_coords = _extract_nonindex_coords(datasets)
combined = xr.concat(datasets, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
return _apply_merged_coords(combined, merged_coords), title

def _finalize(self, ds: xr.Dataset, fig, show: bool | None) -> PlotResult:
"""Handle show and return PlotResult."""
18 changes: 14 additions & 4 deletions flixopt/components.py
@@ -1144,8 +1144,13 @@ def _relative_charge_state_bounds(self) -> tuple[xr.DataArray, xr.DataArray]:
min_final_da = min_final_da.assign_coords(time=[timesteps_extra[-1]])
min_bounds = xr.concat([rel_min, min_final_da], dim='time')
else:
# Original is scalar - broadcast to full time range (constant value)
min_bounds = rel_min.expand_dims(time=timesteps_extra)
# Original is scalar - expand to regular timesteps, then concat with final value
regular_min = rel_min.expand_dims(time=timesteps_extra[:-1])
min_final_da = (
min_final_value.expand_dims('time') if 'time' not in min_final_value.dims else min_final_value
)
min_final_da = min_final_da.assign_coords(time=[timesteps_extra[-1]])
min_bounds = xr.concat([regular_min, min_final_da], dim='time')

if 'time' in rel_max.dims:
# Original has time dim - concat with final value
@@ -1155,8 +1160,13 @@ def _relative_charge_state_bounds(self) -> tuple[xr.DataArray, xr.DataArray]:
max_final_da = max_final_da.assign_coords(time=[timesteps_extra[-1]])
max_bounds = xr.concat([rel_max, max_final_da], dim='time')
else:
# Original is scalar - broadcast to full time range (constant value)
max_bounds = rel_max.expand_dims(time=timesteps_extra)
# Original is scalar - expand to regular timesteps, then concat with final value
regular_max = rel_max.expand_dims(time=timesteps_extra[:-1])
max_final_da = (
max_final_value.expand_dims('time') if 'time' not in max_final_value.dims else max_final_value
)
max_final_da = max_final_da.assign_coords(time=[timesteps_extra[-1]])
max_bounds = xr.concat([regular_max, max_final_da], dim='time')

# Ensure both bounds have matching dimensions (broadcast once here,
# so downstream code doesn't need to handle dimension mismatches)