
fix: show '—' for Requests/Premium Cost in model table for pure-active sessions (#1191) #1195

Open

microsasa wants to merge 1 commit into main from
fix/1191-render-model-table-dash-for-active-756f83d5611f891b
Conversation

@microsasa (Owner)

Closes #1191

Problem

_render_model_table in render_summary displayed 0 for Requests and Premium Cost columns when rendering pure-active sessions (still running, no shutdown event). This is misleading because the data is simply not yet available — not actually zero.

Fix

Added a show_requests guard (mirroring the existing logic in render_cost_view) that checks whether any session has shutdown metrics or is no longer active. When all sessions are pure-active, Requests and Premium Cost columns now display "—" instead of "0".

The condition used is:

show_requests = any(s.has_shutdown_metrics or not s.is_active for s in sessions)

This is consistent with the per-session logic already used in render_cost_view (line 686).
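The guard's behaviour can be illustrated with a minimal sketch. The `Session` dataclass below is a stand-in for the project's real session object, keeping only the two flags the condition reads (field names taken from the PR text):

```python
from dataclasses import dataclass


@dataclass
class Session:
    """Stand-in for the project's session object (illustration only)."""
    has_shutdown_metrics: bool
    is_active: bool


def show_requests(sessions: list[Session]) -> bool:
    # Request/cost data is only meaningful once at least one session has
    # completed shutdown metrics or has stopped running.
    return any(s.has_shutdown_metrics or not s.is_active for s in sessions)


# A pure-active session: still running, no shutdown event yet.
pure_active = [Session(has_shutdown_metrics=False, is_active=True)]
print(show_requests(pure_active))   # False -> render "—"

# One completed session is enough to show real request/cost numbers.
mixed = pure_active + [Session(has_shutdown_metrics=True, is_active=False)]
print(show_requests(mixed))         # True -> render counts and costs
```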

Testing

Added test_pure_active_session_shows_dash_for_requests_and_cost to TestRenderSummary which:

  1. Creates a pure-active session (is_active=True, has_shutdown_metrics=False, known model, non-zero output tokens)
  2. Asserts the model name and output token value appear in the Per-Model Breakdown table
  3. Asserts the Requests and Premium Cost cells show "—" (not "0")
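The dash assertions in step 3 rely on splitting the rendered row into columns. The snippet below sketches that parsing against a hypothetical row string; the real test renders the row via render_summary, but the column-extraction logic is the same:

```python
# Hypothetical rendered table row for a pure-active session (the actual
# column widths and token values come from render_summary's output).
model_row = "claude-sonnet-4    —    —    1,200    340    0    0"

# Split on the model name and read the numeric columns that follow.
after_model = model_row.split("claude-sonnet-4", 1)[1]
cols = after_model.split()
# cols order: [Requests, Premium Cost, Input, Output, Cache Read, Cache Write]
assert cols[0] == "—", f"Requests column should be '—', got {cols[0]!r}"
assert cols[1] == "—", f"Premium Cost column should be '—', got {cols[1]!r}"
print("dash assertions pass")
```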

Warning

⚠️ Firewall blocked 1 domain

The following domain was blocked by the firewall during workflow execution:

  • pypi.org

To allow these domains, add them to the network.allowed list in your workflow frontmatter:

network:
  allowed:
    - defaults
    - "pypi.org"

See Network Configuration for more information.

Generated by Issue Implementer

…e sessions

_render_model_table now checks whether any session has shutdown metrics
or is no longer active before displaying request counts and premium cost.
When all sessions are pure-active (still running, no shutdown event),
these columns show '—' instead of misleading '0' — consistent with the
existing behaviour in render_cost_view.

Closes #1191

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings May 5, 2026 11:22
@microsasa microsasa added the aw Created by agentic workflow label May 5, 2026
@microsasa microsasa enabled auto-merge May 5, 2026 11:22

Copilot AI left a comment


Pull request overview

This PR updates the summary report in copilot_usage so the per-model table no longer shows misleading zero request/cost values for pure-active sessions, aligning render_summary more closely with the existing cost-view behavior.

Changes:

  • Added a show_requests guard in _render_model_table to render "—" instead of 0 for Requests and Premium Cost when all summarized sessions are pure-active.
  • Expanded _render_model_table documentation to describe the pure-active display behavior.
  • Added a regression test covering a pure-active session in render_summary.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

  • src/copilot_usage/report.py: Adjusts per-model summary rendering for request/cost columns in pure-active-session scenarios.
  • tests/copilot_usage/test_report.py: Adds a regression test intended to validate the new pure-active summary-table output.

Comment on lines +364 to +366
# Request/cost data is only meaningful when at least one session has
# completed shutdown metrics or is no longer active.
show_requests = any(s.has_shutdown_metrics or not s.is_active for s in sessions)
Comment on lines +1266 to +1275
# The row must NOT show "0" as the requests count (which would be
# between the model name and the "—" or the output tokens).
# Split by the model name and check the numeric columns that follow.
after_model = model_row.split("claude-sonnet-4", 1)[1]
# Strip leading/trailing and split by whitespace for column values
cols = after_model.split()
# cols should be: [Requests, PremiumCost, InputTokens, OutputTokens,
# CacheRead, CacheWrite]
assert cols[0] == "—", f"Requests column should be '—', got '{cols[0]}'"
assert cols[1] == "—", f"Premium Cost column should be '—', got '{cols[1]}'"
