feat: use usage_events as primary spend source, enrich user detail page
- Fix spend discrepancies across the dashboard, user detail page, cron alerts,
and model efficiency rankings by preferring usage_events over the stale
daily_spend/spending tables (the billing groups API has a ~2-day retention window)
- Enrich user detail page: accept rate KPI, team rank KPI, power user
radar axis, Tools & Features section (MCP tools + commands), model
preferences breakdown
- Add per-user MCP and commands collection from by-user analytics API
endpoints with new DB tables and collector tasks
- Default dashboard time range to 30d with localStorage persistence
- Update context files with API data reliability knowledge
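The 30d default with localStorage persistence mentioned above can be sketched as follows. This is a minimal illustration, not the project's actual code; the storage key and helper names are assumptions.

```typescript
const TIME_RANGES = ["24h", "3d", "7d", "14d", "30d"] as const;
type TimeRange = (typeof TIME_RANGES)[number];

// Assumed storage key; the real project may use a different identifier.
const STORAGE_KEY = "dashboard.timeRange";

// Minimal key-value interface so the sketch works with localStorage or a mock.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Read the persisted range, falling back to the 30d default when the saved
// value is missing or not one of the valid options.
function loadTimeRange(store: KeyValueStore): TimeRange {
  const saved = store.getItem(STORAGE_KEY);
  return saved !== null && (TIME_RANGES as readonly string[]).includes(saved)
    ? (saved as TimeRange)
    : "30d";
}

function saveTimeRange(store: KeyValueStore, range: TimeRange): void {
  store.setItem(STORAGE_KEY, range);
}
```

In the browser, `localStorage` itself satisfies `KeyValueStore`, so the same functions serve both the UI and tests.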
Co-authored-by: Cursor <cursoragent@cursor.com>
.cursor/rules/cursor-api-data-guide.mdc (+55 −2)
@@ -88,6 +88,9 @@ Key concepts that affect how we interpret the data:
- `included_spend_cents` = the portion covered by the plan
- Actual overage = `spend_cents - included_spend_cents`
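A minimal sketch of the overage formula above. Field names follow the doc; clamping at zero when spend is under the allowance is my assumption, not stated in the doc.

```typescript
// Overage per the formula above: spend_cents - included_spend_cents.
// The Math.max(0, ...) clamp (no negative overage when under the plan
// allowance) is an assumption added for illustration.
function overageCents(spendCents: number, includedSpendCents: number): number {
  return Math.max(0, spendCents - includedSpendCents);
}
```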
+
+### Model Pricing
+Cursor charges at provider list prices (Anthropic, OpenAI, Google, xAI) plus a Teams/Enterprise surcharge of $0.25/1M total tokens. Max mode adds +20% on top. Auto mode has fixed blended rates. There is no public API for pricing tables — the canonical source is `cursor.com/docs/models`. Per-model token prices are NOT needed in our code because `usage_events.total_cents` already has the computed cost per request.
+

### Model Cost Drivers
Model choice is the PRIMARY cost driver. The specific models change over time, but the cost principles are stable:
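The pricing rule described in the Model Pricing section can be sketched as arithmetic. The per-token rates passed in are illustrative placeholders, NOT Cursor's actual price table (the canonical source is `cursor.com/docs/models`), and the function name is hypothetical.

```typescript
const TEAMS_SURCHARGE_PER_MTOK = 0.25; // $0.25 per 1M total tokens (Teams/Enterprise)
const MAX_MODE_MULTIPLIER = 1.2;       // Max mode adds +20% on top

// Hypothetical cost estimate: provider list price + flat surcharge, then the
// Max mode multiplier. Real code never needs this because
// usage_events.total_cents already carries the computed per-request cost.
function estimateRequestCostUsd(opts: {
  inputTokens: number;
  outputTokens: number;
  inputPricePerMtok: number;  // provider list price, $/1M input tokens (placeholder)
  outputPricePerMtok: number; // provider list price, $/1M output tokens (placeholder)
  maxMode?: boolean;
}): number {
  const totalTokens = opts.inputTokens + opts.outputTokens;
  const providerCost =
    (opts.inputTokens / 1e6) * opts.inputPricePerMtok +
    (opts.outputTokens / 1e6) * opts.outputPricePerMtok;
  const surcharge = (totalTokens / 1e6) * TEAMS_SURCHARGE_PER_MTOK;
  const cost = providerCost + surcharge;
  return opts.maxMode ? cost * MAX_MODE_MULTIPLIER : cost;
}
```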
@@ -120,12 +123,15 @@ Legacy field from the old fixed-pricing model. It's NOT the current billing mech
### Now Collected
- Per-request token/cost data from `/teams/filtered-usage-events` - gives per-model cost breakdown per user. Stored in `usage_events` table. Collected incrementally (since last timestamp).
-- Command adoption from `/analytics/team/commands` - which Cursor commands people use (explain, refactor, etc.). Stored in `analytics_commands` table.
+- Command adoption from `/analytics/team/commands` - team-level command usage. Stored in `analytics_commands` table.
- Plan mode adoption from `/analytics/team/plans` - plan mode usage by model. Stored in `analytics_plans` table.
+- Per-user MCP tool usage from `/analytics/by-user/mcp` - which MCP tools each user uses. Stored in `analytics_user_mcp` table.
+- Per-user command usage from `/analytics/by-user/commands` - which commands each user uses. Stored in `analytics_user_commands` table.

### Not Currently Collected (but available)
- AI Code Tracking data from `/analytics/ai-code/commits` - would give us accurate AI vs human line attribution
- Per-user breakdowns from `/analytics/by-user/*` endpoints for: agent-edits, tabs, models, plans, ask-mode, client-versions, top-file-extensions (we collect mcp and commands per-user, but not these others)
+- Leaderboard from `/analytics/team/leaderboard` - ranks users by tab accepts and agent edits. We chose NOT to collect this because it introduces a third ranking system that conflicts with our own spend_rank and activity_rank, confusing stakeholders.
- `cmdkUsages`, `subscriptionIncludedReqs`, `apiKeyReqs`, `bugbotUsages` - available in daily usage but not stored
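The "collected incrementally (since last timestamp)" approach in the list above can be sketched as a watermark pattern. The function and fetcher names are illustrative, not the project's real API.

```typescript
interface UsageEvent {
  timestamp: number;  // ms epoch of the request
  userEmail: string;
  totalCents: number; // computed cost per request, per the doc
}

// Watermark-based incremental collection: fetch only events strictly newer
// than the latest stored timestamp, so repeated runs never re-download the
// full billing cycle history. `store` and `fetchSince` are stand-ins for the
// real DB table and API client.
async function collectUsageEvents(
  store: UsageEvent[],
  fetchSince: (sinceMs: number) => Promise<UsageEvent[]>,
): Promise<number> {
  const watermark = store.reduce((m, e) => Math.max(m, e.timestamp), 0);
  const fresh = await fetchSince(watermark + 1); // strictly newer; empty store => full backfill
  store.push(...fresh);
  return fresh.length;
}
```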
@@ -151,3 +157,50 @@ Low accept rate could mean: picky reviewer (good), bad prompting (fixable), or w
`lines_added / agent_requests`

Highly task-dependent. A debugging session produces 0 lines. A scaffolding task produces 500. Not a quality metric.
+
+## Daily Spend Data Sources
+
+`usage_events` (from `/teams/filtered-usage-events`) is the most reliable source for daily spend data. It has per-request cost (`total_cents`) with full billing cycle history and no retention window. `daily_spend` (from `/teams/groups` billing groups API) has only ~2 days retention and systematically underreports compared to `usage_events`.
+
+The dashboard daily spend chart uses `usage_events` as the primary source, falling back to `daily_spend` only when the `usage_events` table is empty (e.g., a fresh install that hasn't collected events yet). The chart marks the last 2 days as "provisional" since spend data for today/yesterday may still be accumulating.
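The primary/fallback selection described above is simple enough to sketch. Table and column names follow the doc; the function itself is an illustration, not the dashboard's actual source.

```typescript
type DailySpendRow = { day: string; cents: number };

// Prefer rows aggregated from usage_events.total_cents; fall back to the
// ~2-day-retention daily_spend table only when usage_events has no rows at
// all (e.g. a fresh install that has not collected events yet).
function pickDailySpendSeries(
  usageEventsDaily: DailySpendRow[],
  dailySpendTable: DailySpendRow[],
): { source: "usage_events" | "daily_spend"; rows: DailySpendRow[] } {
  if (usageEventsDaily.length > 0) {
    return { source: "usage_events", rows: usageEventsDaily };
  }
  return { source: "daily_spend", rows: dailySpendTable };
}
```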
+
+## Conversation Insights (Dashboard-Only)
+
+The Cursor web dashboard has a "Conversation Insights" page (`cursor.com/dashboard?tab=conversation-insights`) that shows Work Type (KTLO/Feature/Bug), Intent Distribution (Write Code/Ask/Task Automation/Plan), Categories (Bug Fix/Configuration/Feature/Refactor), Task Complexity, and Prompt Specificity. This data is computed server-side from conversation content using AI analysis. There is NO API endpoint for it — it is a dashboard-only enterprise feature.
+## Critical: Billing Groups API Daily Spend Retention
+
+The `/teams/groups` endpoint returns `dailySpend` per member, but this data has a **very short retention window — approximately 2 days**. Older daily spend data is dropped from the API response entirely.
+
+This means:
+- If you don't collect at least once per day, you will permanently lose daily spend granularity for missed days
+- Early-day collections capture incomplete data (spend accumulates throughout the day)
+- The `upsertDailySpend` function uses `MAX(existing, new)` to prevent regressions from partial data overwriting complete data
+- Ideal collection frequency: at least twice daily (e.g. midday + end of day) to capture most of each day's spend before it falls off the API
+- The dashboard marks the last 2 days as "partial (API lag)" since spend data may not be fully settled yet
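The `MAX(existing, new)` rule in the list above can be sketched with an in-memory map standing in for the real `daily_spend` table. The project's `upsertDailySpend` presumably runs SQL; this body illustrates the rule only.

```typescript
// In-memory stand-in for the daily_spend table: `${email}|${day}` -> cents.
const dailySpend = new Map<string, number>();

// MAX(existing, new): an early, partial collection must never overwrite a
// later, larger value for the same user and day.
function upsertDailySpend(email: string, day: string, cents: number): void {
  const key = `${email}|${day}`;
  const existing = dailySpend.get(key) ?? 0;
  dailySpend.set(key, Math.max(existing, cents));
}
```

This is why collecting twice daily is safe: a midday partial value is simply superseded by the larger end-of-day value, and a late partial re-collection cannot regress a complete one.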
.cursor/rules/project-context.mdc (+3 −3)
@@ -65,9 +65,9 @@ Single cron endpoint `POST /api/cron` does both: collect → detect → alert in
## Dashboard Pages
-- `/` — Team overview: stat cards, spend bar chart, daily spend trend, spend breakdown by user, members table with search/sort, **group filter dropdown**, time range picker (24h/3d/7d/14d/30d), billing cycle progress
+- `/` — Team overview: stat cards, model cost comparison table ($/request relative multipliers), daily spend trend (sourced from `usage_events` with `daily_spend` fallback, last 2 days marked provisional), spend breakdown by user, members table with search/sort, **group filter dropdown**, time range picker (24h/3d/7d/14d/30d), billing cycle progress
- `/insights` — Analytics: DAU chart, model adoption, model efficiency rankings, MCP tool usage, file extensions, client versions
-- `/users/[email]` — Per-user: token timeline, model pie chart, feature breakdown, activity profile, anomaly history
+- `/users/[email]` — Per-user detail: KPI cards (cycle spend, $/req, agent reqs, accept rate, team rank), spend trend chart, usage profile radar (activity, intensity, tab usage, precision, on plan, power user), cost breakdown by model, tools & features (MCP tools + commands per user), model preferences, daily activity table, anomaly history
- `/anomalies` — MTTD/MTTI/MTTR metrics, open incidents (acknowledge/resolve), anomaly table
- `/settings` — Detection thresholds, **billing group management** (rename, assign, create), **HiBob CSV import** with change preview
@@ -90,7 +90,7 @@ Single cron endpoint `POST /api/cron` does both: collect → detect → alert in