Score your AI coding sessions for orchestration quality — directly inside Cursor.
SessionStellar scores AI sessions across 5 weighted metrics:
| Metric | Weight | What it measures |
|---|---|---|
| Skill Diversity | 20% | Range of tools and skills invoked |
| Decision Depth | 25% | Explicit tradeoffs and reasoning quality |
| Error Recovery | 20% | How errors are handled and recovered from |
| Compound Learning | 20% | Cross-step insights building on context |
| Orchestration Mastery | 15% | Agent coordination and balance |
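Since the weights sum to 100%, the natural reading is that the composite score is a weighted sum of the five metric scores. A minimal sketch of that aggregation, assuming each metric is scored 0–100 (the weights come from the table above; all type and function names are illustrative, not SessionStellar's actual API):

```ts
// Weights taken from the metrics table above; every other name here is illustrative.
type MetricScores = {
  skillDiversity: number;       // 0-100
  decisionDepth: number;        // 0-100
  errorRecovery: number;        // 0-100
  compoundLearning: number;     // 0-100
  orchestrationMastery: number; // 0-100
};

const WEIGHTS: Record<keyof MetricScores, number> = {
  skillDiversity: 0.2,
  decisionDepth: 0.25,
  errorRecovery: 0.2,
  compoundLearning: 0.2,
  orchestrationMastery: 0.15,
};

// Composite score: weighted sum of the five per-metric scores.
function compositeScore(scores: MetricScores): number {
  return (Object.keys(WEIGHTS) as (keyof MetricScores)[]).reduce(
    (total, metric) => total + scores[metric] * WEIGHTS[metric],
    0,
  );
}

// Example: a session strong on reasoning but weak on error recovery.
console.log(
  compositeScore({
    skillDiversity: 80,
    decisionDepth: 90,
    errorRecovery: 55,
    compoundLearning: 70,
    orchestrationMastery: 60,
  }),
); // 72.5
```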
Scoring runs entirely offline; no data leaves your machine.
Search for SessionStellar in the Cursor marketplace, or run:

```
/add-plugin sessionstellar
```
Use the `/score-session` skill to analyze the current conversation:

```
/score-session
```
Use the `/score-file` skill with an exported session transcript:

```
/score-file path/to/session.md
```
The plugin exposes two MCP tools:
- `score_session`: Score session content passed as text
- `score_session_file`: Score a session file by path
These are available to the AI agent automatically.
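For orientation, here is a minimal sketch of calling these tools from a standalone client using the official MCP TypeScript SDK. The server launch command (`sessionstellar-mcp`) and the argument names (`content`, `path`) are assumptions; the actual schemas are defined by the plugin:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumption: the plugin's MCP server can be launched over stdio like this.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["sessionstellar-mcp"], // hypothetical server entry point
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Assumption: score_session takes the transcript as a `content` string.
const inline = await client.callTool({
  name: "score_session",
  arguments: { content: "User: ...\nAssistant: ..." },
});

// Assumption: score_session_file takes a `path` to a transcript on disk.
const fromFile = await client.callTool({
  name: "score_session_file",
  arguments: { path: "path/to/session.md" },
});

console.log(inline, fromFile);
```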
The plugin includes an optional rule (`orchestration-quality`) with patterns for producing higher-quality AI orchestration: tool diversity, explicit decision-making, error recovery strategies, and compound learning.
Enable it in Cursor settings to get coaching toward better orchestration patterns.
For scoring outside Cursor — in CI/CD pipelines, git hooks, or the terminal:
```
npx sessionstellar score session.md
```

See the CLI package for GitHub Actions and GitLab CI examples.
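As one example of CI usage, a gating script can shell out to the documented command and fail the build below a threshold. A sketch, assuming the CLI prints the composite score as a number in its output and the threshold of 70 is an arbitrary choice (check the CLI package for the actual output format):

```ts
import { execFileSync } from "node:child_process";

const THRESHOLD = 70; // hypothetical minimum acceptable score

// Run the documented CLI command against an exported transcript.
const output = execFileSync("npx", ["sessionstellar", "score", "session.md"], {
  encoding: "utf8",
});

// Assumption: the composite score appears as a number in the output.
const match = output.match(/(\d+(?:\.\d+)?)/);
const score = match ? Number(match[1]) : NaN;

if (!Number.isFinite(score) || score < THRESHOLD) {
  console.error(`Session score ${score} is below threshold ${THRESHOLD}`);
  process.exit(1);
}
console.log(`Session score ${score} meets threshold ${THRESHOLD}`);
```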
Track scores over time and benchmark against the community at sessionstellar.com.
MIT