feat(scorers): artifact scorer suite — bibliography, governance receipt, ARCANA essay #8
Merged
hummbl-dev merged 4 commits into main on Apr 16, 2026
Conversation
The only registered runner (windows-desktop-1) is offline. Arbiter is pure Python — ubuntu-latest works fine.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds the Arbiter knowledge artifact scoring infrastructure:

- ArtifactScorer (ABC) + ArtifactScorerRegistry + _score_from_findings() in artifact_scorer.py — penalty-based weighted scoring, CRITICAL/HIGH/MEDIUM/LOW severity grades, A–F letter grade scale
- BibliographyScorer (5 dims): DOI coverage, tier distribution, tag density, completeness, source density
- GovernanceReceiptScorer (5 dims): completeness, chain_of_custody, timestamp_validity, evidence_ratio, schema_compliance — EU AI Act Article 12 + NIST AI RMF GOVERN 1.2 alignment
- ArcanaEssayScorer (5 dims): empirical_grounding, citation_density, structural_integrity, source_diversity, on_topic_ratio — adversarial gate blocks echo-chamber synthesis (grade F) from entering the CLP ledger; rules ARC101–502 including ARC304 (zero-citation block)
- All three scorers registered in DEFAULT_REGISTRY via scorers/__init__.py
- 51 tests green (35 existing + 16 new for ArcanaEssayScorer)

Closes the ARCANA → Arbiter gate → CLP ingest flywheel.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
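The penalty engine described above can be sketched as follows. This is a hypothetical reconstruction, not the code from artifact_scorer.py: the severity penalties (CRITICAL=-25, HIGH=-15, MEDIUM=-7, LOW=-3) come from the PR summary below, but the function signatures, the `Finding` shape, the grade cutoffs, and the rule id "ARC201" are assumptions.

```python
from dataclasses import dataclass

# Severity penalties from the PR summary; everything else is an assumption.
PENALTIES = {"CRITICAL": -25, "HIGH": -15, "MEDIUM": -7, "LOW": -3}


@dataclass
class Finding:
    rule: str      # e.g. "ARC304" (zero-citation block)
    severity: str  # CRITICAL / HIGH / MEDIUM / LOW


def score_from_findings(findings):
    """Start at 100, subtract a penalty per finding, clamp to [0, 100]."""
    score = 100.0
    for f in findings:
        score += PENALTIES.get(f.severity, 0)
    return max(0.0, min(100.0, score))


def letter_grade(score):
    """Map a 0-100 score onto an A-F scale (cutoffs are assumed)."""
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return grade
    return "F"


findings = [Finding("ARC304", "CRITICAL"), Finding("ARC201", "MEDIUM")]
s = score_from_findings(findings)  # 100 - 25 - 7 = 68.0
print(s, letter_grade(s))          # 68.0 D
```

A grade-F artifact would then be rejected at the gate before CLP ingest, matching the adversarial-gate behaviour the commit describes.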
…lock self-grade

Self-grade jumps 85.1 → 90.6 (B → A), clearing the >=90 CI gate.

- Extract per-dimension helpers from BibliographyScorer.score (CC 29 → ~6)
- Auto-fix 7 ruff findings (F401 unused imports, F541, F841, asdict)
- Manually rename `l` → `link` (E741) in citation parsing
- Drop dead `datetime.date.today().year` line

Lint: 88.0 → 100.0
Complexity: 72.6 → 76.6
Overall: 85.1 → 90.6 (A)

138/138 tests pass. No behavioral change — all 51 artifact_scorer tests green.

Follow-up (not blocking): arcana_essay_scorer.score (CC 47) and governance_receipt_scorer.score (CC 46) still flag complexity findings. The same extract-helpers pattern applies if/when the margin tightens.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… diff gate

Diff score 79.7 → 97.3 (clears --fail-under 80). Full repo 90.6 → 97.7 (still clears --fail-under 90).

- ArcanaEssayScorer.score: CC 47 → 11. Extracted: _check_structure, _check_citation_density, _check_empirical_grounding, _check_source_diversity, _check_on_topic.
- GovernanceReceiptScorer.score: CC 46 → small. Extracted: _check_completeness, _check_chain_of_custody, _check_primary_timestamp, _check_chain_timestamps, _check_evidence, _check_schema.

No behavioral change — all 138 tests pass (51 artifact_scorer tests cover both). Same extract-per-dimension pattern used for bibliography_scorer in b4ba862.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
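The extract-per-dimension pattern the commits above apply can be sketched like this. This is a hypothetical illustration, not the real ArcanaEssayScorer: the helper names `_check_structure`, `_check_citation_density`, and `_check_empirical_grounding` mirror the commit message, but the essay dict shape, finding strings, and the placeholder rule id "ARC1xx" are assumptions. The point is the structure: `score()` collapses from one large method to a flat loop over per-dimension helpers, which is how the cyclomatic complexity drops (CC 47 → 11) with no behavioral change.

```python
# Hypothetical sketch of the extract-per-dimension refactor; helper names
# mirror the commit message, everything else is assumed for illustration.
class EssayScorerSketch:
    def score(self, essay: dict) -> list:
        findings = []
        # Each helper owns one dimension's rules; score() just aggregates.
        for check in (
            self._check_structure,
            self._check_citation_density,
            self._check_empirical_grounding,
        ):
            findings.extend(check(essay))
        return findings

    def _check_structure(self, essay):
        return [] if essay.get("sections") else ["ARC1xx: missing sections"]

    def _check_citation_density(self, essay):
        # ARC304-style zero-citation block (rule id from the PR summary).
        return [] if essay.get("citations") else ["ARC304: zero citations"]

    def _check_empirical_grounding(self, essay):
        return [] if essay.get("sources") else ["ARC1xx: no sources"]


essay = {"sections": ["intro"], "citations": [], "sources": ["doi:10/x"]}
print(EssayScorerSketch().score(essay))  # ['ARC304: zero citations']
```

Because each dimension is isolated, a complexity linter sees several small functions instead of one branching monolith, while the aggregate findings list (and therefore every existing test) stays identical.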
This was referenced Apr 16, 2026

hummbl-dev pushed a commit that referenced this pull request on Apr 16, 2026
The artifact scorers (BibliographyScorer, GovernanceReceiptScorer, ArcanaEssayScorer) shipped in PR #8 but had no CLI surface — they were only callable from Python. This wires them into the CLI:

    arbiter score-artifact --list-types           # show registered scorers
    arbiter score-artifact <file.json>            # auto-detect type from JSON
    arbiter score-artifact <file> --type X        # explicit type
    arbiter score-artifact <file> --json          # machine output
    arbiter score-artifact <file> --fail-under N  # CI gate

Type resolution: explicit --type wins; otherwise reads the top-level "artifact_type" field from the JSON.

Closes #10 item 5 partially:

- registry exposed via CLI, all 3 scorers usable end-to-end
- explicit type or in-file artifact_type works
- audit-fleet auto-detection (find *.bib files in repos, etc.) is a separate follow-up — left open in #10

Tests: 13 new in tests/test_score_artifact_cli.py covering --list-types, explicit + inferred type, JSON output, all error paths, --fail-under.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
hummbl-dev added a commit that referenced this pull request on Apr 16, 2026
Summary
Adds the full Arbiter artifact scorer suite — a penalty-based, multi-dimensional quality scoring system for knowledge artifacts before CLP ingest.
- artifact_scorer.py — base framework: ArtifactScorer (ABC), ArtifactScorerRegistry, DEFAULT_REGISTRY, _score_from_findings() penalty engine (CRITICAL=-25, HIGH=-15, MEDIUM=-7, LOW=-3), A–F grade scale
- BibliographyScorer — 5 dimensions: DOI coverage, tier distribution, tag density, completeness, source density
- GovernanceReceiptScorer — 5 dimensions: completeness, chain_of_custody, timestamp_validity, evidence_ratio, schema_compliance; EU AI Act Article 12 + NIST AI RMF GOVERN 1.2 aligned
- ArcanaEssayScorer — adversarial gate for ARCANA synthesis essays; 5 dimensions: empirical_grounding (0.30), citation_density (0.25), structural_integrity (0.20), source_diversity (0.15), on_topic_ratio (0.10); rules ARC101–502; blocks grade F from CLP ingest
- scorers/__init__.py — registers all 3 scorers into DEFAULT_REGISTRY

Test plan
test_artifact_scorer.py — 51 tests covering all 3 scorers + base framework

🤖 Generated with Claude Code
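The ArcanaEssayScorer dimension weights in the summary sum to 1.0, which suggests a weighted average over per-dimension scores. The sketch below shows that aggregation under that assumption; the weights are from the summary, but the `weighted_overall` function and the example per-dimension scores are hypothetical.

```python
# Weights from the PR summary (they sum to 1.0); the aggregation function
# itself is an assumption, not the code from arcana_essay_scorer.py.
WEIGHTS = {
    "empirical_grounding": 0.30,
    "citation_density": 0.25,
    "structural_integrity": 0.20,
    "source_diversity": 0.15,
    "on_topic_ratio": 0.10,
}


def weighted_overall(dim_scores: dict) -> float:
    """Weighted average of per-dimension 0-100 scores; missing dims score 0."""
    return sum(WEIGHTS[d] * dim_scores.get(d, 0.0) for d in WEIGHTS)


dims = {
    "empirical_grounding": 90,
    "citation_density": 80,
    "structural_integrity": 100,
    "source_diversity": 70,
    "on_topic_ratio": 100,
}
print(weighted_overall(dims))  # 0.3*90 + 0.25*80 + 0.2*100 + 0.15*70 + 0.1*100 = 87.5
```

Under this scheme an essay with zero citations and no empirical grounding loses over half its weighted score outright, which is consistent with the adversarial gate sending echo-chamber synthesis to grade F.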