
Commit ff84e86

Merge pull request #1 from adewale/claude/tuftean-marginalia-viz-TB0fw
Marginalia: figures, contracts, and journey-section rendering
2 parents 12f6f6a + 5cbc4f0

51 files changed

Lines changed: 6138 additions & 1017 deletions


.gitattributes

Lines changed: 7 additions & 0 deletions
```
# `src/asset_manifest.py` is generated by `scripts/fingerprint_assets.py`.
# On merge/rebase, keep our side of the conflict — the post-merge and
# post-rewrite hooks regenerate the file deterministically afterwards.
# This works once `scripts/install-git-hooks.sh` has been run locally,
# which registers `merge.ours.driver = true` and points `core.hooksPath`
# at `.githooks/`.
src/asset_manifest.py merge=ours
```
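The `merge=ours` attribute is inert until the driver is registered in the local git config. A minimal Python sketch of a pre-flight check — a hypothetical helper, not part of this commit — that a Makefile target could run before relying on the hooks:

```python
import subprocess


def merge_driver_registered() -> bool:
    """True once install-git-hooks.sh has set merge.ours.driver = true locally."""
    result = subprocess.run(
        ["git", "config", "--get", "merge.ours.driver"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0 and result.stdout.strip() == "true"
```

`git config --get` exits non-zero when the key is unset, so a fresh clone reports `False` until the install script has run.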

.githooks/post-merge

Lines changed: 9 additions & 0 deletions
```bash
#!/usr/bin/env bash
# Regenerate the asset manifest after a merge or pull so the digest
# reflects the merged tree, not whichever parent won the conflict.
set -e
cd "$(git rev-parse --show-toplevel)"
uv run python scripts/fingerprint_assets.py >/dev/null
if ! git diff --quiet src/asset_manifest.py public/_headers; then
  echo "post-merge: asset manifest regenerated; stage and amend if needed"
fi
```

.githooks/post-rewrite

Lines changed: 9 additions & 0 deletions
```bash
#!/usr/bin/env bash
# Regenerate the asset manifest after rebase/amend so the digest matches
# the rewritten history, not whichever commit happened to win each step.
set -e
cd "$(git rev-parse --show-toplevel)"
uv run python scripts/fingerprint_assets.py >/dev/null
if ! git diff --quiet src/asset_manifest.py public/_headers; then
  echo "post-rewrite: asset manifest regenerated; stage and amend if needed"
fi
```

.github/workflows/preview-viz.yml

Lines changed: 74 additions & 0 deletions
```yaml
name: Preview viz

on:
  push:
    branches:
      - claude/tuftean-marginalia-viz-TB0fw
  workflow_dispatch:

permissions:
  contents: read

concurrency:
  group: preview-viz
  cancel-in-progress: true

jobs:
  upload-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
        with:
          enable-cache: false
      - uses: actions/setup-python@v5
        with:
          python-version: '3.13'
      - uses: actions/setup-node@v4
        with:
          node-version: '22'
      - name: Install dependencies
        run: uv sync --all-groups
      - name: Build generated assets
        run: make build
      - name: Verify Cloudflare auth
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
        run: npx --yes wrangler whoami
      - name: Sync Python Workers vendor
        run: uv run pywrangler sync
      - name: Upload Cloudflare Preview
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
        run: |
          set -x
          uv run pywrangler preview \
            --name viz \
            --message "${{ github.sha }}" \
            --json
      - name: Smoke test deployed Preview
        run: |
          set -euo pipefail
          base="https://viz-pythonbyexample.adewale-883.workers.dev"
          for path in \
            "/" \
            "/examples/values" \
            "/prototyping/journey-figures-gestalt"; do
            url="${base}${path}"
            echo "Checking ${url}"
            curl --fail --show-error --silent --location --output /tmp/preview-smoke.html --write-out "%{http_code} %{url_effective}\n" "${url}"
            if grep -qiE "error code: 1101|PythonError|Traceback" /tmp/preview-smoke.html; then
              echo "Preview rendered an exception for ${url}"
              head -200 /tmp/preview-smoke.html
              exit 1
            fi
          done
      - name: Dump wrangler logs on failure
        if: failure()
        run: |
          find ~ /tmp /root -name "*.log" -path "*wrangler*" 2>/dev/null | while read f; do
            echo "=== $f ==="
            tail -300 "$f" || true
          done
```
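The smoke-test step greps the fetched page for Worker error signatures. The same check, mirrored as a Python sketch (the function name is illustrative; the workflow itself uses `grep`):

```python
import re

# Error signatures the workflow's grep scans for, case-insensitively.
ERROR_MARKERS = re.compile(r"error code: 1101|PythonError|Traceback", re.IGNORECASE)


def page_renders_cleanly(html: str) -> bool:
    """False if the fetched preview page contains a Worker error signature."""
    return ERROR_MARKERS.search(html) is None
```

A page that renders the example normally passes; a Cloudflare 1101 body or a leaked Python traceback fails the deploy.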

README.md

Lines changed: 6 additions & 0 deletions
@@ -63,6 +63,12 @@ Install dependencies with `uv`, then run:

```
python3 -m unittest discover -s tests -v
```

After cloning, install the local git hooks once so merges and rebases regenerate `src/asset_manifest.py` instead of producing conflicts:

```bash
./scripts/install-git-hooks.sh
```

Run locally on Workers:

docs/example-figure-rubric.md

Lines changed: 209 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,209 @@
1+
# Example figure rubric
2+
3+
Parallel to `docs/journey-visualisation-rubric.md`, but for the figures
4+
that attach to **example pages** (literate-program lessons), not journey
5+
sections. The journey rubric scores the figure beside a section heading;
6+
this one scores the figure that sits between prose and code inside a
7+
single cell of an example walkthrough.
8+
9+
The two rubrics share craft criteria (palette, primitives, emphasis
10+
scarcity) and diverge on content criteria, because the audience and
11+
task differ. A journey-section figure depicts the *conceptual shift*
12+
unifying multiple lessons; an example figure depicts the *single move*
13+
the surrounding cell discusses.
14+
15+
Score each example figure on a 10-point scale. Version 2 of this
16+
rubric, applied 2026-05; see `docs/rubric-saturation.md` for the
17+
reasoning that produced these upgrades. The previous criterion 2
18+
("match the running variables") and criterion 5 ("caption asserts")
19+
have been replaced; a new page-level coherence rubric joins the
20+
per-figure scoring.
21+
22+
## Content (5.5)

1. **Cell fidelity (0-1.5)** — the figure depicts the move the cell's prose discusses, not the example's title. If the example is "Mutability" but cell 1 is about immutable strings, a figure on cell 1 must depict immutability, not aliasing. Wrong cell, wrong figure.
2. **The figure earns its place (0-1.0)** — the figure surfaces something the prose cannot show in the same word count: a relationship, a before/after, a hidden mechanism, an invariant. A figure that merely restates the prose in diagram form earns 0.5; a figure that adds nothing the prose hasn't already said earns 0. Generic placeholders (`a`, `b`, `xs`) are fine; what matters is whether the figure carries pedagogical weight beyond the prose. (Replaces v1's "match the running variables", which punished honest reuse of library figures across multiple cells.)
3. **One conceptual move (0-1.0)** — exactly one shift, before-state to after-state, or one mechanism. Squint test: a reader should identify the figure's single point in two seconds.
4. **Mechanism over metaphor (0-1.0)** — the figure shows the actual machinery (the cell, the binding, the dispatch, the iterator), not a cartoon of it. Knuth's rule.
5. **Caption quality (0-1.0)** — the `figcaption` declares what is true, in the section summary's voice; it does not narrate what the figure does. "Two names share one mutable list — appending through one name changes the object visible through both." earns 1.0. "The figure shows two names pointing at one list." earns 0 (narration, not assertion). Mixed-voice captions earn 0.5. The SVG itself contains no prose duplicating the caption; only diagrammatic labels (`stdout`, `iter()`, panel tags, type signatures). See pipeline invariant 2 in the spec.
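Criterion 5's narration test can be approximated mechanically. A rough heuristic sketch — the opener list is illustrative, and the real caption contract lives in `tests/test_marginalia_geometry.py`:

```python
# Phrases that narrate the figure instead of asserting what is true.
# Illustrative list, not the project's actual contract.
NARRATION_OPENERS = ("the figure shows", "this shows", "this figure", "see how")


def caption_is_narration(caption: str) -> bool:
    """Flag captions that narrate the figure rather than assert a fact."""
    return caption.strip().lower().startswith(NARRATION_OPENERS)
```

The rubric's 1.0 example passes; its 0 example is flagged.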
## Craft (3.0)

6. **Grammar conformance (0-1.0)** — composed exclusively from `Canvas` primitives in `src/marginalia_grammar.py`. No bespoke SVG, no new colours, no stroke weights outside the locked set.
7. **Emphasis scarcity (0-1.0)** — at most one accent mark per figure. The accent goes on the single element the cell prose names (the live mutation, the captured cell, the dispatch arrow). Three accent marks competing for attention is no emphasis at all.
8. **Restraint (0-1.0)** — no decoration that does not carry information. No drop shadows, gradients, ornamental rules, non-orthogonal tilts, or marks placed for "balance".
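Emphasis scarcity is checkable by counting accent-coloured paints in the rendered SVG. A sketch, assuming the accent colour is exposed as a single constant — the hex value below is a placeholder, not the project's actual `EMPHASIS` value from `src/marginalia_grammar.py`:

```python
import re

EMPHASIS = "#c0392b"  # placeholder accent colour, assumed for this sketch


def accent_mark_count(svg: str) -> int:
    """Count fill/stroke attributes painted in the accent colour."""
    return len(re.findall(rf'(?:fill|stroke)="{re.escape(EMPHASIS)}"', svg))
```

A figure satisfies criterion 7 when the count is at most 1.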
## Context (1.5)

9. **Banner-row fit (0-1.0)** — the figure's intrinsic width sits comfortably inside `.cell-banner`'s auto-fit grid. Intrinsic widths beyond ~360 px clamp to the column without growing past it; much narrower viewBoxes leave whitespace either side of the centred figure. Aim for an intrinsic viewBox between 200 and 360 px wide.
10. **Pairs with the surrounding cell (0-0.5)** — the banner sits AFTER the named cell, so the eye reads cell-prose → cell-code → banner. The figure should summarise the move the surrounding cell just made, not stand alone as a generic illustration of the example title.
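Criterion 9's width window reduces to a numeric check on the viewBox. A minimal sketch, assuming the standard four-number `viewBox` attribute:

```python
def banner_row_fit(viewbox: str) -> bool:
    """True when the intrinsic viewBox width sits in the 200-360 px window."""
    _min_x, _min_y, width, _height = (float(n) for n in viewbox.split())
    return 200 <= width <= 360
```

Widths outside the window are not automatic failures under the rubric, but they cost the point.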
## Topic gates (cell-shape specific)

- **Binding cells** (assignments, `=`) — show the name-arrow with the type tag and the resulting value. The canonical Python picture.
- **Mutation cells** — show before-state and after-state with the same object identity, OR rebinding with a new identity. The difference is the lesson.
- **Iteration cells** — show the iterator advance: a caret moving, or `iter()`+`next()` producing values one at a time.
- **Function-definition cells** — show the signature with parameter separators (`/`, `*`) explicit when relevant, or the caller→body→return shape.
- **Class cells** — show state and methods bundled, or the instance→class→type triangle, or the MRO chain. Pick one, not all.
- **Exception cells** — show the lanes (try/except/else/finally) with a single traced path, or the exception-cause arrow (`__cause__` vs `__context__`).
- **Async cells** — show two parallel lanes (loop · coroutine) with await handoffs.
## Release gates outside the score

These are not scored; a figure that violates any of them does not ship. The geometry, palette, font, stroke, emphasis, registration, and caption gates are now enforced by automated contracts in `tests/test_marginalia_geometry.py` (Contracts 1-9). CI fails before the figure can merge.

- **One figure per cell, at most.** Two figures on one cell signal the cell is doing two things; split the cell instead.
- **figcaption present and declarative.** Captions in the form "Two names share one mutable list — appending through one name changes the object visible through both." Not "this shows X" or "see how Y".
- **figcaption agrees with the cell's prose.** The cell's prose paragraph in the markdown and the figure's figcaption assert the same thing in different words. If they disagree, one is wrong.
- **figcaption is unique across slugs.** A reused figure can serve multiple lessons (`iter-protocol` attaches to four), but each lesson must frame the figure in its own voice. Verbatim caption reuse copies the lesson voice the same way verbatim code reuse copies the example. *Contract 5b — FigureCaptionContract.*
- **No clipping.** Every `<rect>`, `<text>`, `<line>`, `<circle>`, `<path>` lives inside the padded viewBox. Text width counts: a long mono string in a too-narrow box clips even if the geometry looks right at first glance. *Contract 1.*
- **No element collision.** Text that overlaps a rect must be fully contained by that rect. A type tag sitting on top of the box above it (the `/examples/values` STR-LIST-DICT bug) is the canonical violation. *Contract 2.*
- **No text-text overlap.** Two text elements may not occupy overlapping bounding boxes (the `itertools-chain` "ITER A" / "1 · 2" collision in a too-narrow box). *Contract 3.*
- **Palette discipline.** Only `INK`, `INK_SOFT`, `EMPHASIS`, `SOFT_FILL`, or `"none"` may appear as fill or stroke. *Contract 5a — FigureGrammarContract.*
- **Font discipline.** Only `FONT_SERIF`, `FONT_MONO`, `FONT_SANS` may appear as `font-family`. *Contract 5b.*
- **Stroke-weight discipline.** Only `W_HAIRLINE`, `W_STROKE`, `W_EMPHASIS`, `W_GHOST`. *Contract 5c.*
- **Emphasis scarcity, enforced.** At most ONE accent mark (`EMPHASIS`-coloured arrowhead, caret, dot, or rect stroke) per figure. Was a soft v1 criterion; now hard. *Contract 9.*
- **Banner-fit, enforced.** Every figure's intrinsic width (Canvas.w + 2 · PAD_X) must fit `.cell-banner--1`'s 440px max ceiling. *Contract 8.*
- **Twin consistency.** When two figures depict parallel concepts (`kw-only-separator` ↔ `positional-only-separator`, `class-triangle` ↔ `metaclass-triangle`), their metrics must match coordinate-for-coordinate where the concepts coincide. A fix to one is a fix to both, in the same commit.
- **Geometric termination.** Lines that connect to dots, circles, or rects must terminate AT the element's edge — not 1-2px short (looks disconnected) and not inside the glyph (looks broken). When in doubt, end the line at the centre and let the dot draw on top.
- **Mono character alignment.** When a vertical divider marks a position in mono text, its x must match the character's actual centre. JetBrains Mono advances ~6px per char at fs=10. A visually-similar `82` and `75` are not interchangeable.
- **Pipeline invariants** (see spec) hold: SVG renders at intrinsic size; SVG contains no prose duplicating the caption.
- **Gestalt = production.** Review pages under `/prototyping/*` must render the same paint code as the production attachments. Parallel `e_*` paint functions for "gestalt versions" drift from production and hide bugs; we eliminated 76 of them in May 2026.
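The text-width half of the no-clipping gate can be approximated with the mono advance quoted above (~6 px per character at fs=10, i.e. 0.6 × font-size). A sketch under that assumption, not the actual Contract 1 implementation:

```python
def mono_text_fits(x: float, text: str, font_size: float, viewbox_width: float) -> bool:
    """Estimate mono text width as 0.6 * font_size per character and
    require the run to end inside the viewBox."""
    estimated_width = len(text) * font_size * 0.6
    return x + estimated_width <= viewbox_width
```

This catches the "long mono string in a too-narrow box" failure even when every element origin technically sits inside the viewBox.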
## Page-level coherence (per slug, multi-figure)

A separate 0-1.0 score applied to slugs whose `ATTACHMENTS[slug]` list contains more than one figure. Multi-figure pages must form a coherent set, not three angles on the same point.

- **1.0** — figures show distinct aspects of the lesson in a natural reading order (intro picture, mid-walkthrough mechanism, summary). Each banner earns its placement.
- **0.5** — figures are individually fine but redundant; one would do the work of two. The page reads as cluttered.
- **0** — figures contradict each other, or one figure is on the wrong cell, or the page has three figures where one would teach better.

For single-figure slugs (today, all 109 of them), page coherence is trivially 1.0 and does not enter the per-figure score. As multi-figure attachments grow, this criterion will become the discriminator that prevents the "more figures is better" failure mode.
## Quality bands

- **9.0-10.0** — depicts the cell's move in two seconds; the figcaption could only describe this figure; reads pleasantly on return visits.
- **8.0-8.9** — depicts the right move but uses generic placeholders where specific names would land harder, or the caption hedges, or one secondary mark steals attention from the primary one.
- **7.0-7.9** — depicts the cell but loses something in scope: shows the example title rather than the specific cell's move, or a topic gate is not satisfied.
- **below 7.0** — wrong cell, wrong shape, multiple primary ideas competing, or accent marks scattered rather than scarce. Redesign before promoting.
## Project gate

A cell figure may ship to production once it scores **≥ 8.5**. The example's figure average should exceed **8.7** so a multi-figure example reads as a coherent set rather than independently authored diagrams.

The score is a guide, not a substitute for reading the cell beside its surrounding prose.
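The gate reduces to a two-part predicate. A sketch using the thresholds above (the function name is illustrative):

```python
def may_ship(figure_score: float, example_figure_scores: list[float]) -> bool:
    """A figure ships at >= 8.5, provided the whole example's average exceeds 8.7."""
    average = sum(example_figure_scores) / len(example_figure_scores)
    return figure_score >= 8.5 and average > 8.7
```

Note the asymmetry: a single 8.5 figure can ship, but only if its siblings pull the example average above 8.7.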

docs/example-graph-score-impact.md

Lines changed: 0 additions & 32 deletions
This file was deleted.
