diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl
index 59db220..39e3e2d 100644
--- a/.beads/issues.jsonl
+++ b/.beads/issues.jsonl
@@ -129,6 +129,10 @@
{"id":"docs-z6h.5","title":"Add small mobile breakpoint (\u003c 480px) and pagination stacking","description":"## Files\n- documentation/styles/globals.css (modify)\n\n## What to do\nAdd a small mobile breakpoint for very narrow screens (iPhone SE at 375px, older Android at 360px). Also stack pagination cards vertically on mobile.\n\nAdd a new media query AFTER the existing mobile block (after line 1034):\n\n```css\n/* Small mobile: extra narrow screens */\n@media (max-width: 480px) {\n .sidebar {\n width: 85vw;\n min-width: 85vw;\n }\n\n .layout-header-inner {\n padding: 0 var(--space-3);\n }\n\n .layout-content {\n padding: var(--space-4) var(--space-3);\n }\n}\n```\n\nAlso add pagination stacking inside the existing `@media (max-width: 768px)` block (before line 1034):\n\n```css\n/* Stack pagination on mobile */\n.pagination {\n flex-direction: column;\n}\n\n.pagination-link {\n width: 100%;\n}\n\n.pagination-link-next {\n text-align: left;\n}\n```\n\nContext:\n- The sidebar is fixed at 250px width (line 108). On a 375px screen, that leaves only 125px visible content — too cramped. Using 85vw gives ~319px sidebar with some visible background.\n- Header inner padding is `0 var(--space-8)` (32px, line 195). On tiny screens, reduce to 12px.\n- Pagination uses `display: flex` with `justify-content: space-between` (lines 527-534). On mobile, prev/next cards should stack vertically.\n\n## Test\n```bash\ncd documentation \u0026\u0026 node -e \"\nconst fs = require(\\\"fs\\\");\nconst css = fs.readFileSync(\\\"styles/globals.css\\\", \\\"utf8\\\");\nconst hasSmallMobile = css.includes(\\\"max-width: 480px\\\");\nconst has85vw = /\\\\.sidebar[^}]*width:\\\\s*85vw/.test(css);\nconst mobileBlock = css.split(\\\"@media (max-width: 768px)\\\").slice(1).join(\\\"\\\");\nconst hasPaginationStack = /\\\\.pagination[^}]*flex-direction:\\\\s*column/.test(mobileBlock);\nif (!hasSmallMobile) { console.error(\\\"FAIL: no 480px breakpoint\\\"); process.exit(1); }\nif (!has85vw) { console.error(\\\"FAIL: no 85vw sidebar\\\"); process.exit(1); }\nif (!hasPaginationStack) { console.error(\\\"FAIL: no pagination stacking\\\"); process.exit(1); }\nconsole.log(\\\"PASS\\\");\n\"\n```\n\n## Don't\n- Do not modify the existing breakpoints\n- Do not change desktop styles\n- Do not modify any JS files\n- Do not change the sidebar width CSS variable — override the width directly on .sidebar","status":"closed","priority":2,"issue_type":"task","assignee":"sharfy-test.climateai.org","owner":"sharfy-test.climateai.org","created_at":"2026-02-16T23:34:17.12088+13:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-02-16T23:39:44.212351+13:00","closed_at":"2026-02-16T23:39:44.212351+13:00","close_reason":"b2ca35e Add small mobile breakpoint (\u003c480px) and pagination stacking","labels":["scope:small"],"dependencies":[{"issue_id":"docs-z6h.5","depends_on_id":"docs-z6h","type":"parent-child","created_at":"2026-02-16T23:34:17.122179+13:00","created_by":"sharfy-test.climateai.org"}]}
{"id":"hypercerts-atproto-documentation-5sb","title":"Epic: Fix last-updated date rendering bug — DOM injection → React component","description":"The LastUpdated component uses DOM injection (useEffect + appendChild) which races with Markdoc content rendering, causing the date to appear above the h1 on some pages. Fix: rewrite as a proper React component rendered inside \u003carticle\u003e after {children}. Changes already written on branch fix/last-updated-bug — needs commit, push, and PR. Success: last-updated date always appears at the bottom of the article content, never above the h1.","status":"open","priority":1,"issue_type":"epic","owner":"sharfy-test.climateai.org","created_at":"2026-03-06T18:09:25.532372+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-06T18:09:25.532372+08:00","labels":["scope:trivial"]}
{"id":"hypercerts-atproto-documentation-5sb.1","title":"Commit, push, and PR the last-updated bug fix (branch fix/last-updated-bug)","description":"## Files\n- components/LastUpdated.js (modified — already on branch)\n- components/Layout.js (modified — already on branch)\n\n## What to do\nThe changes are already written on branch fix/last-updated-bug. Just:\n1. Verify npm run build succeeds\n2. git add components/LastUpdated.js components/Layout.js\n3. git commit -m 'fix: rewrite LastUpdated as React component to fix rendering position'\n4. git push -u origin fix/last-updated-bug\n5. gh pr create targeting main\n\nThe changes: LastUpdated.js was rewritten from DOM injection (useEffect + appendChild) to a proper React component that returns \u003cp className='last-updated'\u003e. Layout.js moved \u003cLastUpdated /\u003e from before \u003carticle\u003e to inside \u003carticle\u003e after {children}.\n\n## Don't\n- Modify the changes — they're already correct\n- Touch lib/lastUpdated.json (it has regenerated dates, that's expected)","acceptance_criteria":"1. PR created targeting main. 2. npm run build succeeds. 3. LastUpdated.js does not import useEffect. 4. Layout.js renders \u003cLastUpdated /\u003e inside \u003carticle\u003e after {children}.","status":"open","priority":1,"issue_type":"task","owner":"sharfy-test.climateai.org","estimated_minutes":10,"created_at":"2026-03-06T18:09:35.388648+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-06T18:09:35.388648+08:00","labels":["scope:trivial"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-5sb.1","depends_on_id":"hypercerts-atproto-documentation-5sb","type":"parent-child","created_at":"2026-03-06T18:09:35.389834+08:00","created_by":"sharfy-test.climateai.org"}]}
+{"id":"hypercerts-atproto-documentation-7fu","title":"Epic: Full-text search — index page content and search by keyword","description":"## Problem\nThe search bar (Cmd+K) only matches against page titles from the hardcoded navigation tree. It cannot find keywords in page body content, headings, or descriptions. For example, searching 'OAuth' returns nothing because no page is titled 'OAuth'.\n\n## Goals\n1. Build a search index at build time that includes page titles, descriptions, headings (h2/h3), and body text from all .md files\n2. Use a lightweight client-side search library (FlexSearch) for fast full-text keyword search\n3. Show search results with matched context snippets so users can see where the keyword appears\n4. Keep the existing UX (Cmd+K modal, keyboard navigation, quick links) intact\n\n## Key files\n- components/SearchDialog.js — current search UI with fuzzy title matching\n- lib/navigation.js — current data source (title + path only)\n- styles/globals.css — search dialog styles (lines 1537-1665)\n- pages/**/*.md — content to be indexed\n\n## Architecture\n- Build script generates public/search-index.json from all .md files\n- SearchDialog loads index on first open, initializes FlexSearch\n- Results show title, section, and a snippet of matched body text\n- Field boosting: title matches rank higher than body matches\n\n## Scope\nSearch dialog replacement + build script. No changes to page rendering or navigation.","status":"open","priority":1,"issue_type":"epic","owner":"sharfy-test.climateai.org","created_at":"2026-03-09T16:50:30.512727+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-09T16:50:30.512727+08:00","labels":["scope:medium"]}
+{"id":"hypercerts-atproto-documentation-7fu.1","title":"Build-time search index generator — extract titles, headings, descriptions, and body text from all .md files","description":"## Files\n- lib/generate-search-index.js (create)\n- package.json (modify — add to build/dev scripts)\n\n## What to do\nCreate a Node.js build script that reads all .md files under pages/, extracts searchable content, and writes a JSON index to public/search-index.json.\n\n### Script behavior:\n1. Walk pages/ recursively, find all .md files (same pattern as lib/generate-last-updated.js)\n2. For each file, extract:\n - `path`: route path (e.g. \"/getting-started/quickstart\") — same logic as generate-last-updated.js\n - `title`: from YAML frontmatter `title` field (between `---` delimiters at top of file)\n - `description`: from YAML frontmatter `description` field\n - `headings`: array of h2/h3 text (lines starting with `## ` or `### `)\n - `body`: full plain text of the markdown with frontmatter, code blocks, Markdoc tags (`{% ... %}`), markdown syntax (`#`, `*`, `[`, etc.), and HTML tags stripped out. Collapse multiple whitespace/newlines into single spaces. Trim to max 5000 chars per page.\n3. Look up the section for each path from the navigation tree. Import `flattenNavigation` from lib/navigation.js — but since this is a CommonJS script and navigation.js uses ESM exports, instead parse the navigation structure manually or use a simple path-to-section mapping. The simplest approach: hardcode a function that maps path prefixes to sections:\n - /getting-started/* → 'Get Started'\n - /core-concepts/* → 'Core Concepts'\n - /tools/* → 'Tools'\n - /architecture/* → 'Architecture'\n - /lexicons/* → 'Reference'\n - /reference/* → 'Reference'\n - /ecosystem/* → 'Ecosystem \u0026 Vision'\n - /roadmap → 'Reference'\n - / → 'Get Started'\n4. Write output as JSON array to public/search-index.json:\n```json\n[\n {\n \"path\": \"/getting-started/quickstart\",\n \"title\": \"Quickstart\",\n \"description\": \"Create your first hypercert...\",\n \"section\": \"Get Started\",\n \"headings\": [\"Install dependencies\", \"Authenticate\", ...],\n \"body\": \"This guide walks through creating...\"\n },\n ...\n]\n```\n\n### Frontmatter parsing:\nSimple regex — no YAML library needed:\n```js\nconst fmMatch = content.match(/^---\\n([\\s\\S]*?)\\n---/);\n// then extract title: and description: lines\n```\n\n### Markdoc/markdown stripping:\nRemove these patterns from body text:\n- Frontmatter block (`---...---`)\n- Code blocks (`\\`\\`\\`....\\`\\`\\``)\n- Markdoc tags (`{% callout ... %}`, `{% /callout %}`, `{% table %}`, etc.)\n- Heading markers (`# `, `## `, `### `)\n- Markdown links: `[text](url)` → `text`\n- Markdown bold/italic: `**text**` → `text`, `*text*` → `text`\n- Inline code backticks\n- HTML tags\n- Collapse whitespace\n\n### package.json changes:\nUpdate both scripts to run the index generator before next:\n```json\n\"dev\": \"node lib/generate-search-index.js \u0026\u0026 node lib/generate-last-updated.js \u0026\u0026 next dev --webpack\",\n\"build\": \"node lib/generate-search-index.js \u0026\u0026 node lib/generate-last-updated.js \u0026\u0026 next build --webpack\"\n```\n\n## Don't\n- Use any npm dependencies — this is pure Node.js (fs, path, regex)\n- Include files outside pages/ (no AGENTS.md, no .beads/, no README)\n- Include the pages/index.md home page body (it's mostly card markup) — include title only\n- Modify any existing files except package.json","acceptance_criteria":"1. Running `node lib/generate-search-index.js` creates public/search-index.json\n2. The JSON file is a valid JSON array with one entry per .md page (~43 entries)\n3. Each entry has path, title, section, headings (array), body (string), and description (string or empty)\n4. Body text does not contain markdown syntax, Markdoc tags, code blocks, or frontmatter\n5. Body text is max 5000 chars per page\n6. `pnpm build` succeeds and includes the search index generation\n7. The search index file is under 500KB total","status":"closed","priority":1,"issue_type":"task","assignee":"sharfy-test.climateai.org","owner":"sharfy-test.climateai.org","estimated_minutes":45,"created_at":"2026-03-09T16:51:18.334027+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-09T17:05:01.575079+08:00","closed_at":"2026-03-09T17:05:01.575079+08:00","close_reason":"6417d68 Build-time search index generator implemented - extracts titles, headings, descriptions, and body text from all .md files","labels":["scope:small"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-7fu.1","depends_on_id":"hypercerts-atproto-documentation-7fu","type":"parent-child","created_at":"2026-03-09T16:51:18.336538+08:00","created_by":"sharfy-test.climateai.org"}]}
+{"id":"hypercerts-atproto-documentation-7fu.2","title":"Replace SearchDialog with FlexSearch-powered full-text search","description":"## Files\n- components/SearchDialog.js (modify — major rewrite)\n- package.json (modify — add flexsearch dependency)\n\n## What to do\nReplace the current title-only fuzzy search with FlexSearch-powered full-text search that queries the search index generated by task 7fu.1.\n\n### 1. Add FlexSearch dependency\n```bash\npnpm add flexsearch\n```\n\n### 2. Rewrite SearchDialog.js\n\n#### Index loading:\n- On first dialog open, fetch `/search-index.json` (it's in public/)\n- Store the raw data in a module-level variable (not state) so it persists across opens\n- Create a FlexSearch Document index with these fields:\n```js\nimport { Document } from 'flexsearch';\n\nconst index = new Document({\n document: {\n id: 'path',\n index: ['title', 'description', 'headings', 'body'],\n },\n tokenize: 'forward',\n});\n```\n- Add all entries from the JSON to the index\n- Show a loading state ('Loading...') while fetching on first open\n\n#### Search:\n- On query change (debounce 150ms), search the index:\n```js\nconst results = index.search(query, { limit: 20, enrich: true });\n```\n- FlexSearch Document returns results grouped by field. Merge and deduplicate by path, prioritizing: title matches first, then description, then headings, then body.\n- For each result, look up the full entry from the raw data to get title, path, section, description, body.\n\n#### Snippet generation:\n- For body matches, generate a context snippet: find the first occurrence of the query in the body text, extract ~120 chars around it, and wrap the matched term in a `\u003cmark\u003e` tag.\n- Helper function:\n```js\nfunction getSnippet(body, query, contextChars = 60) {\n const lower = body.toLowerCase();\n const idx = lower.indexOf(query.toLowerCase());\n if (idx === -1) return '';\n const start = Math.max(0, idx - contextChars);\n const end = Math.min(body.length, idx + query.length + contextChars);\n let snippet = '';\n if (start \u003e 0) snippet += '...';\n snippet += body.slice(start, end);\n if (end \u003c body.length) snippet += '...';\n return snippet;\n}\n```\n\n#### Result display:\nEach result item should show:\n- `.search-result-title` — page title (existing class)\n- `.search-result-snippet` — the context snippet with matched term highlighted (NEW class, see CSS task)\n- `.search-result-path` — the URL path (existing class)\n\nKeep the existing structure:\n- Empty state: Quick Links (unchanged)\n- Results: grouped by section (unchanged grouping logic)\n- No results: 'No results' message + Quick Links (unchanged)\n- Keyboard navigation: Arrow Up/Down, Enter, Escape (unchanged)\n\n#### Keep from current implementation:\n- `QUICK_LINK_PATHS` array and quick links logic\n- Keyboard navigation (arrow keys, enter, escape)\n- `useRouter` for navigation\n- Focus management (focus input on open)\n- Overlay click-to-close\n- All existing class names for styling compatibility\n\n### 3. Remove old code:\n- Remove the `fuzzyMatch` function\n- Remove the `flattenNavigation` import (no longer needed)\n\n## Don't\n- Change the search dialog's visual layout or structure — only the data source and matching logic\n- Remove keyboard navigation\n- Remove quick links\n- Change any CSS class names (add new ones, don't rename existing)\n- Import from lib/navigation.js — the search index JSON replaces it for search","acceptance_criteria":"1. Searching 'OAuth' returns results (pages that mention OAuth in their body/headings)\n2. Searching 'createRecord' returns results (appears in code examples in body text)\n3. Searching 'PDS' returns multiple results with context snippets showing where PDS appears\n4. Title matches appear before body-only matches in results\n5. Results are grouped by section (Get Started, Core Concepts, etc.)\n6. Keyboard navigation (arrow keys, enter, escape) still works\n7. Quick Links still appear when search is empty\n8. 'No results' message appears for nonsense queries\n9. pnpm build succeeds\n10. No console errors when opening/using search","status":"closed","priority":1,"issue_type":"task","assignee":"sharfy-test.climateai.org","owner":"sharfy-test.climateai.org","estimated_minutes":60,"created_at":"2026-03-09T16:51:46.000565+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-09T17:09:51.234016+08:00","closed_at":"2026-03-09T17:09:51.234016+08:00","close_reason":"7cb8d0e Replace SearchDialog with FlexSearch-powered full-text search","labels":["scope:small"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-7fu.2","depends_on_id":"hypercerts-atproto-documentation-7fu","type":"parent-child","created_at":"2026-03-09T16:51:46.001904+08:00","created_by":"sharfy-test.climateai.org"},{"issue_id":"hypercerts-atproto-documentation-7fu.2","depends_on_id":"hypercerts-atproto-documentation-7fu.1","type":"blocks","created_at":"2026-03-09T16:51:46.003258+08:00","created_by":"sharfy-test.climateai.org"}]}
+{"id":"hypercerts-atproto-documentation-7fu.3","title":"Add CSS for search result snippets and mark highlighting","description":"## Files\n- styles/globals.css (modify — add rules after line ~1665, the end of the SearchDialog section)\n\n## What to do\nAdd CSS rules for the new search result snippet element and the `\u003cmark\u003e` highlight used for matched terms.\n\n### Add these rules:\n```css\n.search-result-snippet {\n font-size: 12px;\n color: var(--color-text-secondary);\n line-height: 1.5;\n margin-top: 2px;\n overflow: hidden;\n display: -webkit-box;\n -webkit-line-clamp: 2;\n -webkit-box-orient: vertical;\n}\n\n.search-result-snippet mark {\n background: oklch(0.85 0.15 85);\n color: inherit;\n border-radius: 2px;\n padding: 0 2px;\n}\n\nhtml.dark .search-result-snippet mark {\n background: oklch(0.45 0.12 85);\n color: var(--color-text-primary);\n}\n```\n\n### Context:\n- `.search-result-snippet` sits between `.search-result-title` and `.search-result-path` in each result item\n- The `\u003cmark\u003e` tag wraps the matched search term within the snippet\n- The snippet is clamped to 2 lines to keep results compact\n- Light mode uses a warm yellow highlight; dark mode uses a muted amber\n\n## Don't\n- Modify any existing CSS rules\n- Change existing class names\n- Add rules outside the SearchDialog CSS section","acceptance_criteria":"1. .search-result-snippet rule exists in globals.css\n2. .search-result-snippet mark rule exists with background highlight color\n3. html.dark .search-result-snippet mark rule exists for dark mode\n4. Snippet text is clamped to 2 lines via -webkit-line-clamp\n5. pnpm build succeeds","status":"closed","priority":1,"issue_type":"task","assignee":"sharfy-test.climateai.org","owner":"sharfy-test.climateai.org","estimated_minutes":15,"created_at":"2026-03-09T16:51:58.988935+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-09T17:11:30.040681+08:00","closed_at":"2026-03-09T17:11:30.040681+08:00","close_reason":"9f49190 Add CSS for search result snippets and mark highlighting","labels":["scope:trivial"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-7fu.3","depends_on_id":"hypercerts-atproto-documentation-7fu","type":"parent-child","created_at":"2026-03-09T16:51:58.990183+08:00","created_by":"sharfy-test.climateai.org"},{"issue_id":"hypercerts-atproto-documentation-7fu.3","depends_on_id":"hypercerts-atproto-documentation-7fu.2","type":"blocks","created_at":"2026-03-09T16:51:58.991534+08:00","created_by":"sharfy-test.climateai.org"}]}
{"id":"hypercerts-atproto-documentation-buj","title":"Epic: Fix remaining factual errors — old NSIDs, PDS validation claim, certified.ink URL","description":"17 instances of old NSIDs (org.hypercerts.claim.attachment/evaluation/measurement should be org.hypercerts.context.*) across 5 pages, plus 1 incorrect PDS validation claim in data-flow-and-lifecycle.md line 38, plus 1 certified.ink → certified.app in cel-work-scopes.md line 196. Success: zero instances of old context-namespace NSIDs in non-lexicon pages, no claims that PDS validates against schemas, certified.app used everywhere.","status":"open","priority":1,"issue_type":"epic","owner":"sharfy-test.climateai.org","created_at":"2026-03-06T18:07:54.799315+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-06T18:07:54.799315+08:00","labels":["scope:medium"]}
{"id":"hypercerts-atproto-documentation-buj.1","title":"Fix old NSIDs in data-flow-and-lifecycle.md (3 instances) and PDS validation error","description":"## Files\n- pages/architecture/data-flow-and-lifecycle.md (modify)\n\n## What to do\nReplace 3 old NSIDs with correct context-namespace NSIDs:\n- Line 63: `org.hypercerts.claim.attachment` → `org.hypercerts.context.attachment`\n- Line 67: `org.hypercerts.claim.measurement` → `org.hypercerts.context.measurement`\n- Line 101: `org.hypercerts.claim.evaluation` → `org.hypercerts.context.evaluation`\n\nFix PDS validation error:\n- Line 38: Change 'The PDS validates the record against the lexicon schema.' to 'The PDS stores the record in the contributor's repository.' (ATProto PDS instances are schema-agnostic — they do NOT validate records against lexicon schemas. Validation happens at the indexer/app view layer.)\n\n## Don't\n- Change any other content on this page\n- Add explanations about why PDS doesn't validate — just fix the sentence\n- Touch the ASCII diagrams","acceptance_criteria":"1. Zero instances of org.hypercerts.claim.attachment, org.hypercerts.claim.measurement, or org.hypercerts.claim.evaluation in the file. 2. Line 38 no longer claims PDS validates against lexicon schemas. 3. The three correct NSIDs (org.hypercerts.context.attachment, org.hypercerts.context.measurement, org.hypercerts.context.evaluation) appear in the file. 4. npm run build succeeds.","status":"open","priority":1,"issue_type":"task","owner":"sharfy-test.climateai.org","estimated_minutes":15,"created_at":"2026-03-06T18:08:07.273568+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-06T18:08:07.273568+08:00","labels":["scope:trivial"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-buj.1","depends_on_id":"hypercerts-atproto-documentation-buj","type":"parent-child","created_at":"2026-03-06T18:08:07.274562+08:00","created_by":"sharfy-test.climateai.org"}]}
{"id":"hypercerts-atproto-documentation-buj.2","title":"Fix old NSIDs in hypercerts-core-data-model.md (3 instances)","description":"## Files\n- pages/core-concepts/hypercerts-core-data-model.md (modify)\n\n## What to do\nReplace 3 old NSIDs in the table at lines 52-54:\n- Line 52: `org.hypercerts.claim.attachment` → `org.hypercerts.context.attachment`\n- Line 53: `org.hypercerts.claim.measurement` → `org.hypercerts.context.measurement`\n- Line 54: `org.hypercerts.claim.evaluation` → `org.hypercerts.context.evaluation`\n\nThese are in the 'Records that attach to a hypercert' table, in the Lexicon column.\n\n## Don't\n- Change any other content on this page\n- Modify the table structure or other columns\n- Touch the contributor model section (it was recently rewritten and is correct)","acceptance_criteria":"1. Zero instances of org.hypercerts.claim.attachment, org.hypercerts.claim.measurement, or org.hypercerts.claim.evaluation in the file. 2. The three correct NSIDs (org.hypercerts.context.attachment, org.hypercerts.context.measurement, org.hypercerts.context.evaluation) appear in the Lexicon column. 3. npm run build succeeds.","status":"open","priority":1,"issue_type":"task","owner":"sharfy-test.climateai.org","estimated_minutes":10,"created_at":"2026-03-06T18:08:15.674869+08:00","created_by":"sharfy-test.climateai.org","updated_at":"2026-03-06T18:08:15.674869+08:00","labels":["scope:trivial"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-buj.2","depends_on_id":"hypercerts-atproto-documentation-buj","type":"parent-child","created_at":"2026-03-06T18:08:15.677058+08:00","created_by":"sharfy-test.climateai.org"}]}
@@ -163,3 +167,4 @@
{"id":"hypercerts-atproto-documentation-w96.5","title":"Fix wrong lexicon NSIDs in data-flow-and-lifecycle page","description":"## Files\n- pages/architecture/data-flow-and-lifecycle.md (modify)\n\n## What to do\nFix two incorrect NSIDs:\n\n### Fix 1: Line 59\nCurrently says: `org.hypercerts.claim.contributionDetails`\nChange to: `org.hypercerts.claim.contribution`\n\n### Fix 2: Line 79\nCurrently says: `org.hypercerts.claim.collection`\nChange to: `org.hypercerts.collection`\n\nThe actual lexicon IDs are:\n- org.hypercerts.claim.contribution (file: contribution.json)\n- org.hypercerts.collection (file: collection.json, no .claim. segment)\n\n## Don't\n- Do not change any surrounding text or page structure\n- Only change the two NSID strings","acceptance_criteria":"1. Line 59 (or equivalent) contains `org.hypercerts.claim.contribution` (not contributionDetails)\n2. Line 79 (or equivalent) contains `org.hypercerts.collection` (not org.hypercerts.claim.collection)\n3. No other content is changed\n4. File parses as valid Markdown","status":"closed","priority":1,"issue_type":"task","assignee":"karma.gainforest.id","owner":"karma.gainforest.id","estimated_minutes":10,"created_at":"2026-03-05T19:58:14.602853024+06:00","created_by":"karma.gainforest.id","updated_at":"2026-03-05T20:04:27.406244913+06:00","closed_at":"2026-03-05T20:04:27.406244913+06:00","close_reason":"91f3ecd Fix wrong lexicon NSIDs in data-flow-and-lifecycle page","labels":["scope:trivial"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-w96.5","depends_on_id":"hypercerts-atproto-documentation-w96","type":"parent-child","created_at":"2026-03-05T19:58:14.60526901+06:00","created_by":"karma.gainforest.id"}]}
{"id":"hypercerts-atproto-documentation-w96.6","title":"Fix wrong collection NSID in roadmap page","description":"## Files\n- pages/roadmap.md (modify)\n\n## What to do\nFix incorrect collection NSID on line 69.\n\nCurrently says: `org.hypercerts.claim.collection`\nChange to: `org.hypercerts.collection`\n\nThe actual lexicon ID is org.hypercerts.collection (no .claim. segment).\n\n## Don't\n- Do not change any other content on the roadmap page\n- Only change the one NSID string","acceptance_criteria":"1. Line 69 (or the collection row in the table) shows `org.hypercerts.collection` (not org.hypercerts.claim.collection)\n2. No other content is changed\n3. File parses as valid Markdown","status":"closed","priority":1,"issue_type":"task","assignee":"karma.gainforest.id","owner":"karma.gainforest.id","estimated_minutes":5,"created_at":"2026-03-05T19:58:19.75432948+06:00","created_by":"karma.gainforest.id","updated_at":"2026-03-05T20:04:11.56608607+06:00","closed_at":"2026-03-05T20:04:11.56608607+06:00","close_reason":"3352ec1 Fix wrong collection NSID in roadmap page","labels":["scope:trivial"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-w96.6","depends_on_id":"hypercerts-atproto-documentation-w96","type":"parent-child","created_at":"2026-03-05T19:58:19.756668539+06:00","created_by":"karma.gainforest.id"}]}
{"id":"hypercerts-atproto-documentation-w96.7","title":"Rewrite contributor model description and add missing record types in core data model page","description":"## Files\n- pages/core-concepts/hypercerts-core-data-model.md (modify)\n\n## What to do\nThe core data model page has structural inaccuracies about how contributors work. Fix the contributor model description and the \"How records connect\" tree.\n\n### 1. Fix the \"Additional details\" section (lines 26-33)\nThe current description says ContributorInformation and ContributionDetails are \"separate records with their own AT-URI\" that \"can be referenced from the activity claim.\" This is misleading.\n\nRewrite to explain the actual model: The activity claim has a `contributors` array where each entry is a contributor object containing:\n- `contributorIdentity`: either an inline identity string (DID) via `#contributorIdentity`, OR a strong reference to an `org.hypercerts.claim.contributorInformation` record\n- `contributionWeight`: optional relative weight string\n- `contributionDetails`: either an inline role string via `#contributorRole`, OR a strong reference to an `org.hypercerts.claim.contribution` record\n\nEmphasize the dual inline/reference pattern — simple cases use inline strings, richer profiles use separate records.\n\nUpdate the table to reflect the correct lexicon name `org.hypercerts.claim.contribution` (not contributionDetails).\n\n### 2. Fix the \"How records connect\" tree (lines 70-82)\nThe tree currently shows ContributorInformation and ContributionDetails as separate child records. Update it to show them as embedded within contributor objects, reflecting the actual structure. Example:\n\n```text\nActivity Claim (the core record)\n├── contributors[0]\n│ ├── contributorIdentity: Alice (inline DID or ref to ContributorInformation)\n│ ├── contributionWeight: \"1\"\n│ └── contributionDetails: Lead author (inline role or ref to Contribution)\n├── contributors[1]\n│ ├── contributorIdentity: → ContributorInformation record (Bob)\n│ └── contributionDetails: → Contribution record (Technical reviewer, Jan-Mar)\n├── Attachment: GitHub repository link\n├── Measurement: 12 pages written\n├── Measurement: 8,500 words\n└── Evaluation: \"High-quality documentation\" (by Carol)\n```\n\n## Don't\n- Do not change the \"The core record: activity claim\" section (lines 12-23) — the four dimensions table is correct\n- Do not change the \"Grouping hypercerts\" section\n- Do not change the \"Mutability\" or \"What happens next\" sections\n- Do not add new record types (rights, acknowledgement, funding receipt) — this task is fixes only\n- Do not add code examples or API usage — this is a conceptual page\n- Keep the writing style consistent with the rest of the page (concise, factual, no marketing language)","acceptance_criteria":"1. The \"Additional details\" section accurately describes the dual inline/reference pattern for contributors\n2. The contributor table shows `org.hypercerts.claim.contribution` (not contributionDetails)\n3. The \"How records connect\" tree shows contributors as embedded objects with inline/reference options\n4. The four dimensions table in \"The core record\" section is unchanged\n5. The \"Grouping hypercerts\", \"Mutability\", and \"What happens next\" sections are unchanged\n6. No new record types are added to the page\n7. File parses as valid Markdown","status":"closed","priority":2,"issue_type":"task","assignee":"karma.gainforest.id","owner":"karma.gainforest.id","estimated_minutes":45,"created_at":"2026-03-05T19:58:48.627566144+06:00","created_by":"karma.gainforest.id","updated_at":"2026-03-05T20:05:12.194006503+06:00","closed_at":"2026-03-05T20:05:12.194006503+06:00","close_reason":"e55324a Rewrite contributor model description and fix records tree","labels":["scope:small"],"dependencies":[{"issue_id":"hypercerts-atproto-documentation-w96.7","depends_on_id":"hypercerts-atproto-documentation-w96","type":"parent-child","created_at":"2026-03-05T19:58:48.630865221+06:00","created_by":"karma.gainforest.id"},{"issue_id":"hypercerts-atproto-documentation-w96.7","depends_on_id":"hypercerts-atproto-documentation-w96.4","type":"blocks","created_at":"2026-03-05T19:58:48.635719127+06:00","created_by":"karma.gainforest.id"}]}
+{"id":"hypercerts-atproto-documentation-woj","title":"Epic: Resolve CodeRabbit review for PR #78","description":"Resolve all open CodeRabbit inline review comments on PR #78. 3 comments across 1 file (pages/tools/scaffold.md).","status":"closed","priority":1,"issue_type":"epic","owner":"kzoepa@gmail.com","created_at":"2026-03-05T15:57:13.054348747+06:00","created_by":"kzoeps","updated_at":"2026-03-06T18:07:23.476296+08:00","closed_at":"2026-03-06T18:07:23.476296+08:00","close_reason":"4e902e4 Both children closed — CodeRabbit review items resolved","labels":["needs-integration-review","scope:small"]}
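The index-generator issue above (hypercerts-atproto-documentation-7fu.1) prescribes regex-based frontmatter parsing with no YAML library. A minimal standalone sketch of that approach follows; the `parseFrontmatter` name and its field-lookup helper are illustrative assumptions, not code from the branch:

```javascript
// Regex-based frontmatter extraction as described in issue 7fu.1:
// capture the leading --- block, then read the `title:` and `description:` lines.
function parseFrontmatter(content) {
  const fmMatch = content.match(/^---\n([\s\S]*?)\n---/);
  if (!fmMatch) return { title: '', description: '' };
  const block = fmMatch[1];
  // Pull a single `name: value` line out of the captured block
  const field = (name) => {
    const m = block.match(new RegExp('^' + name + ':\\s*(.*)$', 'm'));
    return m ? m[1].trim() : '';
  };
  return { title: field('title'), description: field('description') };
}

const md = '---\ntitle: Quickstart\ndescription: Create your first hypercert\n---\n\n# Quickstart\n';
console.log(parseFrontmatter(md).title); // → Quickstart
```

This handles only flat `key: value` frontmatter, which matches the issue's "no YAML library needed" constraint; nested YAML would require a real parser.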
diff --git a/components/SearchDialog.js b/components/SearchDialog.js
index cabe80c..b451eff 100644
--- a/components/SearchDialog.js
+++ b/components/SearchDialog.js
@@ -1,16 +1,24 @@
import { useState, useEffect, useRef, useCallback } from 'react';
import { useRouter } from 'next/router';
-import { flattenNavigation } from '../lib/navigation';
-
-// Fuzzy match: all characters of query appear in text in order (case-insensitive)
-function fuzzyMatch(query, text) {
- const q = query.toLowerCase();
- const t = text.toLowerCase();
- let qi = 0;
- for (let ti = 0; ti < t.length && qi < q.length; ti++) {
- if (t[ti] === q[qi]) qi++;
- }
- return qi === q.length;
+import FlexSearch from 'flexsearch';
+
+// Module-level variables to persist across dialog opens
+let searchData = null;
+let searchIndex = null;
+let isLoading = false;
+
+// Generate a context snippet with the matched term highlighted
+function getSnippet(body, query, contextChars = 60) {
+ const lower = body.toLowerCase();
+ const idx = lower.indexOf(query.toLowerCase());
+ if (idx === -1) return '';
+ const start = Math.max(0, idx - contextChars);
+ const end = Math.min(body.length, idx + query.length + contextChars);
+ let snippet = '';
+ if (start > 0) snippet += '...';
+ snippet += body.slice(start, end);
+ if (end < body.length) snippet += '...';
+ return snippet;
}
// Paths for the curated quick links shown in the empty state
@@ -26,31 +34,119 @@ const QUICK_LINK_PATHS = [
export function SearchDialog({ isOpen, onClose }) {
const [query, setQuery] = useState('');
const [selectedIndex, setSelectedIndex] = useState(-1);
+ const [loading, setLoading] = useState(false);
+ const [results, setResults] = useState([]);
const inputRef = useRef(null);
const itemRefs = useRef([]);
const router = useRouter();
- const allPages = flattenNavigation();
+ const debounceTimerRef = useRef(null);
- // Quick links for empty state
- const quickLinks = QUICK_LINK_PATHS
- .map(p => allPages.find(page => page.path === p))
- .filter(Boolean);
-
- // Filter and sort results using fuzzy matching
- const trimmedQuery = query.trim();
- const results = trimmedQuery.length > 0
- ? allPages
- .filter(p => fuzzyMatch(trimmedQuery, p.title))
- .sort((a, b) => {
- const aExact = a.title.toLowerCase().includes(trimmedQuery.toLowerCase());
- const bExact = b.title.toLowerCase().includes(trimmedQuery.toLowerCase());
- if (aExact && !bExact) return -1;
- if (!aExact && bExact) return 1;
- return 0;
+ // Load search index on first open
+ useEffect(() => {
+ if (isOpen && !searchData && !isLoading) {
+ isLoading = true;
+ setLoading(true);
+ fetch('/search-index.json')
+ .then(res => res.json())
+ .then(data => {
+ searchData = data;
+ // Create FlexSearch index
+ searchIndex = new FlexSearch.Document({
+ document: {
+ id: 'path',
+ index: ['title', 'description', 'headings', 'body'],
+ },
+ tokenize: 'forward',
+ });
+ // Add all entries to the index — join headings array into string for FlexSearch
+ data.forEach(entry => {
+ searchIndex.add({
+ ...entry,
+ headings: Array.isArray(entry.headings) ? entry.headings.join(' ') : (entry.headings || ''),
+ });
+ });
+ setLoading(false);
+ isLoading = false;
})
+ .catch(err => {
+ console.error('Failed to load search index:', err);
+ setLoading(false);
+ isLoading = false;
+ });
+ }
+ }, [isOpen]);
+
+ // Quick links for empty state
+ const quickLinks = searchData
+ ? QUICK_LINK_PATHS
+ .map(p => searchData.find(page => page.path === p))
+ .filter(Boolean)
: [];
+ // Perform search with debouncing
+ useEffect(() => {
+ const trimmedQuery = query.trim();
+
+ if (debounceTimerRef.current) {
+ clearTimeout(debounceTimerRef.current);
+ }
+
+ if (!trimmedQuery || !searchIndex || !searchData) {
+ setResults([]);
+ return;
+ }
+
+ debounceTimerRef.current = setTimeout(() => {
+ // Search the index
+ const searchResults = searchIndex.search(trimmedQuery, { limit: 20, enrich: true });
+
+ // Merge and deduplicate results by path, prioritizing field order
+ const pathMap = new Map();
+ const fieldPriority = { title: 1, description: 2, headings: 3, body: 4 };
+
+ searchResults.forEach(fieldResult => {
+ const field = fieldResult.field;
+ const priority = fieldPriority[field] || 999;
+
+ fieldResult.result.forEach(item => {
+ const path = typeof item === 'object' ? item.id : item;
+ if (!pathMap.has(path) || pathMap.get(path).priority > priority) {
+ const entry = searchData.find(e => e.path === path);
+ if (entry) {
+ pathMap.set(path, {
+ ...entry,
+ priority,
+ matchedField: field,
+ });
+ }
+ }
+ });
+ });
+
+ // Convert to array and sort by priority
+ const mergedResults = Array.from(pathMap.values()).sort((a, b) => a.priority - b.priority);
+
+ // Generate snippets for body matches
+ mergedResults.forEach(result => {
+ if (result.matchedField === 'body' && result.body) {
+ result.snippet = getSnippet(result.body, trimmedQuery);
+ } else if (result.description) {
+ result.snippet = result.description;
+ }
+ });
+
+ setResults(mergedResults);
+ }, 150);
+
+ return () => {
+ if (debounceTimerRef.current) {
+ clearTimeout(debounceTimerRef.current);
+ }
+ };
+ }, [query]);
+
// Group results by section
+ const trimmedQuery = query.trim();
const groupedResults = results.reduce((acc, page) => {
const section = page.section || 'General';
if (!acc[section]) acc[section] = [];
@@ -107,6 +203,27 @@ export function SearchDialog({ isOpen, onClose }) {
// Build a flat index counter for refs across groups
let refIndex = 0;
+ // Helper to highlight matched term in snippet
+ const highlightSnippet = (snippet, query) => {
+ if (!snippet || !query) return snippet;
+ const lower = snippet.toLowerCase();
+ const queryLower = query.toLowerCase();
+ const idx = lower.indexOf(queryLower);
+ if (idx === -1) return snippet;
+
+ const before = snippet.slice(0, idx);
+ const match = snippet.slice(idx, idx + query.length);
+ const after = snippet.slice(idx + query.length);
+
+ return (
+ <>
+ {before}
+ <mark>{match}</mark>
+ {after}
+ </>
+ );
+ };
+
return (
e.stopPropagation()}>
@@ -137,7 +254,10 @@ export function SearchDialog({ isOpen, onClose }) {
ESC
- {!hasQuery ? (
+ {loading ? (
+ /* Loading state */
+
Loading...
+ ) : !hasQuery ? (
/* Empty state: show Quick Links */
-
@@ -178,6 +298,11 @@ export function SearchDialog({ isOpen, onClose }) {
type="button"
>
{page.title}
+ {page.snippet && (
+
+ {highlightSnippet(page.snippet, trimmedQuery)}
+
+ )}
{page.path}
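For reviewers: the merge step in `SearchDialog.js` deduplicates FlexSearch's per-field result groups by path, keeping the highest-priority field match. A minimal standalone sketch of that logic follows — the `fieldResults` shape and sample pages are illustrative assumptions (mimicking `Document.search(..., { enrich: true })` output), not the real index:

```javascript
// Sketch: deduplicate per-field search results by path, keeping the
// lowest-numbered (highest-priority) field — mirrors SearchDialog.js.
const fieldPriority = { title: 1, description: 2, headings: 3, body: 4 };

function mergeByPriority(fieldResults, pages) {
  const pathMap = new Map();
  for (const { field, result } of fieldResults) {
    const priority = fieldPriority[field] || 999;
    for (const item of result) {
      // With enrich: true, items are { id, doc }; otherwise bare ids.
      const path = typeof item === "object" ? item.id : item;
      if (!pathMap.has(path) || pathMap.get(path).priority > priority) {
        const entry = pages.find((e) => e.path === path);
        if (entry) pathMap.set(path, { ...entry, priority, matchedField: field });
      }
    }
  }
  // Sort so title matches surface before description/headings/body matches.
  return [...pathMap.values()].sort((a, b) => a.priority - b.priority);
}

// A page matched in both "body" and "title" surfaces as a title match.
const pages = [{ path: "/a", title: "Alpha" }, { path: "/b", title: "Beta" }];
const merged = mergeByPriority(
  [
    { field: "body", result: ["/a", "/b"] },
    { field: "title", result: ["/a"] },
  ],
  pages
);
// merged[0] is /a (matchedField "title"), merged[1] is /b (matchedField "body")
```

The priority map means a page is never listed twice even when several indexed fields match the query.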
diff --git a/lib/generate-search-index.js b/lib/generate-search-index.js
new file mode 100644
index 0000000..f78fcf1
--- /dev/null
+++ b/lib/generate-search-index.js
@@ -0,0 +1,139 @@
+const { readdirSync, statSync, readFileSync, writeFileSync } = require("fs");
+const { join, relative } = require("path");
+
+const PAGES_DIR = join(__dirname, "..", "pages");
+const OUTPUT = join(__dirname, "..", "public", "search-index.json");
+const MAX_BODY_LENGTH = 5000;
+
+function walkDir(dir) {
+ const results = [];
+ for (const entry of readdirSync(dir)) {
+ const full = join(dir, entry);
+ if (statSync(full).isDirectory()) {
+ results.push(...walkDir(full));
+ } else if (full.endsWith(".md")) {
+ results.push(full);
+ }
+ }
+ return results;
+}
+
+function extractFrontmatter(content) {
+ const fmMatch = content.match(/^---\n([\s\S]*?)\n---/);
+ if (!fmMatch) return { title: "", description: "" };
+
+ const frontmatter = fmMatch[1];
+ const titleMatch = frontmatter.match(/^title:\s*(.+)$/m);
+ const descMatch = frontmatter.match(/^description:\s*(.+)$/m);
+
+ return {
+ title: titleMatch ? titleMatch[1].trim() : "",
+ description: descMatch ? descMatch[1].trim() : "",
+ };
+}
+
+function extractHeadings(content) {
+ const headings = [];
+ const lines = content.split("\n");
+
+ for (const line of lines) {
+ // Match h2 (## ) or h3 (### )
+ const h2Match = line.match(/^##\s+(.+)$/);
+ const h3Match = line.match(/^###\s+(.+)$/);
+
+ if (h2Match) {
+ headings.push(h2Match[1].trim());
+ } else if (h3Match) {
+ headings.push(h3Match[1].trim());
+ }
+ }
+
+ return headings;
+}
+
+function stripMarkdown(content) {
+ let text = content;
+
+ // Remove frontmatter
+ text = text.replace(/^---\n[\s\S]*?\n---\n?/, "");
+
+ // Remove code blocks
+ text = text.replace(/```[\s\S]*?```/g, "");
+
+ // Remove Markdoc tags ({% ... %} and {% /... %})
+ text = text.replace(/\{%[\s\S]*?%\}/g, "");
+
+ // Remove HTML tags
+ text = text.replace(/<[^>]+>/g, "");
+
+ // Remove heading markers
+ text = text.replace(/^#{1,6}\s+/gm, "");
+
+ // Remove markdown links [text](url) -> text
+ text = text.replace(/\[([^\]]+)\]\([^\)]+\)/g, "$1");
+
+ // Remove bold/italic markers
+ text = text.replace(/\*\*([^*]+)\*\*/g, "$1");
+ text = text.replace(/\*([^*]+)\*/g, "$1");
+ text = text.replace(/__([^_]+)__/g, "$1");
+ text = text.replace(/_([^_]+)_/g, "$1");
+
+ // Remove inline code backticks
+ text = text.replace(/`([^`]+)`/g, "$1");
+
+ // Collapse whitespace
+ text = text.replace(/\s+/g, " ");
+
+ return text.trim();
+}
+
+function getSection(path) {
+ if (path === "/") return "Get Started";
+ if (path.startsWith("/getting-started")) return "Get Started";
+ if (path.startsWith("/core-concepts")) return "Core Concepts";
+ if (path.startsWith("/tools")) return "Tools";
+ if (path.startsWith("/architecture")) return "Architecture";
+ if (path.startsWith("/lexicons")) return "Reference";
+ if (path.startsWith("/reference")) return "Reference";
+ if (path.startsWith("/ecosystem")) return "Ecosystem & Vision";
+ if (path === "/roadmap") return "Reference";
+ return "Other";
+}
+
+const files = walkDir(PAGES_DIR);
+const index = [];
+
+for (const file of files) {
+ const content = readFileSync(file, "utf-8");
+ const rel = "/" + relative(PAGES_DIR, file).replace(/\.md$/, "");
+ const path = rel === "/index" ? "/" : rel;
+
+ const { title, description } = extractFrontmatter(content);
+ const headings = extractHeadings(content);
+ const section = getSection(path);
+
+ // For the home page, only include title (body is mostly card markup)
+ let body = "";
+ if (path !== "/") {
+ body = stripMarkdown(content);
+ if (body.length > MAX_BODY_LENGTH) {
+ body = body.substring(0, MAX_BODY_LENGTH);
+ }
+ }
+
+ index.push({
+ path,
+ title,
+ description: description || "",
+ section,
+ headings,
+ body,
+ });
+}
+
+const json = JSON.stringify(index, null, 2) + "\n";
+writeFileSync(OUTPUT, json);
+console.log(
+  `Generated search index for ${index.length} pages (${(
+    Buffer.byteLength(json) / 1024
+  ).toFixed(1)} KB)`
+);
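The stripping pipeline in `generate-search-index.js` is a chain of regex passes. Here is a hedged, self-contained sketch of the same passes on a tiny sample document (the sample input is illustrative, not taken from the repo):

```javascript
// Sketch of stripMarkdown from generate-search-index.js: remove frontmatter,
// code fences, Markdoc tags, HTML, and markdown syntax, then collapse spaces.
function stripMarkdown(content) {
  return content
    .replace(/^---\n[\s\S]*?\n---\n?/, "")   // YAML frontmatter
    .replace(/```[\s\S]*?```/g, "")          // fenced code blocks
    .replace(/\{%[\s\S]*?%\}/g, "")          // Markdoc tags
    .replace(/<[^>]+>/g, "")                 // HTML tags
    .replace(/^#{1,6}\s+/gm, "")             // heading markers
    .replace(/\[([^\]]+)\]\([^)]+\)/g, "$1") // links -> link text
    .replace(/\*\*([^*]+)\*\*/g, "$1")       // bold
    .replace(/\*([^*]+)\*/g, "$1")           // italic
    .replace(/`([^`]+)`/g, "$1")             // inline code
    .replace(/\s+/g, " ")                    // collapse whitespace
    .trim();
}

const sample = "---\ntitle: Demo\n---\n# Demo\n\nSee [docs](/docs) for **bold** text.\n";
const stripped = stripMarkdown(sample);
// -> "Demo See docs for bold text."
```

Order matters here: frontmatter and code fences must go before the inline passes, or stray `---`/backticks inside them would confuse the later regexes.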
diff --git a/lib/lastUpdated.json b/lib/lastUpdated.json
index 917fade..1f4f60f 100644
--- a/lib/lastUpdated.json
+++ b/lib/lastUpdated.json
@@ -28,7 +28,7 @@
"/lexicons/hypercerts-lexicons/activity-claim": "2026-03-05T19:47:55+08:00",
"/lexicons/hypercerts-lexicons/attachment": "2026-03-05T19:50:36+08:00",
"/lexicons/hypercerts-lexicons/collection": "2026-03-05T20:17:36+06:00",
- "/lexicons/hypercerts-lexicons/contribution": "2026-03-09T15:46:09+08:00",
+ "/lexicons/hypercerts-lexicons/contribution": "2026-03-09T15:56:42+08:00",
"/lexicons/hypercerts-lexicons/evaluation": "2026-03-05T19:51:39+08:00",
"/lexicons/hypercerts-lexicons/funding-receipt": "2026-03-05T19:47:55+08:00",
"/lexicons/hypercerts-lexicons/index": "2026-03-05T20:17:36+06:00",
@@ -38,7 +38,7 @@
"/reference/faq": "2026-03-06T22:56:46+08:00",
"/reference/glossary": "2026-03-05T20:14:26+08:00",
"/roadmap": "2026-03-05T20:17:36+06:00",
- "/tools/hyperboards": "2026-03-05T17:11:22+06:00",
+ "/tools/hyperboards": "2026-03-09T15:56:42+08:00",
"/tools/hypercerts-cli": "2026-03-05T12:30:00+06:00",
"/tools/hyperindex": "2026-02-20T19:42:03+08:00",
"/tools/scaffold": "2026-03-05T15:58:36+06:00"
diff --git a/package.json b/package.json
index 16f3f6d..b9f1e07 100644
--- a/package.json
+++ b/package.json
@@ -4,14 +4,15 @@
"private": true,
"description": "Hypercerts Protocol Documentation",
"scripts": {
- "dev": "node lib/generate-last-updated.js && next dev --webpack",
- "build": "node lib/generate-last-updated.js && next build --webpack",
+ "dev": "node lib/generate-search-index.js && node lib/generate-last-updated.js && next dev --webpack",
+ "build": "node lib/generate-search-index.js && node lib/generate-last-updated.js && next build --webpack",
"start": "next start"
},
"dependencies": {
"@markdoc/markdoc": "^0.5.4",
"@markdoc/next.js": "^0.5.0",
"@vercel/analytics": "^1.6.1",
+ "flexsearch": "^0.8.212",
"next": "^16.1.6",
"prism-react-renderer": "^2.4.1",
"react": "^19.2.4",
diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml
index d8105d7..c803376 100644
--- a/pnpm-lock.yaml
+++ b/pnpm-lock.yaml
@@ -17,6 +17,9 @@ importers:
'@vercel/analytics':
specifier: ^1.6.1
version: 1.6.1(next@16.1.6(react-dom@19.2.4(react@19.2.4))(react@19.2.4))(react@19.2.4)
+ flexsearch:
+ specifier: ^0.8.212
+ version: 0.8.212
next:
specifier: ^16.1.6
version: 16.1.6(react-dom@19.2.4(react@19.2.4))(react@19.2.4)
@@ -305,6 +308,9 @@ packages:
resolution: {integrity: sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==}
engines: {node: '>=8'}
+ flexsearch@0.8.212:
+ resolution: {integrity: sha512-wSyJr1GUWoOOIISRu+X2IXiOcVfg9qqBRyCPRUdLMIGJqPzMo+jMRlvE83t14v1j0dRMEaBbER/adQjp6Du2pw==}
+
js-yaml@4.1.1:
resolution: {integrity: sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==}
hasBin: true
@@ -567,6 +573,8 @@ snapshots:
detect-libc@2.1.2:
optional: true
+ flexsearch@0.8.212: {}
+
js-yaml@4.1.1:
dependencies:
argparse: 2.0.1
diff --git a/public/search-index.json b/public/search-index.json
new file mode 100644
index 0000000..4a22a3b
--- /dev/null
+++ b/public/search-index.json
@@ -0,0 +1,601 @@
+[
+ {
+ "path": "/architecture/account-and-identity",
+ "title": "Account & Identity Setup",
+ "description": "Understand your identity, configure custom domains, and manage credentials.",
+ "section": "Architecture",
+ "headings": [
+ "Create an account",
+ "Why Certified?",
+ "Your DID",
+ "Handles (your public username) and domain verification",
+ "Organization accounts",
+ "Authentication",
+ "OAuth (for applications)",
+ "OAuth (for ePDS)",
+ "App passwords (for scripts and CLI)",
+ "The app.certified namespace",
+ "Next steps"
+ ],
+ "body": "Account & Identity Setup If you followed the Quickstart, you already have an account. This page explains what that account gives you and how to configure it — custom domain handles for organizations, app passwords for scripts, shared repositories for teams, and account recovery. --- Create an account Sign up at certified.app. You'll get: - Low-friction sign-in — Sign in with just your email and a code. No passwords or protocol knowledge required. - A DID — Your permanent, portable identifier (e.g., did:plc:z72i7hdynmk6r22z27h6tvur). It never changes, even if you switch servers or handles. - A Repository — Your own collection on the certified PDS, where your hypercerts, evaluations, and other records are stored. You own this data and you can migrate it between servers. - An (embedded) wallet — Add your existing EVM wallet or get a new one. - Ecosystem access — Your identity works across every Hypercerts application. Why Certified? The Hypercerts Protocol is built on AT Protocol — the same decentralized data layer that powers Bluesky. But most Hypercerts users are not Bluesky users. They are researchers, land stewards, open-source maintainers, funders, and evaluators. Asking them to \"sign in with Bluesky\" to use a funding platform would be confusing — it ties a funding tool to a social media brand. This is no knock on Bluesky — it's a great platform, just not the right entry point for a funding tool. Certified is a neutral identity provider that isn't tied to any single application. You create an account and immediately have an identity that works across the entire ecosystem — no knowledge of Bluesky, ATProto, or decentralized protocols required. Hypercerts is fully interoperable with the AT Protocol ecosystem. If you already have a Bluesky account or any other ATProto identity, log in with your existing handle (e.g., alice.bsky.social) and use all Hypercerts applications — no additional account needed. --- Your DID Your DID is your permanent identity. 
It looks like did:plc:z72i7hdynmk6r22z27h6tvur and is resolved via the PLC directory, which maps it to your current PDS, public keys, and handle. Every record you create carries your DID as the author. If you change PDS providers, your DID stays the same — other applications continue to recognize you and your data migrates with you. --- Handles (your public username) and domain verification Handles are not needed to log in to the Hypercerts ecosystem, but every user has one. They serve as human-readable names for publicly addressing others and for interacting with other applications in the AT Protocol ecosystem that haven't implemented email-based login with Certified. Your handle is a human-readable name like alice.certified.app. Unlike your DID, your handle can change — it's a pointer to your DID, not your identity itself. Organizations should use custom domain handles. A handle like numpy.org proves organizational identity — anyone can verify that the DID behind numpy.org is controlled by whoever controls the domain. To set up a custom handle, add a DNS TXT record or host a file at https://your-domain.com/.well-known/atproto-did. See the AT Protocol handle documentation for details. If you sign up using your email on certified.app you will initially be given a random handle like 1lasdk.certified.app. You can change your handle by going to your profile settings and clicking on \"Change handle\" on certified.app. --- Organization accounts For teams with multiple contributors, create a dedicated organizational account on a PDS. The organization gets its own DID and repository. Team members can write to the organization's repository using app passwords or OAuth scoped to the organizational account. This is useful for open-source projects, research labs, and organizations where many people contribute to the same body of work. To set up an organizational account, create an account at certified.app with the organization's email. 
Use a custom domain handle (e.g., numpy.org) to prove organizational identity. --- Authentication OAuth (for applications) Applications authenticate users via AT Protocol OAuth. The AT Protocol client libraries handle the full OAuth flow — authorization, token management, and session restoration. Users authorize your app through their PDS and never share credentials with your application. See the Quickstart for the authentication setup. OAuth (for ePDS) The ePDS (extended PDS) adds email/passwordless login on top of the standard PDS, without modifying the underlying AT Protocol PDS code. When a user authenticates, the ePDS Auth Service handles the OTP flow and then issues a standard AT Protocol authorization code back to your app. Because of this architecture, the standard ATProto OAuth client libraries won't work with ePDS — you'll need to implement PAR, DPoP, and PKCE directly. @atproto/oauth-client-node does not handle ePDS's email/OTP flow and cannot be used as-is. The ePDS demo is a Next.js app that implements the above and shows a compl"
+ },
+ {
+ "path": "/architecture/data-flow-and-lifecycle",
+ "title": "Data Flow & Lifecycle",
+ "description": "How a hypercert moves from creation through evaluation to funding.",
+ "section": "Architecture",
+ "headings": [
+ "The Lifecycle of a Hypercert",
+ "Stage 1: Creation",
+ "Stage 2: Enrichment",
+ "Stage 3: Evaluation",
+ "Stage 4: Discovery & Indexing",
+ "Stage 5: Funding & Ownership (Planned)",
+ "Stage 6: Accumulation",
+ "Cross-PDS References",
+ "What This Flow Enables",
+ "Next Steps"
+ ],
+ "body": "Data Flow & Lifecycle A hypercert flows through six stages from creation to ongoing accumulation of attachments and funding. The Lifecycle of a Hypercert Every hypercert follows a similar path through the system, though the timeline and participants vary. Creation happens when a contributor writes an activity claim to their Personal Data Server. The claim gets a unique identifier and becomes part of the contributor's repository. Enrichment adds supporting data. Contribution records link collaborators. Attachment records attach proof of work. Measurement records provide quantitative data. Rights records define the terms of the claim. Collection records group claims into projects. These can live on the same server or different servers. Evaluation brings third-party assessment. Evaluators create evaluation records on their own servers that reference the original claim. Multiple evaluators can independently assess the same work. Evaluations accumulate over time. Discovery makes the hypercert findable. Relays aggregate records from many servers. Indexers build searchable databases. Platforms query indexers to surface hypercerts to users. Funding connects funders to the claim. Funding receipts (org.hypercerts.funding.receipt) record who funded what, how much, and when — this works today on AT Protocol. Optionally, the claim can be frozen and anchored on-chain for tokenized funding. The on-chain tokenization layer is planned but not yet implemented. Accumulation continues indefinitely. More evaluations arrive. Additional attachments get attached. The data layer continues evolving. *On-chain layer is planned. See Funding & Value Flow. Stage 1: Creation A hypercert begins when a contributor creates an activity claim on their PDS. The contributor writes an org.hypercerts.claim.activity record. This record includes fields like workScope, startDate, endDate, and contributors. The PDS validates the record against the lexicon schema. 
The record receives a unique AT-URI. The format is at://did:plc:abc123/org.hypercerts.claim.activity/tid where did:plc:abc123 is the contributor's DID and tid is a timestamp-based identifier. The PDS signs the record and includes it in the contributor's repository. The signature proves the contributor created this record at this time. The repository is a Merkle tree that provides tamper-evidence. The relay picks up the new record. Within seconds, the record is available to downstream consumers. Indexers can now discover and process it. Stage 2: Enrichment Supporting data gets attached through additional records that reference the activity claim. Contribution Records Contributors are embedded in the activity claim's contributors array. For richer profiles, separate org.hypercerts.claim.contributorInformation and org.hypercerts.claim.contribution records can be created and referenced. Contributions can be created by the original contributor or by collaborators on their own servers. Attachment Records org.hypercerts.claim.attachment records attach supporting documentation. Attachments can be URLs, file uploads, or structured data. Each attachment record includes a strong reference to the claim it supports. Measurement Records org.hypercerts.claim.measurement records provide quantitative data. A measurement specifies what was measured, the value, the unit, and the methodology. Multiple measurements can track different metrics. Rights Records org.hypercerts.claim.rights records define the rights associated with a hypercert — for example, public display rights, attribution licenses, or transferability terms. The activity claim references a rights record via a strong reference. Rights are defined using a name, type, and description, with an optional attached legal document. Location Records app.certified.location records anchor work geographically. A location record can specify a point, a region, or multiple areas. 
This enables geographic filtering and regional funding mechanisms. Collection Records org.hypercerts.collection records group multiple activity claims into a project or portfolio. Each collection has weighted items (strong references to activity claims or other collections), supporting recursive nesting. A collection with type=\"project\" represents a multi-year project composed of individual activity claims. Funding Receipts org.hypercerts.funding.receipt records track funding events. A receipt records who funded which activity, how much, through which payment rail, and when. Receipts reference the activity claim they fund. See Funding & Value Flow for details. Cross-Server References These records can live on different PDS instances. A contributor on PDS-A can create an activity claim. A collaborator on PDS-B can create a contribution record that references it. A collaborator on PDS-C can create an attachment. Strong references ensure the connections are tamper-evident. Stage 3: Evaluation Third parties assess the work by creating evaluation records on their own servers. An evaluator cre"
+ },
+ {
+ "path": "/architecture/indexers-and-discovery",
+ "title": "Indexers & Discovery",
+ "description": "How indexers make hypercerts findable across the network.",
+ "section": "Architecture",
+ "headings": [
+ "Why indexers?",
+ "Hyperindex"
+ ],
+ "body": "Why indexers? ATProto is federated — data is distributed across multiple PDSs. To discover records, you need something that crawls the network and aggregates data into a queryable database. That's what indexers do: they subscribe to the firehose (a real-time stream of repository commits), fetch and parse records according to their lexicons, and expose the results via APIs for searching, filtering, and aggregation. Hyperindex Hyperindex is a reference indexer for the Hypercerts ecosystem. It listens to the network via Jetstream, stores matching records in a database, and automatically generates a GraphQL API for querying them. Multiple indexers are running across the ecosystem — see Hyperscan Indexers for a live list with health status. You can also run your own instance and register custom lexicons alongside the standard org.hypercerts.* ones. See the Hyperindex repository for setup instructions. For how indexers fit into the protocol stack, see Data Flow & Lifecycle."
+ },
+ {
+ "path": "/architecture/overview",
+ "title": "Architecture Overview",
+ "description": "How the Hypercerts Protocol stack fits together.",
+ "section": "Architecture",
+ "headings": [
+ "Three Layers",
+ "How Data Flows",
+ "Why It's Trustworthy",
+ "Why This Architecture"
+ ],
+ "body": "Architecture Overview The Hypercerts Protocol is built on AT Protocol — the same decentralized data layer that powers Bluesky. Users own their data, applications are interoperable, and records are cryptographically signed. Three Layers !The Hypercerts Stack Data. All hypercert records — activity claims, contributions, attachments, evaluations, measurements — live on AT Protocol. Each user's data is stored on a Personal Data Server (PDS) they control. Shared schemas called lexicons define the structure of every record type, so any application can read any record. Users can switch PDS providers without losing data. See Account & Identity and Portability & Data Access. Applications. Funding platforms, dashboards, and evaluation tools read from and write to the data layer. They query indexers that aggregate records from across the network into searchable databases. Different indexers can build different views of the same underlying data. Ownership (planned). On-chain anchoring for funding and tokenization is not yet implemented. The intended design freezes a hypercert's ATProto records and anchors them on-chain before funding — so funders know exactly what they're paying for. See Funding & Value Flow. How Data Flows A user writes a record to their PDS. The PDS signs it and adds it to the user's repository. A relay picks up the new record and streams it to indexers. Indexers update their databases. Applications query indexers to display the data. !Data flow through ATProto For the full lifecycle — creation, enrichment, evaluation, discovery, funding, and accumulation — see Data Flow & Lifecycle. Why It's Trustworthy Every PDS repository is a Merkle tree signed by the user's DID key — any third-party tampering is detectable because the signature won't match. When one record references another (e.g., an evaluation pointing to an activity claim), the reference includes a content hash, so you can tell if the target was modified after the reference was created. 
DIDs resolve via the PLC directory to a public key, so signatures are independently verifiable. This means: Alice creates a hypercert and her PDS signs it. Bob evaluates it, referencing Alice's record by URI and content hash. Anyone can verify Alice authored the claim, Bob authored the evaluation, and Bob evaluated the exact version Alice published. Why This Architecture Why not fully on-chain? Storing rich data on-chain is expensive — a single activity claim with attachments could cost hundreds of dollars in gas. On-chain works for ownership state, not for the full data layer. Why not fully off-chain? Funding requires immutability. Without on-chain anchoring, there's no way to guarantee that what a funder evaluated is what they end up funding. Why ATProto? Persistent DIDs, shared schemas, user-controlled data, and a growing ecosystem. IPFS has no identity layer or schemas. Ceramic has similar goals but a smaller ecosystem. Bluesky demonstrates ATProto scales to millions of users. See Why AT Protocol?."
+ },
+ {
+ "path": "/architecture/portability-and-scaling",
+ "title": "Portability & Data Access",
+ "description": "How ATProto enables data migration, app switching, and transparent data access.",
+ "section": "Architecture",
+ "headings": [
+ "Public by default",
+ "Switching PDSs",
+ "Interoperable data"
+ ],
+ "body": "Public by default ATProto records are public by default. Anyone can read your activity claims, evaluations, and contributions. This is intentional — impact work benefits from transparency and discoverability. Switching PDSs Users can migrate their entire repository to a new PDS without breaking references. Export from the old PDS, import to the new one, and update your DID document to point to the new URL. Applications automatically follow. This works because AT-URIs use DIDs, not server addresses — so all existing references remain valid after migration. See the ATProto account migration guide for the full process. Interoperable data Because data is stored on PDSs, not in application databases, users can switch apps without losing data. A claim created in App A is immediately readable by App B. Evaluations, attachments, and measurements created in different apps all reference the same underlying claims. This is fundamentally different from traditional platforms, where switching apps means starting over."
+ },
+ {
+ "path": "/core-concepts/cel-work-scopes",
+ "title": "CEL Work Scopes",
+ "description": "How CEL (Common Expression Language) makes hypercert work scopes machine-readable, composable, and queryable.",
+ "section": "Core Concepts",
+ "headings": [
+ "Why structured scopes matter",
+ "Architecture: two layers",
+ "Lexicon schemas",
+ "`org.hypercerts.ontology.celExpression`",
+ "`org.hypercerts.ontology.workScopeTag`",
+ "`activity.workScope` union",
+ "Examples",
+ "Mangrove restoration in coastal Kenya",
+ "Agroforestry in Uganda",
+ "What CEL unlocks",
+ "Funder matching",
+ "Evaluation matching",
+ "Overlap detection",
+ "Measurement-based queries",
+ "CEL context schema (v1)",
+ "Custom functions on `ScopeSet`",
+ "Starter tag vocabulary",
+ "Where CEL lives in the stack",
+ "Further reading"
+ ],
+ "body": "CEL Work Scopes Hypercert work scopes describe what work was done. A CEL expression alongside the human-readable work scope makes hypercerts machine-verifiable, composable, and queryable. CEL (Common Expression Language) is an open-source expression language built by Google for evaluating conditions in distributed systems. It's used by Kubernetes, Firebase, and Google Cloud IAM. Why structured scopes matter A community in coastal Kenya mints a hypercert for \"mangrove restoration and environmental education.\" A collective in Uganda creates one for \"agroforestry with beekeeping.\" A drone operator in the Amazon documents \"biodiversity monitoring in the Negro River region.\" These are all legible to a person reading them one at a time. They're invisible to any system trying to connect funders to relevant work at scale. As the Hypercerts Protocol grows on ATProto — with actions, evaluations, and evidence living as persistent, portable records — the richer the network becomes, the harder it is to find, compare, validate, and compose claims when the most important field is unstructured text. Architecture: two layers The design uses two complementary layers: 1. Vocabulary layer — workScopeTag records define what each tag means (e.g., what mangroverestoration refers to, its description, hierarchy, and links to external ontologies). 2. Composition layer — celExpression objects compose those tags into evaluable logic on activity records (e.g., \"this work includes mangrove restoration AND environmental education, AND is in Kenya\"). The vocabulary layer tells you what a tag means. The composition layer tells you what logic applies to a specific hypercert. The workScopeString fallback supports simple free-form scopes. Lexicon schemas org.hypercerts.ontology.celExpression A CEL expression object embedded inline in the activity.workScope union. It's intentionally an object type (not a record) so it can be embedded directly without requiring a separate collection. 
| Field | Type | Required | Description | |-------|------|----------|-------------| | expression | string | Yes | A CEL expression encoding the work scope conditions. Max 10,000 characters. | | usedTags | strongRef[] | Yes | Strong references to workScopeTag records used in the expression. Enables fast indexing by AT-URI and provides referential integrity. Max 100 entries. | | version | string | Yes | CEL context schema version. Known values: v1. Open enum — new versions can be added non-breakingly. | | createdAt | datetime | Yes | Client-declared timestamp when this expression was originally created. | org.hypercerts.ontology.workScopeTag A reusable scope atom — the building block of the vocabulary. Each record represents a single concept like mangrove_restoration or biodiversity_monitoring. | Field | Type | Required | Description | |-------|------|----------|-------------| | key | string | Yes | Lowercase, underscore-separated machine-readable key (e.g., mangrove_restoration). Used as the canonical identifier in CEL expressions. | | label | string | Yes | Human-readable display name. | | kind | string | No | Category type. Known values: topic, language, domain, method, tag. | | description | string | No | Longer explanation of the scope. | | parent | strongRef | No | Reference to a parent workScopeTag for taxonomy/hierarchy support. | | status | string | No | Lifecycle status. Known values: proposed, accepted, deprecated. Communities propose tags, curators accept them, deprecated tags point to replacements. | | supersededBy | strongRef | No | When status is deprecated, points to the replacement workScopeTag. | | aliases | string[] | No | Alternative names or identifiers for this scope. | | sameAs | uri[] | No | Links to equivalent concepts in external ontologies (e.g., Wikidata QIDs, ENVO terms, SDG targets). | | externalReference | uri \\| blob | No | External reference as a URI or blob. 
| | createdAt | datetime | Yes | Client-declared timestamp when this record was originally created. | activity.workScope union The workScope field on org.hypercerts.claim.activity accepts two variants: - celExpression — a structured, machine-evaluable scope (described above). - workScopeString — a simple free-form string for cases where a CEL expression isn't needed. Examples Mangrove restoration in coastal Kenya Agroforestry in Uganda What CEL unlocks Funder matching Funders can define their criteria as CEL expressions and the appview matches them against existing hypercerts: Evaluation matching An auditor who verified mangrove survival rates can express applicability as a CEL condition, and the appview automatically matches it to relevant hypercerts: Overlap detection When someone mints a new hypercert, CEL can check whether the claimed work scope overlaps with existing claims: Measurement-based queries Because CEL can access linked measurement records, funders can write queries that go beyond tags: CEL context schema (v1) Every CEL expression evaluates against a typed context. Ea"
+ },
+ {
+ "path": "/core-concepts/certified-identity",
+ "title": "Certified Identity",
+ "description": "How identity works in the Hypercerts ecosystem — DIDs, signing, portability, and wallet linkage.",
+ "section": "Core Concepts",
+ "headings": [
+ "Identity in the Hypercerts Protocol",
+ "How identity connects to the protocol",
+ "Certified: the reference identity provider",
+ "Handles (your public username)",
+ "Compatible with Bluesky and other AT Protocol accounts",
+ "Wallet linkage",
+ "Next steps"
+ ],
+ "body": "Certified Identity Every hypercert record has an author. Every evaluation carries a signature. Every funding receipt traces back to a DID. Identity is a core primitive of the protocol — it determines who owns records, who can be trusted, and who receives funding. Identity in the Hypercerts Protocol The Hypercerts Protocol uses AT Protocol's identity system. Every participant — whether an individual contributor, an evaluator, or an organization — is identified by a DID (Decentralized Identifier). A DID like did:plc:z72i7hdynmk6r22z27h6tvur is: - Permanent — it never changes, even if you switch servers or handles - Portable — your records, reputation, and history follow your DID across platforms - Cryptographically verifiable — every record you create is signed by your DID's key pair, and anyone can verify the signature Your DID resolves via the PLC directory to a DID document containing your current PDS, public signing keys, and handle. How identity connects to the protocol | Layer | How identity is used | |-------|---------------------| | Data | Every record (activity claims, evaluations, measurements) carries the author's DID. The PDS signs records into a Merkle tree, making authorship tamper-evident. | | Trust | Evaluators build reputation tied to their DID. Applications can weight evaluations based on the evaluator's history and credentials. | | Funding | Funding receipts link funder DIDs to the work they support. Wallet linkage (work-in-progress) connects DIDs to onchain addresses for payment flows and tokenization. | | Portability | Switching PDS providers doesn't change your DID. Your entire history — claims, evaluations, contributions — migrates with you. | Certified: the reference identity provider Certified is the identity provider built for the Hypercerts ecosystem. 
It provisions the full identity stack in a single sign-up: - A DID — your permanent identifier - A PDS — your Personal Data Server, where records are stored - Low-friction sign-in — email and code, no passwords or protocol knowledge required Certified exists because most Hypercerts users are not Bluesky users. Researchers, land stewards, open-source maintainers, and funders need an entry point that doesn't require knowledge of ATProto or decentralized protocols. Certified provides that — a neutral identity provider that isn't tied to any single application. Handles (your public username) Handles are not needed to log in to the Hypercerts ecosystem, but every user has one. They serve as human-readable names for publicly addressing others and for interacting with other applications in the AT Protocol ecosystem that haven't implemented email-based login with Certified. Your handle (e.g., alice.certified.app) is human-readable but not permanent — it's a pointer to your DID. Organizations can use custom domain handles (e.g., numpy.org) to prove organizational identity through DNS verification. For setup details, see Account & Identity Setup. Compatible with Bluesky and other AT Protocol accounts Hypercerts is fully interoperable with the AT Protocol ecosystem. If you already have a Bluesky account or any other ATProto identity, you can log in with your existing handle (e.g., alice.bsky.social) and use all Hypercerts applications — no additional account needed. Wallet linkage To receive onchain funding, a DID needs to be linked to an onchain wallet address. This is handled by IdentityLink — a cryptographic attestation system that binds a DID to one or more onchain addresses via a signed proof stored in your PDS. For the Ethereum ecosystem, IdentityLink: 1. Authenticates the user via ATProto OAuth 2. Connects an EVM wallet (EOA, Smart Wallet, or Safe) 3. Signs an EIP-712 typed message proving ownership 4. 
Stores the attestation in the user's PDS The attestation is self-sovereign (stored in your PDS, not a central database) and verifiable by anyone. See the Roadmap for current IdentityLink status. Next steps - Account & Identity Setup — create an account, configure custom domains, manage app passwords, and set up organization accounts - Architecture Overview — how identity fits into the protocol stack - Quickstart — create your first hypercert Next: Why AT Protocol? — how identity and records stay portable across apps."
+ },
+ {
+ "path": "/core-concepts/common-use-cases",
+ "title": "Common Use Cases",
+ "description": "See how hypercerts work for different types of contributions.",
+ "section": "Core Concepts",
+ "headings": [
+ "Open-source software maintenance",
+ "Regenerative land stewardship",
+ "Scientific research",
+ "Community event organization"
+ ],
+ "body": "Common Use Cases Hypercerts work for any kind of impact work. This page shows four common scenarios and how they map to the hypercerts data model. Open-source software maintenance A team maintains a widely-used library. They create a hypercert covering a year of maintenance — bug fixes, documentation updates, and community support. The team then attaches links to the repository, release notes, and commit history. Contribution records identify who did what — core developers, documentation leads, community managers. Organizations that depend on the library can fund this work retroactively. Regenerative land stewardship A conservation group restores degraded forest over several years. The activity claim covers the full project timeline (2020–2025) with work scopes like \"Ecosystem Restoration\" and \"Biodiversity Conservation\". Measurement records track hectares restored, native species planted, and carbon sequestration estimates. Location records anchor the work geographically. Attachments include satellite imagery, biodiversity surveys, and field reports. Climate funders can review the full record before deciding to support the next phase. Scientific research A research team completes a multi-year study and wants to document the effort with clear contributor roles. The activity claim describes the research and its outputs, with contribution records identifying the principal investigator, postdocs, and graduate students along with their relative contributions. Attachments link to published papers (via DOI), lab notebooks, and experimental protocols. Evaluation records capture peer review outcomes. Research foundations or industry partners interested in the field can fund the work. Community event organization A group runs regular workshops teaching practical skills to underrepresented communities. They want to document their educational impact. 
The activity claim covers the full year of workshops with work scopes like \"Education\" and \"Community Building\". Measurement records track total attendees, completion rates, and outcomes. Contribution records identify instructors, venue hosts, and curriculum developers. Attachments include workshop materials and participant feedback. Organizations with community programs can review the record and decide to fund future sessions."
+ },
+ {
+ "path": "/core-concepts/funding-and-value-flow",
+ "title": "\"Funding & Value Flow\"",
+ "description": "How hypercerts track the funding of activities.",
+ "section": "Core Concepts",
+ "headings": [
+ "Hypercerts work with any funding mechanism",
+ "Tracking funding",
+ "Tokenization",
+ "Example: from creation to funding",
+ "Stage 1 — Creation and evaluation",
+ "Stage 2 — Funding",
+ "Stage 3 — Locking",
+ "Stage 4 — Retroactive funding",
+ "See also"
+ ],
+ "body": "Funding & Value Flow Funding is under active development. Hypercerts track the funding of activities without prescribing how funds flow — any payment method and any funding mechanism works. What hypercerts add is a structured, verifiable record of who funded what. Hypercerts work with any funding mechanism Funding can be prospective (before work begins) or retroactive (after outcomes are demonstrated). Different mechanisms suit different contexts, for example: | Mechanism | Description | |-----------|-------------| | Grant funding | Funders award grants to support planned activities | | Milestone-based funding | Funds are released as work reaches defined milestones | | Prize competitions | Awards for achieving specific outcomes | | Quadratic funding | Small donations amplified through matching pools | | Sale of impact certificates | Funders purchase certificates representing completed work | | Auction of impact certificates | Competitive bidding on verified impact claims | Multiple mechanisms can coexist for the same activity — a project might receive a grant prospectively and sell impact certificates retroactively. Hypercerts tracks this accurately without double counting: every funding receipt references the specific activity claim it funds, making it possible to compute total funding per claim across all receipts. Tracking funding Funding tracking is in active development. Receipts and acknowledgements exist today. Hypercerts separate the tracking of funding from the flow of funds. Any existing payment infrastructure can work with hypercerts — the protocol simply records the fact that funding happened. The protocol tracks funding through the org.hypercerts.funding.receipt record. A funding receipt records who funded which activity, how much, and when — creating a verifiable funding trail. The receipt references the activity claim it funds, linking the funding record to the work it supports. 
Funding receipts are typically created by a facilitator — a payment processor, grant platform, funding app, or other intermediary that processes the payment and creates the receipt. The facilitator acts as a neutral third party, giving the receipt more credibility than a self-reported claim. | Scenario | Facilitator | Verification | |----------|-------------|--------------| | Onchain funding | A funding app verifies the transaction and creates the receipt, linking the transaction hash and chain ID | Verifiable onchain | | Card / bank transfer | A payment processor settles the payment and creates the receipt | Trust in the processor | | Grant platform | The grant platform records the award and creates the receipt on behalf of the funder | Trust in the platform | The funder or the contributor can then create an acknowledgement — a counter-signature confirming the receipt's accuracy — to strengthen its credibility. The protocol does not enforce that projects or funders disclose their funding — creating a receipt is voluntary. Funders can also choose to remain anonymous in the public record; a receipt can track a contribution without revealing the funder's identity. Tokenization Tokenization is under active development. This section describes the planned architecture. A hypercert can optionally be wrapped in an onchain token. This gives funders a programmable proof of their contribution. Tokenization is an optional wrapper around a claim snapshot; the canonical record remains the AT Protocol data. When locking is available, a claim can be frozen before tokenization. This gives funders a stronger guarantee — the claim they reviewed is exactly the claim they funded, and it cannot change after the fact. 
| Property | Detail | |----------|--------| | Token standards | ERC-20, ERC-1155, or custom — different standards on different chains | | Transferability | Ranges from non-transferable recognition to fully transferable certificates | | Single-wrap constraint | Every claim can only be wrapped in a token once, preventing double counting | | Rights | Optional definition of the rights of the owners, set in the hypercert's org.hypercerts.claim.rights record | Tokenization enables programmable funding — smart contract logic can enforce distribution rules, matching formulas, and other mechanisms that would be difficult to coordinate offchain. Example: from creation to funding There are many different flows that can be represented with hypercerts. Below is one example that follows a hypercert from creation to funding. Stage 1 — Creation and evaluation Alice plants 500 trees in a reforestation project and creates an activity claim with measurements and attachments. Bob, an environmental auditor, evaluates the claim from his own PDS. See Quickstart for a walkthrough. Stage 2 — Funding Carol, a climate funder, reviews Alice's claim and Bob's evaluation. She decides to fund Alice's work through her organization's grant platform. The payment facilitator processes the payment and creates a funding receipt, recording Carol's contribution an"
+ },
+ {
+ "path": "/core-concepts/hypercerts-core-data-model",
+ "title": "Core Data Model",
+ "description": "The data model behind hypercerts — record types, dimensions, and how they connect.",
+ "section": "Core Concepts",
+ "headings": [
+ "The core record: activity claim",
+ "Additional details",
+ "Records that attach to a hypercert",
+ "Additional notes",
+ "Grouping hypercerts",
+ "How records connect",
+ "Mutability",
+ "What happens next"
+ ],
+ "body": "Core Data Model A hypercert is an activity claim with linked records that describe work done. The activity claim is the anchor — contributions, attachments, measurements, and evaluations reference it to add context. This page explains what records exist, what they contain, and how they connect. The core record: activity claim Every hypercert starts with an activity claim — the central record that answers four questions: | Dimension | Question | Example | |-----------|----------|---------| | Contributors | Who is doing (or did) the work? | Alice, Bob | | Work scope | What are they doing (or what did they do)? | Documentation, Reforestation | | Time of work | When is it happening (or when did it happen)? | January – March 2026 | | Location | Where is it taking (or did it take) place? | Coastal Kenya | The activity claim gets a permanent AT-URI like at://did:plc:alice123/org.hypercerts.claim.activity/3k7. Additional details The activity claim has a contributors array. Each entry is a contributor object with three fields: - contributorIdentity — either an inline identity object (#contributorIdentity, containing an identity DID string) or a strong reference to an org.hypercerts.claim.contributorInformation record with a full social profile - contributionWeight — an optional relative weight string (e.g. \"1\", \"0.5\") - contributionDetails — either an inline role object (#contributorRole, containing a role string) or a strong reference to an org.hypercerts.claim.contribution record with structured contribution data Simple cases use inline objects directly in the activity claim. Richer profiles use separate records that the contributor or project lead creates independently. 
| Record type | What it adds | Who creates it | Lexicon | |-------------|-------------|----------------|---------| | Contributor Information | Social profile, image, display name | The contributor or project lead | org.hypercerts.claim.contributorInformation | | Contribution | Structured role and contribution data | The contributor or project lead | org.hypercerts.claim.contribution | Records that attach to a hypercert Other records link to the activity claim to add context. Again, each is a separate record with its own AT-URI – they reference the activity claim, not the other way around. The following diagram shows record types and how they reference the activity claim. Records can be created by different people and live in different repositories. The diagram includes a token entity — tokenization (anchoring a hypercert onchain) is not yet implemented. | Record type | What it adds | Who creates it | Lexicon | |-------------|-------------|----------------|---------| | Attachment | Supporting documentation — URLs, uploaded files, IPFS links. Can link to any record type, not only activity claims. | Anyone with additional data | org.hypercerts.claim.attachment | | Measurement | Quantitative data — \"12 pages written\", \"50 tons CO₂ reduced\" | E.g. a third-party measurer or the project (self-reported) | org.hypercerts.claim.measurement | | Evaluation | An (independent) assessment of the work | E.g. a third-party evaluator, community members, beneficiaries | org.hypercerts.claim.evaluation | Additional notes - Records don't have to be created together. Users can create a measurement first and link it to an activity claim later. - A record can also be linked to multiple other records, e.g. a measurement in a bioregion is linked to multiple activity claims. - An evaluator creates an evaluation from their own account — it references an activity claim but lives in their personal data server. This means a hypercert grows over time – it is a living record. 
The core claim stays the same, but attachments, measurements, and evaluations accumulate around it. Grouping hypercerts Hypercerts can be grouped into collections. A multi-year project might have one hypercert per year, with a collection representing the full project. But collections are flexible — anyone can create one for any purpose. Someone might curate a personal collection of hypercerts they find interesting, or an organization might group all their hypercerts together. A hypercert can belong to many collections. | Record type | What it adds | Who creates it | Lexicon | |-------------|-------------|----------------|---------| | Collection | Groups activity claims and/or other collections into a project or portfolio. Supports recursive nesting. | E.g. the project organizer | org.hypercerts.collection | How records connect Records reference each other using strong references — if a referenced record is modified after the reference was created, the change is detectable. Mutability Activity claims and their linked records are currently immutable once created. Record versioning and edit history will be supported in a future release, along with the ability to lock a hypercert at a specific version for funding. What happens next Once you understand the data model, you're ready to build: - Quickstart — create "
+ },
+ {
+ "path": "/core-concepts/what-is-hypercerts",
+ "title": "What are Hypercerts?",
+ "description": "Living digital records of impact that help you track, share, and get recognized for the work you do.",
+ "section": "Core Concepts",
+ "headings": [
+ "The structure of a hypercert",
+ "What a hypercert is not",
+ "How people use them",
+ "An example",
+ "Why it's built this way",
+ "Next step"
+ ],
+ "body": "What are Hypercerts? A hypercert is a living record of impact work — a verifiable claim that grows as the creator and others add measurements, evaluations, and supporting evidence. Think of it like this: you do meaningful work — restoring a forest, maintaining open-source software, running a community program, publishing research. A hypercert captures that work in a structured, verifiable record. Over time, the creator and others enrich it with measurements and evaluations — making it more trustworthy and more useful for recognition and funding. The structure of a hypercert At its core, a hypercert answers four questions: - Who is doing (or did) the work? - What are they doing (or what did they do)? - When is it happening (or when did it happen)? - Where did it happen? (physical or digital) That's the starting point. From there, the record grows as people add more context: - Attachments — photos, links, datasets, documents, or descriptions that substantiate the work - Measurements — quantitative indicators that make the impact concrete (\"142 issues resolved\", \"50 hectares restored\"), which can be outputs or outcomes depending on the domain - Evaluations — independent qualitative or quantitative assessments from domain experts, community members, beneficiaries, etc. - Contributions — additional information about who was involved and what they contributed - Rights — what rights are attached to the hypercert (e.g. public display) What a hypercert is not - Not a grant application — it records work that has been done or is in progress, not a request for funding - Not a token — it's a data record, though onchain tokenization for funding is planned - Not a single document — it’s a collection of linked records that can grow over time How people use them If you're doing the work, you create a hypercert to make your contributions visible. 
Instead of writing reports that sit in a folder, you publish a verifiable record that any platform can display and build on. If you're evaluating work, you add your assessment to someone else's hypercert. Your evaluation lives on your own data server linking to their work. You build up reputation over time based on your assessments. If you're funding work, you can see the full picture before deciding: the original claim, the attachments behind it, and what independent evaluators think. Funding decisions can be based on verifiable records — not just narratives, wordy applications, and guessing. If you're building a platform, you can read and write hypercerts using shared schemas. A funding platform, a project dashboard, and an evaluation tool can all work with the same data. An example Say a team runs a coastal reforestation project. They create a hypercert: > Coastal mangrove restoration, 2025 > > 50 hectares restored over 12 months (the activity claim). Satellite imagery confirms canopy coverage. An independent ecologist evaluates the work as \"high-quality restoration with strong community engagement.\" The activity claim is the starting record. Over the following months, the team adds measurement data as new satellite imagery comes in. An independent evaluator reviews the project and attaches their assessment. A funder browsing the ecosystem sees the full picture — the claim, the evidence, and the evaluation — and decides to support the next phase. Why it's built this way Hypercerts are designed to live beyond any single platform. This is why we built hypercerts on AT Protocol, a decentralized data layer that also powers Bluesky. This gives hypercerts some important properties: - You own your data. Your hypercerts live on your Personal Data Server (PDS) or on a hosted PDS of your choice, not on a single platform. - It's portable. You can move your data to a different server anytime. No lock-in. - It's verifiable. Every record is cryptographically signed. 
Anyone can check that it hasn't been tampered with. - It works everywhere. Any app that speaks the Hypercerts protocol can read and display your records. Learn more in the Architecture Overview. Next step To see the records that make this work, read the Core Data Model."
+ },
+ {
+ "path": "/core-concepts/why-at-protocol",
+ "title": "Why AT Protocol?",
+ "description": "Why the Hypercerts Protocol is built on AT Protocol.",
+ "section": "Core Concepts",
+ "headings": [
+ "Portable, user-controlled data",
+ "Shared schemas across applications",
+ "A decentralized trust graph",
+ "Data layer + ownership layer"
+ ],
+ "body": "Why AT Protocol? The Hypercerts Protocol is built on AT Protocol — the same open protocol for decentralized data that powers Bluesky. In Hypercerts v0.1, every hypercert was an on-chain token — publishing one required a wallet, gas fees, and a blockchain transaction. This created friction for the contributors, researchers, and organizations the protocol is designed to serve. By moving the data layer to ATProto, creating a hypercert requires no wallet, no gas, and no transaction fees. On-chain anchoring is reserved for where it actually matters: funding and settlement. ATProto gives hypercerts three properties that matter for impact funding: portable data, shared schemas, and a trust graph rooted in cryptographic identity. Portable, user-controlled data Contributions must outlive any single platform. Hypercert records are stored in signed user repositories hosted on Personal Data Servers (PDS). Each repository is cryptographically tied to a user's DID — not to the server hosting it. Contributors, evaluators, and funders choose where their data lives: on Hypercerts Foundation infrastructure, on third-party providers, or self-hosted. They can migrate at any time without losing records or needing anyone's permission. Applications are views over user-owned data — not gatekeepers of it. See Portability & Data Access. Shared schemas across applications For impact funding to work across applications, a contribution recorded in one app must be evaluable in another and fundable in a third — without bespoke integrations. ATProto enables this through lexicons: shared, namespaced schemas that define how records are structured. Because lexicons are open, any app can create compatible records and any app can read them. No bilateral API integrations required. Records reference each other via AT-URIs, forming a traversable graph: an evaluation references an activity claim, a funding receipt references both the claim and the funder. 
This graph is what indexers crawl to build queryable views. See Indexers & Discovery. A decentralized trust graph ATProto provides persistent, portable identities via Decentralized Identifiers (DIDs). Every record carries its author's DID and cryptographic signature. Over time, these identities accumulate contribution records, evaluations, endorsements, and funding decisions — forming a durable impact trust graph that persists across platforms. Trust becomes computable across the ecosystem — not siloed within individual platforms. A funder can trace who evaluated a project, what else those evaluators have assessed, and how their past judgments correlated with outcomes. Because all records are signed and publicly indexable, trust models can be independently implemented, compared, and audited. Data layer + ownership layer The design principle: keep rich, evolving contribution data off-chain (ATProto) and use on-chain systems only where immutability and settlement matter. ATProto handles the data layer — claims, attachments, evaluations, trust signals. On-chain anchoring and tokenization handle the funding layer — immutable snapshots, programmable funding, and settlement mechanisms. See Architecture Overview for how the layers fit together."
+ },
+ {
+ "path": "/ecosystem/why-we-need-hypercerts",
+ "title": "\"Why We Need Hypercerts\"",
+ "description": "Recognizing and rewarding value creators.",
+ "section": "Ecosystem & Vision",
+ "headings": [
+ "The Problem",
+ "The Shift We Need: Value Recognition Networks",
+ "Hypercerts: A Common Language for Contributions",
+ "What Hypercerts Unlock",
+ "Who Buys Hypercerts?",
+ "Where We're Headed"
+ ],
+ "body": "Why We Need Hypercerts The Problem A Simple Observation Let's start with a simple observation: lots of the work that benefits us the most is funded the least. We underfund digital public goods. We underfund the regenerative land projects that protect ecosystems and communities. We underfund high-risk, high-potential research that sits between academia and commercialization. We underfund investigative journalism and community-run events. In short: our economic system fails to recognize and reward what we collectively value. Because the people doing this work don't receive enough recognition or support, too few talented people pursue it, too few resources flow into it — and we end up far worse off than we could be. Why This Happens: We Confuse Price with Value Markets are extraordinary information systems. They take millions of individual decisions — what we buy, what we ignore, what we're willing to pay for — and compress them into a single signal: price. Price works beautifully for private goods: food, tools, cars, software services, consumer technology. These are things you buy for yourself, based on your individual preferences. But price fails systematically when it comes to collective goods. If a forest is healthy, everyone benefits. If scientific knowledge increases, everyone benefits. If investigative journalism holds power accountable, everyone benefits. But because these benefits are shared, the willingness to pay for them isn't concentrated in any one person. And when benefits are diffuse, markets often go blind. Price easily sees private value, but it struggles to see collective value. Often, governments have been the approach we solve this. And in many cases, they still do: public education, healthcare systems, basic research funding, environmental regulation. But the systems we rely on today were built for a world that was: slower, simpler, more centralized, less data-rich, and far less interconnected. 
They were not designed for global supply chains, digital ecosystems, environmental instability, decentralized science, or community-run movements. And so we're left with a structural mismatch: Markets don't recognize collective value well. Governments struggle to keep up with the pace and complexity of today's world. The Shift We Need: Value Recognition Networks If our existing institutions struggle to recognize and reward collective value, we need systems that can. Not replacements for markets. Not replacements for governments. But a new layer that complements both — systems designed specifically to make collective value visible. We call these value recognition networks. Value recognition networks are systems built to do three things: 1. Identify the work that contributes to a shared goal 2. Evaluate that work using data, expertise, and community insight 3. Recognize and fund the people and organizations doing it Until recently, building such systems would have been impractical: too much information to collect, too many evaluators to coordinate, too many data sources to integrate. But the technological foundations have changed. We now have the tools to make value recognition networks feasible. This also makes our contribution possible: The Hypercerts Protocol. Hypercerts provide a simple, interoperable way to structure claims about who did what, when, and where — and to attach the evidence and evaluations needed to make those claims meaningful. This evidence layer turns hypercerts from impact claims into impact certificates that can be held by both financial and non-financial contributors. With Hypercerts, we can begin building value recognition networks in practice. So what exactly are Hypercerts, and how do they work? Hypercerts: A Common Language for Contributions What They Are At its core, a hypercert is a structured digital record of a contribution: who did what, when, and where. 
You can think of it as a small, standardized piece of information that captures a specific unit of work. A hypercert can represent: a contribution to an open-source repository a regenerative land activity on a defined plot a research milestone in a scientific project a community event or educational program any action that creates positive impact Each hypercert links this claim with whatever attachments or context is available: documentation, measurements, expert assessments, community input, or other data that helps evaluate the contribution. Hypercerts don't judge whether something is valuable. They don't impose a single metric or worldview. They simply make contributions legible – to people, organizations, communities, and increasingly, AI systems. Once contributions are legible, many different actors can use the same underlying information to evaluate work, coordinate funding, reward contributors, and build trust. Hypercerts turn impact from something abstract and dispersed into something structured, portable, and usable across platforms. In short: a hypercert is a common language for describing contributions — a"
+ },
+ {
+ "path": "/getting-started/building-on-hypercerts",
+ "title": "Building on Hypercerts",
+ "description": "A guide for platforms and tools that want to integrate the Hypercerts Protocol.",
+ "section": "Get Started",
+ "headings": [
+ "Who This Is For",
+ "What You Can Build",
+ "Funding Platforms",
+ "Evaluation Tools",
+ "Dashboards & Explorers",
+ "Impact Portfolios",
+ "Automated Agents",
+ "Running an Indexer",
+ "Interoperability Principles",
+ "Use Standard Lexicons",
+ "Use Strong References",
+ "Next Steps"
+ ],
+ "body": "The Hypercerts Protocol is designed for third-party platforms, tools, and services to build on. Who This Is For This guide is for: - Funding platform developers building crowdfunding, retroactive funding, or milestone-based payout systems - Evaluation service providers creating tools for domain experts to assess impact - Dashboard and explorer builders aggregating and visualizing hypercert data across the ecosystem - Impact portfolio managers tracking funded contributions and their outcomes - AI and automation developers building agents that create measurements, flag inconsistencies, or assist evaluators What You Can Build Funding Platforms Create platforms that use hypercerts to structure contributions and distribute funding. Examples include: - Retroactive funding rounds where evaluators assess completed work - Milestone-based grant systems that release funds as work progresses - Crowdfunding campaigns where backers receive fractional rights to impact claims - Quadratic funding mechanisms that allocate matching pools based on community support Evaluation Tools Build services that help domain experts create structured, verifiable evaluations: - Peer review systems for scientific contributions - Impact assessment frameworks for climate projects - Code quality analysis for open source software - Educational outcome measurement for learning programs Dashboards & Explorers Aggregate and display hypercerts across the ecosystem: - Portfolio views showing all claims by a contributor - Leaderboards ranking projects by evaluation scores - Impact maps visualizing geographic distribution of work - Timeline views tracking contribution history Read-only integrations require only an indexer connection — no PDS needed. 
Impact Portfolios Help funders track their contributions: - Aggregate all hypercerts a funder has supported - Monitor evaluation updates for funded work - Calculate portfolio-level impact metrics - Generate reports for stakeholders Automated Agents Build AI systems that participate in the ecosystem: - Measurement bots that extract metrics from attachments - Consistency checkers that flag suspicious claims - Evaluation assistants that help experts assess work - Discovery agents that match funders with relevant projects Running an Indexer To query hypercerts efficiently, run your own indexer: 1. Subscribe to the relay firehose for hypercert lexicon records 2. Parse and validate incoming records against lexicon schemas 3. Store in a queryable database (PostgreSQL, MongoDB, etc.) 4. Expose an API for your application to query For relay subscription details, see the ATProto documentation. Interoperability Principles The ecosystem works because platforms follow shared conventions: Use Standard Lexicons Use the standard org.hypercerts. and app.certified. lexicons for data that fits them — this is what makes your data interoperable across the ecosystem. Default indexers subscribe to these namespaces, so records using standard lexicons are automatically discoverable. If you need additional fields to extend the standard lexicons, create a sidecar record that references the standard record via a strong reference. Since sidecars are likely application-specific, default indexers won't see them unless explicitly configured to index your namespace. You're also free to create new lexicons for use cases that don't fit the original schemas — ATProto is designed for this. Use Strong References Always include CID when referencing records. The CID is a content hash of the record at the time you referenced it — if the record is later modified, the CID won't match, making tampering detectable. 
Next Steps - Read the Lexicons reference to understand the data model - Explore the Architecture overview to see how components fit together - Try the Quickstart to create your first hypercert - Join the community to discuss your integration plans"
+ },
+ {
+ "path": "/getting-started/quickstart",
+ "title": "Quickstart",
+ "description": "Create your first hypercert in under 5 minutes.",
+ "section": "Get Started",
+ "headings": [
+ "Install dependencies",
+ "Authenticate",
+ "Create your first hypercert",
+ "Add contributions",
+ "Attach supporting documentation",
+ "Add a measurement",
+ "Add an evaluation",
+ "What you've built",
+ "Create via the Scaffold App",
+ "Step 1 — Sign in",
+ "Step 2 — Basic info",
+ "Step 3 — Add attachments",
+ "Step 4 — Add location *(optional)*",
+ "Step 5 — Add measurements *(optional)*",
+ "Step 6 — Add evaluations *(optional)*",
+ "Step 7 — Done",
+ "Next steps"
+ ],
+ "body": "Quickstart Create your first hypercert. This guide uses TypeScript and Node.js v20+. Install dependencies Building a React app? Use @atproto/oauth-client-browser instead, alongside @tanstack/react-query for data fetching and caching. Authenticate Authentication uses AT Protocol OAuth. Your app needs a client metadata document hosted at a public URL: See the AT Protocol OAuth documentation for full details on client metadata, session storage, and keyset configuration. For further info on how to set up OAuth you can check out AT Protos node.js implementation tutorial or the scaffold app. Create your first hypercert The activity claim is the core record — it describes what work was done, when, and in what scope. Here's how each field maps to the activity lexicon: - Contributors are embedded directly in the activity claim as a contributors array. Each entry has a contributorIdentity (inline DID string, or a strong reference to a contributorInformation record), an optional contributionWeight, and an optional contributionDetails (inline role string, or a strong reference to an org.hypercerts.claim.contribution record for richer detail). - Work scopes can be a simple free-form string ({ scope: \"Documentation\" }) or a structured CEL expression for machine-evaluable queries across the network. - Time is expressed as startDate and endDate in ISO 8601 format. - Locations are separate app.certified.location records referenced from the activity claim. They support coordinates, GeoJSON, and other formats. Each AT-URI is a permanent, globally unique identifier. Other records (evaluations, attachments, measurements) reference your hypercert using its URI. The CID is a content hash that makes references tamper-evident. Save both — you'll need them to link other records to this hypercert. See the Activity Claim lexicon for the complete schema. Add contributions Contributors are embedded directly in the activity claim's contributors array. 
Each entry uses inline identity and role objects: Each contributor entry has: - contributorIdentity — inline #contributorIdentity with a DID string, or a strong reference to an org.hypercerts.claim.contributorInformation record - contributionWeight — relative weight as a string (weights don't need to sum to 100) - contributionDetails — inline #contributorRole with a role string, or a strong reference to an org.hypercerts.claim.contribution record for richer detail Attach supporting documentation Attachments link supporting documents, reports, or URLs to any record. Create an attachment record that references the hypercert: The subjects field is an array of strong references (AT-URI + CID). The content field is an array of org.hypercerts.defs#uri objects (for URLs) or org.hypercerts.defs#smallBlob objects (for file uploads). You can create multiple attachment records — one for the repo, one for the deployed site, one for a methodology document, etc. Add a measurement Measurements record quantitative data about the work. Create a measurement record that references the hypercert: Required fields are metric, value, and unit. The subjects array links the measurement to your hypercert via strong references. You can attach multiple measurements to the same hypercert — one per metric. Add an evaluation Evaluations are third-party assessments of the work. They reference the hypercert via a single subject strong reference and are typically created by someone other than the hypercert author: Required fields are evaluators, summary, and createdAt. Unlike attachments and measurements, subject is a single strong reference (not an array). The optional score object takes integer min, max, and value fields. What you've built Your hypercert now has a complete structure: Contributors live inside the activity claim record itself. External records — attachments, measurements, evaluations — link to the activity claim via strong references (AT-URI + CID). 
The CID is a content hash: if the referenced record changes, the hash won't match, making the entire structure tamper-evident. This is the same pattern described in the Core Data Model. As the hypercert grows over time, third parties can add measurements, evaluations, and more attachments — each as a separate record referencing your activity claim. Create via the Scaffold App If you don't want to write code, the Scaffold app lets you create a full hypercert through a guided wizard. Visit hypercerts-scaffold.vercel.app to get started. Step 1 — Sign in Enter your ATProto handle (e.g. yourname.certified.app or yourname.bsky.social) on the sign-in screen. You'll be redirected to your PDS to authorize the app. Once you approve, you'll land on the home screen with your DID and display name visible. !Scaffold sign-in screen showing handle input field Enter your ATProto handle to authenticate via OAuth. Step 2 — Basic info Click \"Create a new hypercert\" on the home screen (or go directly to hypercerts-scaffold.vercel.app/hypercerts/create). This opens a multi-step wizar"
+ },
+ {
+ "path": "/getting-started/testing-and-deployment",
+ "title": "Testing & Deployment",
+ "description": "Test your integration, understand record constraints, and go live with confidence.",
+ "section": "Get Started",
+ "headings": [
+ "Local development",
+ "Set up a test PDS",
+ "Use test identities",
+ "Create and verify a test record",
+ "Clean up test data",
+ "Record constraints",
+ "Required fields",
+ "Datetime format",
+ "Strong references",
+ "String and array limits",
+ "Blob uploads",
+ "Common issues",
+ "Rate limits",
+ "Privacy",
+ "Records are public by default",
+ "Deletion and GDPR",
+ "Authentication in production",
+ "Going live checklist",
+ "See also"
+ ],
+ "body": "Testing & Deployment This page covers how to test your Hypercerts integration locally, the validation rules and constraints your records must satisfy, privacy considerations, and a checklist for going live. --- Local development Set up a test PDS Run a local PDS to avoid polluting production data. Self-host a test instance using the ATProto PDS distribution and follow the ATProto self-hosting guide. Point your ATProto client to the local instance instead of production: Use test identities Create dedicated test accounts — never use production identities for testing. When running a local PDS, you can create accounts with any handle. Each test identity gets its own DID and repository, isolating test data from production. Create and verify a test record Create a record using the same createRecord call from the Quickstart, then read it back to confirm it was stored correctly. The returned CID is a content hash — if the record changes, the CID changes, which is how you verify data integrity. Clean up test data Delete test records when you're done to keep your repository clean: Deletion removes the record from your PDS. Cached copies may persist in indexers temporarily. --- Record constraints The PDS itself is schema-agnostic — it will accept any record with a valid $type. Validation against lexicon schemas happens downstream: indexers and app views ignore or reject malformed records, and client libraries may validate before submission. To ensure your records are indexed and usable across the ecosystem, they should conform to the lexicon schemas. Required fields Every record type has required fields defined in its lexicon. Records missing required fields will be accepted by the PDS but ignored by indexers. See the lexicon reference for required fields per record type. Datetime format All datetime fields must use ISO 8601 format (e.g., 2026-01-15T00:00:00Z). 
Strong references When one record references another (e.g., an evaluation referencing an activity claim), the reference must include both the AT-URI and the CID. The CID is a content hash — if the referenced record is later modified, the CID won't match, making tampering detectable. If you need the current CID, fetch the record with getRecord before creating the reference. String and array limits Lexicon schemas define maximum lengths for strings (in bytes and Unicode grapheme clusters) and arrays. Check the lexicon reference for specific limits on each field. Blob uploads Blobs (images, documents, attachment files) are uploaded to the PDS separately from records. Size limits depend on the PDS implementation — check your PDS documentation for exact values. If your attachment files are too large for blob upload, store them externally (e.g., on IPFS or a public URL) and reference them by URI in the attachment record. Common issues | Issue | Cause | Fix | |-------|-------|-----| | Missing required field | Record omits a field the lexicon marks as required | Include all required fields — see the lexicon reference | | Invalid datetime | Datetime not in ISO 8601 format | Use format: 2026-01-15T00:00:00Z | | Invalid strong reference | Reference missing uri or cid | Include both fields — fetch the latest CID if needed | | String too long | String exceeds maxLength or maxGraphemes | Truncate or validate before submission | | Array too long | Array exceeds maxLength | Reduce array size | | Blob too large | File exceeds PDS size limit | Compress, split, or use external storage with a URI | --- Rate limits PDS implementations impose rate limits on API requests. Specific limits vary by PDS — check your provider's documentation. If you hit a rate limit, retry with exponential backoff. --- Privacy Records are public by default All ATProto records are public. Anyone can read records from any PDS. Never store sensitive personal data in hypercert records. 
Include in records: - Public work descriptions (e.g., \"Planted 500 trees in Borneo\") - Aggregated impact metrics (e.g., \"Reduced CO₂ by 50 tons\") - Public contributor identities (DIDs, handles) - Links to public attachments (URLs, IPFS CIDs) Keep off-protocol: - Personal contact information (email, phone, address) - Proprietary methodologies or trade secrets - Participant consent forms or private agreements - Raw data containing PII (personally identifiable information) Store sensitive data in a private database and reference it by ID if needed. Deletion and GDPR You can delete records from your PDS at any time. However: - Indexers (like Hyperindex) may cache records and take time to update - Other users may have already fetched and stored copies - The deletion event itself is visible in your repository history If you accidentally publish PII, delete the record immediately and contact indexer operators to request cache purging. --- Authentication in production Use OAuth for production applications. OAuth lets users authorize your app without sharing credentials. See the Quickstart for the authentication setup and the ATProto OA"
+ },
+ {
+ "path": "/getting-started/working-with-evaluations",
+ "title": "Working with Evaluations",
+ "description": "Learn how to evaluate hypercerts and build trust in the ecosystem.",
+ "section": "Get Started",
+ "headings": [
+ "Create an evaluation",
+ "Add measurements",
+ "Evaluation patterns",
+ "Trust and reputation"
+ ],
+ "body": "Working with Evaluations Evaluations are third-party assessments of hypercerts and other claims. They live on the evaluator's own PDS, not embedded in the original claim, and accumulate over time as different actors provide their perspectives. Create an evaluation The subject is a strong reference (AT-URI + CID) to the claim being evaluated. The evaluators array contains DIDs of those conducting the assessment. Add measurements Measurements provide quantitative data that supports your evaluation: The subject field is a strong reference (AT-URI + CID) to the claim being measured. You can then reference this measurement in an evaluation's measurements array (an array of strong references) to link quantitative data to your assessment. Evaluation patterns Expert review: Domain experts assess technical quality, methodology, and impact. Their DID becomes a portable credential — other projects can discover and trust evaluations from recognized experts. Community assessment: Multiple stakeholders provide independent evaluations. The diversity of evaluator DIDs creates a richer signal than any single assessment. Automated evaluation: Scripts and bots can publish evaluations based on API metrics, external data sources, or other programmatic checks. The evaluator DID identifies the automation system and its operator. Trust and reputation Every evaluation is signed by its creator's DID, creating accountability. Unlike anonymous reviews, evaluators build portable reputation across the ecosystem. A DID with a history of rigorous, accurate evaluations becomes a trusted signal. Projects can filter evaluations by evaluator identity, weight them differently, or build custom trust graphs based on their values and domain expertise. Evaluations are append-only. You can't delete someone else's evaluation of your work, and they can't delete yours. This creates a permanent, multi-perspective record of how claims are assessed over time."
+ },
+ {
+ "path": "/",
+ "title": "Hypercerts Documentation",
+ "description": "",
+ "section": "Get Started",
+ "headings": [
+ "Get started",
+ "Core concepts",
+ "Tools",
+ "Architecture",
+ "Reference",
+ "Ecosystem & Vision"
+ ],
+ "body": ""
+ },
+ {
+ "path": "/lexicons/certified-lexicons/badge-award",
+ "title": "Badge Award",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Badge Award app.certified.badge.award Awards a badge to a user, project, or activity claim. Each award references a badge definition and a subject — either an account DID (for user badges) or a strong reference to a specific record like an activity claim (for project or work badges). Badge awards can include an optional note explaining why the badge was given. This creates a public record of recognition that can be verified by anyone. Recipients can respond to badge awards using the badge response lexicon, allowing them to accept or reject badges and assign relative weights to accepted badges. This enables consent-based recognition where recipients have control over what badges appear on their profile. For the full schema, see app.certified.badge.award in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/certified-lexicons/badge-definition",
+ "title": "Badge Definition",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Badge Definition app.certified.badge.definition Defines a badge that can be awarded to users, projects, or activity claims. Each badge definition includes a title, description, icon, and badge type (like endorsement, participation, or affiliation). Badge definitions can optionally specify an allowlist of DIDs that are permitted to issue the badge. If no allowlist is provided, anyone can create badge awards using this definition. This enables both open badges (where anyone can award them) and restricted badges (where only specific issuers are authorized). Once created, a badge definition can be referenced by multiple badge award records. This separation allows the badge's visual identity and meaning to be defined once and reused many times. For the full schema, see app.certified.badge.definition in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/certified-lexicons/badge-response",
+ "title": "Badge Response",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Badge Response app.certified.badge.response A recipient's response to a badge award. Recipients can accept or reject badges, and can optionally assign a relative weight to accepted badges to indicate their importance or relevance. Badge responses are created in the recipient's own repository, making them independently verifiable. This enables consent-based recognition — rather than assuming that receiving a badge implies acceptance, responses make consent explicit. The weight field allows recipients to curate their badge display, emphasizing badges they consider most significant. Applications can use these weights to determine how prominently to display different badges on a user's profile. For the full schema, see app.certified.badge.response in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/certified-lexicons/index",
+ "title": "Certified Lexicons",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Certified Lexicons These lexicons live in the app.certified namespace. They provide shared data structures for identity, profiles, badges, and location — building blocks that other, more specialized lexicons can reference. | Lexicon | NSID | Description | |---------|------|-------------| | Shared Definitions | app.certified.defs | Common type definitions: the did type | | Location | app.certified.location | Geographic location representation using the Astral Location Protocol | | Profile | app.certified.actor.profile | Account profile with display name, description, avatar, and banner | | Badge Definition | app.certified.badge.definition | Defines a badge type with title, icon, and optional issuer allowlist | | Badge Award | app.certified.badge.award | Awards a badge to a user, project, or activity claim | | Badge Response | app.certified.badge.response | Recipient accepts or rejects a badge award |"
+ },
+ {
+ "path": "/lexicons/certified-lexicons/location",
+ "title": "Location",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Location app.certified.location A location record implementing the Astral Location Protocol — a standardized framework for creating, sharing, and verifying location information in decentralized systems. This lexicon is the ATProto implementation of that spec. Locations can be represented in multiple formats: decimal coordinates, GeoJSON, H3 indices, geohashes, etc. Each record includes a Spatial Reference System (SRS) URI so coordinate values are unambiguous. See the location type registry for the full list of supported formats. For the full schema, see app.certified.location in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/certified-lexicons/profile",
+ "title": "Profile",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Profile app.certified.actor.profile A user profile in the Certified ecosystem. This is a singleton record — each account has exactly one, stored with the key self. Includes display name, avatar, banner image, description, website URL, and pronouns. For the full schema, see app.certified.actor.profile in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/certified-lexicons/shared-defs",
+ "title": "Shared Definitions",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Shared Definitions app.certified.defs Common type definitions used across the certified ecosystem. Currently defines the did type, which represents a Decentralized Identifier. The did type is used throughout the certified and hypercerts lexicons to reference accounts and actors. It ensures consistent formatting and validation of DIDs across all record types. For the full schema, see app.certified.defs in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/acknowledgement",
+ "title": "Acknowledgement",
+ "description": "",
+ "section": "Reference",
+ "headings": [
+ "Definition",
+ "Use case: contributor acknowledges inclusion in an activity",
+ "Use case: funder confirms a funding receipt",
+ "Use case: rejecting an evaluation"
+ ],
+ "body": "Acknowledgement org.hypercerts.context.acknowledgement Definition An acknowledgement is a record expressing acceptance or rejection of a relationship. For example, a contributor might acknowledge their inclusion in an activity claim, or a funder might confirm a funding receipt. Acknowledgements are created in the acknowledging actor's own repository, making them independently verifiable. Each acknowledgement references the record being acknowledged, specifies whether it's accepted or rejected, and can include an optional note explaining the decision. This enables consent-based relationships in the hypercerts ecosystem. Rather than assuming that being listed as a contributor or funder implies agreement, acknowledgements make consent explicit and auditable. They also enable dispute resolution — if someone is incorrectly listed, they can create a rejection acknowledgement. Use case: contributor acknowledges inclusion in an activity Alice creates an activity claim about her research project and lists Einstein as a contributor. Einstein needs to confirm he actually participated — otherwise anyone could claim he worked on their project. 1. Alice creates an org.hypercerts.claim.activity record on her PDS, listing Einstein in the contributors array 2. Einstein sees he's been listed (via an indexer or notification) 3. Einstein creates an org.hypercerts.context.acknowledgement on his own PDS with: - subject → strong reference to Alice's activity claim - acknowledged → true - comment → \"Confirming my role as theoretical physics advisor\" If Einstein never actually contributed, he'd set acknowledged to false instead — creating a visible rejection that indexers and applications can surface. No more claiming Einstein co-authored your term paper. Use case: funder confirms a funding receipt A grant platform creates a funding receipt recording that Funder X contributed $5,000 to a project. Funder X wants to confirm the receipt is accurate. 1. 
The platform creates an org.hypercerts.funding.receipt record 2. Funder X creates an org.hypercerts.context.acknowledgement on their own PDS with: - subject → strong reference to the funding receipt - acknowledged → true - comment → \"Confirmed — payment processed via our grants program\" If the amount or details were wrong, the funder sets acknowledged to false with a comment explaining the discrepancy. Use case: rejecting an evaluation A project lead disagrees with an evaluation of their work and wants to flag it. 1. An evaluator creates an org.hypercerts.context.evaluation referencing the project's activity claim 2. The project lead creates an acknowledgement with: - subject → strong reference to the evaluation - acknowledged → false - comment → \"This evaluation references outdated data — see our updated measurements\" The rejection doesn't delete the evaluation — it creates a counter-signal that applications can use to present both sides. For the full schema, see org.hypercerts.context.acknowledgement in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/activity-claim",
+ "title": "Activity Claim",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Activity Claim org.hypercerts.claim.activity The activity claim is the core hypercert record — the anchor point for all impact tracking. It represents a durable, referenceable statement that a clearly bounded piece of work is planned, ongoing, or completed. An activity claim defines work using four core dimensions: who performed it, what was done, when it happened, and where it took place. By making work precisely scoped and inspectable, activity claims become stable reference points that others can build on. Contributions, attachments, measurements, evaluations, and funding receipts all reference back to an activity claim. The activity claim supports rich descriptions with text formatting, cover images, contributor lists with weights and roles, work scope definitions, time ranges, location references, and rights declarations. It's designed to be flexible enough to represent everything from open-source software maintenance to forest stewardship to research projects. For the full schema, see org.hypercerts.claim.activity in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/attachment",
+ "title": "Attachment",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Attachment org.hypercerts.context.attachment An attachment links supporting documentation to one or more existing records. It can reference files, documents, external URIs, or IPFS content, and can attach to any record type via strong references. Attachments are useful for providing evidence, documentation, or additional context for hypercerts. For example, you might attach a research paper to an activity claim, link a GitHub repository to a software project, or include photos documenting environmental work. Each attachment includes a title, optional description, the URI or content reference, and an array of strong references to the records it supports. For the full schema, see org.hypercerts.context.attachment in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/collection",
+ "title": "Collection",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Collection org.hypercerts.collection A collection groups activity claims into a named set. A single hypercert can belong to multiple collections at the same time — for example, one collection representing a multi-year reforestation program, another curating a funder's portfolio, and a personal \"favorites\" list. Collections make it easy to organize and surface hypercerts in different contexts without duplicating data. Each collection has a title, description, and an array of strong references to the claims it contains. For the full schema, see org.hypercerts.collection in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/contribution",
+ "title": "Contribution",
+ "description": "",
+ "section": "Reference",
+ "headings": [
+ "Contributor Information",
+ "Contribution",
+ "Choosing contribution weights",
+ "Equal split",
+ "Role-based multipliers",
+ "Activity-based (git signals)",
+ "Peer assessment",
+ "Outcome-based",
+ "Composite"
+ ],
+ "body": "Contribution This page covers two related lexicons that work together to represent contributors and their contributions. Contributor Information org.hypercerts.claim.contributorInformation Stores identity information for a contributor: display name, identifier (like a GitHub username or email), and an optional profile image. This record can be created once and referenced from multiple activity claims, making it easy to maintain consistent contributor identity across projects. A contributor doesn't need a full contributorInformation record. The activity claim's contributors array accepts either a strong reference to a contributorInformation record or an inline identity string (typically a DID). Use the inline form for simple cases; use the record when you want a reusable profile with a display name and image. For the full schema, see org.hypercerts.claim.contributorInformation in the lexicon repo. Contribution org.hypercerts.claim.contribution Stores details about a specific contribution, including the contributor's role, a description of what they did, and the timeframe of their work. Like contributor identity, contribution details can be provided inline (as a role string) or as a strong reference to a separate record. The activity claim's contributors array also supports contribution weights to indicate relative effort or impact. Choosing contribution weights Weights are always proportional. The protocol stores them as strings and does not enforce any particular calculation method. How you arrive at the numbers is up to you. The following are just examples — pick whatever approach fits your project, or invent your own. Equal split Every contributor gets weight \"1\". Works well for collaborative work where contributions are hard to separate, or small teams where everyone contributed roughly equally. 
Example: 4 contributors each with contributionWeight: \"1\" Role-based multipliers Assign a base multiplier per role: | Role | Weight | |------|--------| | Lead/Creator | 6 | | Core contributor | 4 | | Reviewer/Advisor | 2 | | Minor contributor | 1 | Example: Lead author \"6\", two core contributors \"4\" each, one reviewer \"2\", one minor contributor \"1\" Activity-based (git signals) Derive weights from repository data: commits, lines changed, PRs merged, issues closed. Caveat: biased toward code-heavy contributions; does not capture design, coordination, or review work well. Example: Alice (450 commits) \"45\", Bob (350 commits) \"35\", Carol (200 commits) \"20\" Peer assessment Each contributor receives a fixed budget of points (e.g. 100) to distribute among peers (not themselves). Final weight = total points received. Caveat: requires active participation from all contributors. Example: After a round, Alice receives 140 points → \"140\", Bob receives 90 → \"90\", Carol receives 70 → \"70\" Outcome-based Weight by measurable deliverables: features shipped, milestones hit, KPIs moved. Caveat: hardest to quantify fairly; risk of rewarding easily-measured work over important-but-hard-to-measure work. Example: Alice delivered 3 milestones → \"3\", Bob delivered 1 → \"1\" Composite Combine multiple signals — for example, mix role-based multipliers with peer assessment scores. Weight each signal by how much it should matter, then add them up. Example: A team weights roles at 40% and peer scores at 60%, normalizing each signal to shares first. Alice (lead, peer score 140) gets 0.4 × (6/14) + 0.6 × (140/300) ≈ 0.45 → \"45\", Bob (core, peer score 90) gets ≈ 0.29 → \"29\", Carol (core, peer score 70) gets ≈ 0.25 → \"25\". Weights are stored as strings and do not need to sum to any particular value. These examples all produce relative values — what matters is the ratio between contributors, not the absolute numbers. For the full schema, see org.hypercerts.claim.contribution in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/evaluation",
+ "title": "Evaluation",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Evaluation org.hypercerts.context.evaluation An evaluation is a structured assessment of a hypercert or other record. Third parties — auditors, peer reviewers, grant programs — use evaluations to assess whether claimed work actually happened and had the stated impact. It includes the evaluator's DID, a summary of their assessment, an optional numeric score, and can reference measurements that informed the evaluation. An evaluation can reference multiple records (for example, assessing a collection and all its constituent claims), and can cite specific measurements as evidence. This creates a traceable chain from raw data through measurements to final assessments. For the full schema, see org.hypercerts.context.evaluation in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/funding-receipt",
+ "title": "Funding Receipt",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Funding Receipt org.hypercerts.funding.receipt A funding receipt records a funding payment from one party to another. It's typically created by a facilitator or funding platform, but can be created by either the funder or recipient. Each receipt includes the funder's DID, the recipient's DID, the amount and currency, the payment rail used (like bank transfer, crypto, or grant platform), and an optional transaction ID for verification. Receipts can reference the activity claims or collections they fund, creating a traceable link from funding to impact work. Funding receipts enable transparent funding tracking and make it possible to see who funded what work. They're designed to be simple and flexible enough to represent everything from traditional grants to crypto payments to in-kind contributions. For the full schema, see org.hypercerts.funding.receipt in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/index",
+ "title": "Hypercerts Lexicons",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Hypercerts Lexicons These lexicons define the record types for tracking impact work in the org.hypercerts namespace. The activity claim is the central record — all other types attach to it. | Lexicon | NSID | Description | |---------|------|-------------| | Activity Claim | org.hypercerts.claim.activity | The core hypercert record — who did what, when, where | | Contribution | org.hypercerts.claim.contributorInformation, org.hypercerts.claim.contribution | Contributor identity and contribution details (two lexicons) | | Attachment | org.hypercerts.context.attachment | Supporting documentation — URLs, files, IPFS links | | Measurement | org.hypercerts.context.measurement | Quantitative data attached to a claim | | Evaluation | org.hypercerts.context.evaluation | Third-party assessment of a claim | | Collection | org.hypercerts.collection | Groups activity claims and/or other collections into a project | | Rights | org.hypercerts.claim.rights | Rights associated with a hypercert | | Funding Receipt | org.hypercerts.funding.receipt | Records a funding payment from one party to another | | Acknowledgement | org.hypercerts.context.acknowledgement | Acceptance or rejection of a relationship |"
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/measurement",
+ "title": "Measurement",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Measurement org.hypercerts.context.measurement A measurement records a single quantitative data point related to a hypercert or other record. It captures what was measured (the metric), the unit of measurement, the measured value, and when the measurement was taken. Measurements support methodology traceability through optional fields for method type, method URI (linking to a detailed methodology), and evidence URIs (linking to raw data or supporting documentation). This makes measurements auditable and reproducible. For the full schema, see org.hypercerts.context.measurement in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/hypercerts-lexicons/rights",
+ "title": "Rights",
+ "description": "",
+ "section": "Reference",
+ "headings": [],
+ "body": "Rights org.hypercerts.claim.rights Describes the rights associated with a hypercert — whether it can be sold, transferred, and under what conditions. Rights are defined as a separate record and referenced from an activity claim, so the same rights definition can be reused across multiple claims. For the full schema, see org.hypercerts.claim.rights in the lexicon repo."
+ },
+ {
+ "path": "/lexicons/introduction-to-lexicons",
+ "title": "Introduction to Lexicons",
+ "description": "",
+ "section": "Reference",
+ "headings": [
+ "What is a lexicon?",
+ "Lexicon Categories"
+ ],
+ "body": "Introduction to Lexicons What is a lexicon? An ATProto lexicon is essentially a schema or template that defines what data can be stored and how it should be structured. Think of it like a form with specific fields - it tells you what information is required, what's optional, and what format each piece of data should follow. Lexicon Categories All lexicons follow the principle that \"everything is a claim\" - whether it's a hypercert, a measurement, or an attachment, each represents a verifiable assertion stored on the ATProto network. This creates a composable system where claims can reference and build upon each other while maintaining clear data structures and relationships. Certified Lexicons provide foundational building blocks that can be shared across multiple protocols. These include common data types, standardized location references, profiles, badges, and other universal concepts that extend beyond hypercerts alone. Hypercerts Lexicons contain the core claim types specific to impact tracking. These lexicons define how to structure and relate different types of impact claims. The central record is the activity claim (the hypercert itself), which lives in the org.hypercerts.claim namespace. Supporting records like measurements, attachments, evaluations, and acknowledgements live in the org.hypercerts.context namespace, enabling anyone to add context to existing claims."
+ },
+ {
+ "path": "/reference/faq",
+ "title": "FAQ",
+ "description": "Common questions about building with Hypercerts.",
+ "section": "Reference",
+ "headings": [],
+ "body": "FAQ --- What is a hypercert? A structured digital record of a contribution — who did what, when, where, and with what supporting documentation. You create one by writing an org.hypercerts.claim.activity record to your PDS. See the Quickstart. How is this different from the previous (EVM-based) Hypercerts? The new protocol stores data on AT Protocol instead of purely on-chain. This gives you richer schemas, data portability, and lower costs. On-chain anchoring for funding is planned but not yet implemented. Do I need a blockchain wallet? Not to create or evaluate hypercerts — you only need an account on certified.app or any ATProto provider. A wallet will be needed for on-chain funding once the tokenization layer is built. Can I use my Bluesky account? Yes. Bluesky accounts are ATProto accounts. Your existing DID and identity work with Hypercerts out of the box. Is my data public? Yes. All records are public by default. Do not store sensitive personal information in hypercert records. See the privacy section in Testing & Deployment for guidance on what to include and what to keep off-protocol. Can I delete a hypercert? You can delete records from your account. However, cached copies may persist in indexers temporarily. Once data is published, treat it as potentially permanent. Who can evaluate my hypercert? Anyone with an ATProto account. Evaluations are separate records created by the evaluator, linked to your hypercert via a strong reference. You don't control who evaluates your work. See Working with Evaluations. How do I query hypercerts across the network? Use the Hyperindex GraphQL API at https://api.hi.gainforest.app/graphql. It indexes all hypercert records across the network and supports filtering, search, and real-time subscriptions. How do I fund a hypercert? The on-chain funding layer is not yet implemented. The planned design freezes records before funding to ensure funders know exactly what they are paying for. See Funding & Value Flow. 
Where do I get help? - GitHub — source code, issues, and discussions - Roadmap — what's being built and what's next"
+ },
+ {
+ "path": "/reference/glossary",
+ "title": "Glossary",
+ "description": "Key terms used in the Hypercerts documentation.",
+ "section": "Reference",
+ "headings": [],
+ "body": "Glossary --- Hypercert A structured digital record of a contribution: who did what, when, where, and with what evidence. The core primitive of the protocol. Technically, a hypercert is an activity claim record with linked contributions, attachments, measurements, and evaluations. Activity claim The central record in the hypercerts data model. Describes the work that was done, when, and in what scope. AT-URI The permanent, globally unique identifier for a record. Looks like at://did:plc:abc123/org.hypercerts.claim.activity/3k7. You'll see these in every API response — they're how records reference each other. DID (Decentralized Identifier) A permanent identifier for a user or organization. Looks like did:plc:abc123xyz. You get one when you create an account on certified.app or Bluesky. Your DID never changes, even if you switch servers or handles. Every record you create carries your DID as the author. Strong reference A reference to another record that includes both the AT-URI and a content hash (CID). Used when one record points to another (e.g., an evaluation referencing an activity claim). The CID makes the reference tamper-evident — if the target record changes, the hash won't match. Evaluation A third-party assessment of a hypercert. Created on the evaluator's own account, not the original author's. Attachment Supporting documentation linked to one or more records — a URL, uploaded file, or IPFS link. Measurement A quantitative observation attached to a hypercert (e.g., \"12 pages written\", \"50 tons CO₂ reduced\"). Contribution Who contributed to a hypercert. Can be as simple as a DID string, or a richer record with display name, image, role, and timeframe. Collection A named group of hypercerts and/or other collections, with an optional weight per item. Each collection has a type (e.g., \"favorites\", \"project\") so the same hypercert can appear in different collections for different purposes. 
Lexicon A versioned schema that defines the structure of a record type. Lexicons enable interoperability — any app that knows the schema can read the record. See Introduction to Lexicons. Work scope The \"what\" dimension of a hypercert — what work is being claimed. Can be a simple free-text string or a structured CEL expression for machine-evaluable scopes. See Work Scopes. Hyperindex A reference indexer that indexes hypercert records across the network and exposes them via a GraphQL API. Other indexers exist — see Indexers & Discovery. PDS (Personal Data Server) The server where your records are stored. You interact with it through the ATProto API — you don't need to manage it directly. You can use the Hypercerts Foundation's PDS, Bluesky's, or self-host one. Records are portable between PDS instances. Certified The identity provider for the Hypercerts ecosystem. We built Certified to give the ecosystem a unified entry point — one account that works across all Hypercerts applications. When you sign up at certified.app, you get a DID, a PDS, and an embedded wallet. See Account & Identity Setup."
+ },
+ {
+ "path": "/roadmap",
+ "title": "Roadmap",
+ "description": "Development priorities, infrastructure components, and phased delivery plan for Hypercerts v0.2.",
+ "section": "Reference",
+ "headings": [
+ "Infrastructure overview",
+ "Build priorities",
+ "Lexicons 🔵",
+ "Lexicons",
+ "Data layer",
+ "Hypercerts PDS",
+ "Network & streaming layers",
+ "Current approach ⬜",
+ "Future options 🔵",
+ "Moderation layer 🟢",
+ "Evaluators",
+ "Hypercerts Labeller",
+ "Application layer",
+ "Hyperindex AppView 🔵",
+ "Frontends 🟢",
+ "Bridge layer 🔵",
+ "IdentityLink",
+ "StorageLink",
+ "EVM Indexer",
+ "AI agent readiness",
+ "Governance: the Hypercerts Collective",
+ "Lexicon Indexing Requests (LIRs)",
+ "Infrastructure transitions",
+ "Development phases",
+ "Phase 1: Make it work (current)",
+ "Phase 2: Make it right (next)",
+ "Phase 3: Make it fast (future)",
+ "Next steps",
+ "Links & resources"
+ ],
+ "body": "Roadmap This page outlines the infrastructure components, build priorities, and phased delivery plan for Hypercerts v0.2 — a decentralized system for tracking, evaluating, and funding impactful work on the open internet. This roadmap reflects the current plan and is subject to change as we gather feedback from real users and contributors. Status labels indicate where each component stands today. --- Infrastructure overview The Hypercerts infrastructure divides into six functional areas. Each area has a different ownership model: | Color | Meaning | |-------|---------| | 🔵 Blue | Hypercerts Infrastructure — core protocol components this roadmap covers | | 🟢 Green | Hypercerts Collective Infrastructure — community-governed components | | ⬜ Grey | Third-party Infrastructure — leverage existing services, don't rebuild | --- Build priorities | Component | Priority | Status | Owner | |-----------|----------|--------|-------| | Lexicons | 🔴 Critical | Active | Hypercerts Foundation | | Hyperindex AppView | 🔴 Critical | Active | GainForest → Foundation | | IdentityLink | 🟠 High | Live | GainForest → Foundation | | EVM Indexer | 🟠 High | Planned | TBD | | Evaluators | 🟠 High | Active | Hypercerts Collective | | Hypercerts Labeller | 🟠 High | Active | Hypercerts Collective | | AI-Ready Docs | 🟠 High | In Progress | All | | StorageLink | 🟡 Medium | Planned | TBD | | Hypercerts PDS | ⚪ Low | Optional | — | | Hypercerts Relayer / Firehose / Jetstream | ⚪ Low | Future | — | --- Lexicons 🔵 Lexicons Priority: 🔴 Critical · Status: Active · Owner: Hypercerts Foundation Lexicons are the schema definitions that specify the structure of all Hypercerts records on ATProto. They are the \"contracts\" that define how data is structured, validated, and referenced across the network. Without standardized schemas, interoperability between applications would be impossible. app.certified.* 
— Certified application lexicons | Lexicon | Purpose | |---------|---------| | app.certified.badge.definition | Badge type definitions | | app.certified.badge.award | Badge awards to users | | app.certified.location | Georeferenced location data | | app.certified.link.* | Identity attestations (DID to EVM) | org.hypercerts.* — Hypercerts protocol lexicons | Lexicon | Purpose | |---------|---------| | org.hypercerts.claim.activity | Core hypercert records tracking impact work | | org.hypercerts.claim.evaluation | Evaluations of activities by third parties | | org.hypercerts.claim.measurement | Quantitative data attached to claims | | org.hypercerts.collection | Projects/portfolios grouping multiple activities | | org.hypercerts.claim.attachment | Attachments, reports, documentation | | org.hypercerts.funding.receipt | Funding flow records | Migration: org.impactindexer.link.attestation will migrate to app.certified.link.*. Resources: - Repository: github.com/hypercerts-org/hypercerts-lexicon - Documentation: impactindexer.org/lexicon --- Data layer Hypercerts PDS Priority: ⚪ Low · Status: Optional Personal Data Servers operated by the Hypercerts ecosystem that host user repositories containing hypercert records. Why we're not prioritizing this: Users can use any ATProto PDS (Bluesky's, self-hosted, community-operated). The data layer is solved. However, dedicated Hypercerts PDSs could eventually provide guaranteed uptime for impact-critical data, support for larger blobs, federation across multiple servers, and organizational sovereignty. | PDS | Type | Example users | |-----|------|---------------| | pds.bsky.app | Third-party (Bluesky) | General users | | pds1.certified.app | Optional Hypercerts | climateai.org, daviddao.org, hyperboards.org | | pds2.certified.app | Optional Hypercerts | Additional users | --- Network & streaming layers Current approach ⬜ We subscribe to Bluesky's relay and filter for org.hypercerts.* and app.certified.* collections. 
Bluesky's Jetstream provides filtered JSON streams by collection. This is sufficient for current needs. Future options 🔵 Priority: ⚪ Low · Status: Future If needed later, dedicated Hypercerts infrastructure could include: - Hypercerts Relayer — Aggregation service that crawls PDSs hosting hypercert records, validates cryptographic signatures, and provides a unified event stream. - Hypercerts Firehose — WebSocket endpoint providing real-time stream of all hypercert-related commits in CBOR format. - Hypercerts Jetstream — Filtered, JSON-formatted stream allowing consumers to subscribe only to specific collections with reduced bandwidth and lower resource requirements. These are not needed today. Bluesky's relay (bsky.network) and Jetstream handle this. We will revisit if the network outgrows third-party infrastructure. --- Moderation layer 🟢 Community-governed evaluation and labeling infrastructure, managed by the Hypercerts Collective. Evaluators Priority: 🟠 High · Status: Active · Owner: Hypercerts Collective Services producing structured evaluations for hypercert records. Types include AI-pow"
+ },
+ {
+ "path": "/tools/hyperboards",
+ "title": "Hyperboards",
+ "description": "Visual contributor boards on ATProto — see who built what, beautifully.",
+ "section": "Tools",
+ "headings": [
+ "What it does",
+ "How it works",
+ "How weights become tile sizes",
+ "Embedding",
+ "Use cases",
+ "See also"
+ ],
+ "body": "Hyperboards Hyperboards creates visual contributor boards backed by ATProto data. Build a board from your hypercerts, customize the layout, and embed it anywhere. Live at hyperboards.org. Source: github.com/hypercerts-org/hyperboards. What it does Hyperboards turns hypercert data into shareable, embeddable visualizations. Each board is a treemap layout that shows contributors, their roles, and relative weights at a glance. - Treemap layouts — contributors are displayed as proportionally sized tiles, making contribution weights immediately visible - Drag-to-resize editing — adjust contributor tile sizes interactively to reflect their relative contributions - Embeddable — drop a board into any website with a simple iframe - ATProto-native — board data comes directly from signed ATProto records, not a separate database How it works 1. Sign in with your ATProto handle (e.g. yourname.certified.app or yourname.bsky.social) 2. Create a board from the dashboard — select one or more hypercerts to visualize 3. Customize the treemap layout by dragging and resizing contributor tiles 4. Share the board URL or embed it on your website via iframe Because boards pull data from ATProto repositories, the contributor information is cryptographically signed and publicly verifiable. Anyone can inspect the underlying records and confirm who contributed to a hypercert. How weights become tile sizes Hyperboards normalizes contributor weights into proportional tile areas: each contributor's tile area is their weight divided by the sum of all weights. For example, if three contributors have weights \"70\", \"20\", and \"10\": | Contributor | Weight | Calculation | Tile Area | |-------------|--------|-------------|-----------| | Alice | 70 | 70 / 100 | 70% | | Bob | 20 | 20 / 100 | 20% | | Carol | 10 | 10 / 100 | 10% | D3 recomputes tile positions from the weights on every render. Missing or invalid weights: Contributors without a contributionWeight default to a weight of 1. 
There are two fallback layers: (1) if contributionWeight is undefined or null, the string \"1\" is used; (2) if the string cannot be parsed as a number (e.g. empty string), the number 1 is used. Contributors are never excluded — they always appear on the board with at least a weight of 1. Drag-to-resize behavior: When you drag to resize tiles in the editor, this directly updates the contributionWeight stored in the contributor's ATProto activity record on their PDS. There is no separate layout layer — the weight IS the layout. Weights are rounded to one decimal place on save. Choosing weights for visual clarity: For boards with many contributors, using percentage-style weights (summing to 100) makes the visual proportions intuitive. For boards with few contributors, simple multipliers (like \"3\", \"2\", \"1\") work well. For methods to calculate contributor weights, see Choosing contribution weights. Embedding Add a board to any website: Replace BOARD_ID with your board's identifier from the dashboard. Use cases Project pages — Embed a board on your project website to showcase your team. The treemap makes it easy to see who contributed and how much. Funding transparency — Show funders exactly who their contributions support. Contributors are displayed with verifiable ATProto-backed attribution. Impact portfolios — Contributors can link to boards as portable proof of their work. Because the underlying records are signed, the attribution is credible across platforms. Recognition — Publicly acknowledge contributors in a visual, shareable format that is not locked into any single platform. See also - Quickstart — build a hypercert with contributors that Hyperboards can display - Core Data Model — understand the record types behind a board - Scaffold Starter App — another example of building on ATProto with the Hypercerts protocol"
+ },
+ {
+ "path": "/tools/hypercerts-cli",
+ "title": "Hypercerts CLI",
+ "description": "A Go command-line tool for managing hypercerts on ATProto.",
+ "section": "Tools",
+ "headings": [
+ "Install",
+ "Authenticate",
+ "Core commands",
+ "All record types",
+ "Generic operations",
+ "Interactive UI"
+ ],
+ "body": "Hypercerts CLI The Hypercerts CLI (hc) is a command-line tool for managing hypercerts on ATProto. You can use it to: - Create, read, update, and delete all Hypercerts record types (activities, measurements, evaluations, attachments, and more) - Authenticate with any ATProto PDS - Run interactively with a terminal UI or non-interactively with flags for CI/CD - Resolve identities and inspect any record on the network Built using Go on bluesky-social/indigo with interactive forms powered by Charm libraries. Source: github.com/GainForest/hypercerts-cli. Install Choose one of three installation methods: 1. Quick install (recommended): 2. Go install (requires Go 1.23+): 3. Build from source: Authenticate Log in with your ATProto handle and app password: Check your session status: Log out: For CI/CD, set HYPER_USERNAME, HYPER_PASSWORD, and optionally ATP_PDS_HOST as environment variables. Core commands All record types follow the same CRUD pattern. Here's the full workflow using activities as the primary example: Deleting an activity also removes all linked measurements and attachments. Use the -f flag to skip confirmation. All record types Every record type supports create, ls, get, edit, delete with the same flag patterns shown above. 
| Command | Record Type | Alias | |---------|------------|-------| | hc activity | org.hypercerts.claim.activity | — | | hc measurement | org.hypercerts.context.measurement | hc meas | | hc location | app.certified.location | hc loc | | hc attachment | org.hypercerts.context.attachment | hc attach | | hc rights | org.hypercerts.claim.rights | — | | hc evaluation | org.hypercerts.context.evaluation | hc eval | | hc collection | org.hypercerts.collection | hc coll | | hc contributor | org.hypercerts.claim.contributorInformation | hc contrib | | hc funding | org.hypercerts.funding.receipt | hc fund | | hc workscope | org.hypercerts.workscope.tag | hc ws | The CLI is being updated to reflect the latest lexicon namespaces shown above; this migration is ongoing. Generic operations Inspect any record on the network using AT-URIs: List all records for a user: List records filtered by collection: Resolve a handle to a DID: Interactive UI When you run commands without flags, the CLI launches interactive forms with keyboard navigation, live preview cards during activity creation, multi-select for bulk deletes, and select-or-create patterns when linking records. The interactive UI requires a terminal that supports ANSI escape codes. For CI/CD environments, always use flags to run commands non-interactively."
+ },
+ {
+ "path": "/tools/hyperindex",
+ "title": "Hyperindex",
+ "description": "A Go ATProto indexer that indexes hypercert records and exposes them via GraphQL.",
+ "section": "Tools",
+ "headings": [
+ "How it works",
+ "Quick start",
+ "Register lexicons",
+ "Query via GraphQL",
+ "Endpoints",
+ "Running with Docker",
+ "Learn more"
+ ],
+ "body": "Hyperindex Hyperindex (hi) is a Go AT Protocol AppView server that indexes records and exposes them via GraphQL. Use it to: - Index all hypercert-related records from the ATProto network in real time - Query indexed data through a typed GraphQL API - Backfill historical records from any user or the entire network - Run your own indexer for full control over data availability and query performance Built in Go on bluesky-social/indigo. Source: github.com/hypercerts-org/hyperindex. How it works Hyperindex connects to the AT Protocol network via Jetstream (a real-time event stream). It watches for records matching your configured lexicons, parses them, and stores them in a queryable database (SQLite or PostgreSQL). It then exposes a GraphQL API for querying the indexed data. Quick start Open http://localhost:8080/graphiql/admin to access the admin interface. Register lexicons Lexicons define the AT Protocol record types you want to index. Register them via the Admin GraphQL API at /graphiql/admin: Or place lexicon JSON files in a directory and set the LEXICON_DIR environment variable. For hypercerts, you would register the org.hypercerts.claim.* lexicons — see Introduction to Lexicons for the full list. Query via GraphQL Access your indexed data at /graphql: Endpoints | Endpoint | Description | |---|---| | /graphql | Public GraphQL API | | /graphql/ws | GraphQL subscriptions (WebSocket) | | /admin/graphql | Admin GraphQL API | | /graphiql | GraphQL playground (public API) | | /graphiql/admin | GraphQL playground (admin API) | | /health | Health check | | /stats | Server statistics | Running with Docker Learn more - GitHub repository — source code, issues, and documentation - Indexers & Discovery — how indexers fit into the Hypercerts architecture - Building on Hypercerts — integration patterns for platforms and tools"
+ },
+ {
+ "path": "/tools/scaffold",
+ "title": "Scaffold Starter App",
+ "description": "A Next.js starter app for building on ATProto with the Hypercerts protocol.",
+ "section": "Tools",
+ "headings": [
+ "Tech Stack",
+ "What the app does",
+ "Sign in with ATProto",
+ "Home screen",
+ "Create a hypercert",
+ "Browse your hypercerts",
+ "Edit your profile",
+ "Environment variables",
+ "Run it locally",
+ "Architecture",
+ "OAuth Flow",
+ "Local Loopback Development",
+ "Server-Side Data Boundary",
+ "Constellation Backlinks",
+ "Project structure"
+ ],
+ "body": "Scaffold Starter App The Hypercerts Scaffold is a working Next.js app that demonstrates how to build on ATProto with the Hypercerts protocol. It handles OAuth authentication, profile management, and the full hypercert creation workflow — from basic claims through attachments, locations, measurements, and evaluations. Live at hypercerts-scaffold.vercel.app. Source: github.com/hypercerts-org/hypercerts-scaffold-atproto. The repo is also indexed on deepwiki if you want to dive deeper into the docs and setup. Tech Stack | Category | Technology | |----------|------------| | Framework | Next.js 16 (App Router), React 19, TypeScript | | Styling | Tailwind CSS 4, shadcn/ui (Radix primitives) | | State Management | TanStack React Query v5 | | Auth / Protocol | AT Protocol OAuth, @atproto/oauth-client-node | | Infrastructure | Redis (session + OAuth state storage) | What the app does Sign in with ATProto Enter your handle (e.g. yourname.certified.app or yourname.bsky.social) and the app redirects you to your PDS for OAuth authorization. Once approved, you're signed in with a session tied to your DID. Alternatively, the sign-in dialog has an Email tab (visible when NEXT_PUBLIC_EPDS_URL is configured). Entering your email authenticates via the ePDS — if no account is registered with that email, the ePDS creates one for you automatically. !Scaffold sign-in screen showing handle input field The sign-in screen. Enter your ATProto handle to authenticate via OAuth. Home screen After signing in, the home screen shows your active session — your DID, display name, and handle. From here you can create a new hypercert or view your existing ones. !Scaffold home screen showing session info and action buttons The authenticated home screen with session details and quick actions. Create a hypercert The creation flow is a 6-step wizard with a sidebar stepper that tracks your progress: Step 1 — Basic info. 
Title, description, work scope tags, start and end dates, and an optional cover image. This creates the core org.hypercerts.claim.activity record on your PDS. !Hypercert creation form showing title, description, work scope, and date fields Step 1: Define the basic claim — what work was done, when, and in what scope. Step 2 — Attachments. Attach supporting documentation — URLs, files, or descriptions that back up the claim. !Evidence form for attaching supporting documentation Step 2: Attach supporting documentation to back up the claim. Step 3 — Location. Add geographic context to the work — where it happened. !Location form for adding geographic context Step 3: Add location data to anchor the work geographically. Step 4 — Measurements. Add quantitative data — metrics, values, and measurement methods that make the impact concrete. !Measurement form for adding quantitative impact data Step 4: Add measurements to quantify the impact. Step 5 — Evaluations. Add third-party assessments of the work. !Evaluation form for adding third-party assessments Step 5: Add evaluations from third-party assessors. Step 6 — Done. Review the completed hypercert and create another or view your collection. !Completion screen showing the finished hypercert Step 6: The hypercert is created and stored on your PDS. Browse your hypercerts The hypercerts page shows all your claims in a card grid. Each card displays the title, description, creation date, work scope tags, and cover image. Click any card to view its full details. !Grid of hypercert cards showing titles, descriptions, and work scope tags Your hypercerts displayed as cards with metadata and work scope tags. Edit your profile The profile page lets you update your Certified profile — display name, bio, pronouns, website, avatar, and banner image. Changes are written directly to your PDS. !Profile editing form with display name, bio, and avatar fields Edit your Certified profile. Changes are stored on your PDS. 
Environment variables | Variable | Description | |----------|-------------| | NEXT_PUBLIC_BASE_URL | App URL (http://127.0.0.1:3000 for local) | | ATPROTO_JWK_PRIVATE | OAuth private key (generate with pnpm run generate-jwk) | | REDIS_HOST | Redis hostname | | REDIS_PORT | Redis port | | REDIS_PASSWORD | Redis password | | NEXT_PUBLIC_PDS_URL | PDS URL (e.g. https://pds-eu-west4.test.certified.app) | | NEXT_PUBLIC_EPDS_URL | ePDS URL (e.g. https://epds1.test.certified.app) (optional; required only for email/passwordless login) | Redis is the default session store, but you can use any persistent storage (Supabase, Postgres, DynamoDB, etc.). You just need to implement the NodeSavedStateStore and NodeSavedSessionStore interfaces from @atproto/oauth-client-node. See lib/redis-state-store.ts for the reference implementation. Run it locally 1. Clone and install: 2. Configure environment: 3. Start Redis (for session storage): 4. Run the dev server: Open http://127.0.0.1:3000. Requires Node.js 20+ and pnpm. > Note: Use 127.0.0.1 not localhost for local development. ATProto OAuth requires IP-based loopback addresses per R
+ }
+]
diff --git a/styles/globals.css b/styles/globals.css
index 2e2ea24..6feffc6 100644
--- a/styles/globals.css
+++ b/styles/globals.css
@@ -1665,6 +1665,29 @@ html.dark .search-overlay {
margin-top: 4px;
}
+.search-result-snippet {
+ font-size: 12px;
+ color: var(--color-text-secondary);
+ line-height: 1.5;
+ margin-top: 2px;
+ overflow: hidden;
+ display: -webkit-box;
+ -webkit-line-clamp: 2;
+ -webkit-box-orient: vertical;
+}
+
+.search-result-snippet mark {
+ background: oklch(0.85 0.15 85);
+ color: inherit;
+ border-radius: 2px;
+ padding: 0 2px;
+}
+
+html.dark .search-result-snippet mark {
+ background: oklch(0.45 0.12 85);
+ color: var(--color-text-primary);
+}
+
/* Mobile close button (hidden by default, shown in mobile media query) */
.sidebar-close-btn {
display: none;