This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Crier is a CLI tool for cross-posting content to multiple platforms. It reads markdown files with YAML or TOML front matter and publishes them via platform APIs. Designed to be used with Claude Code for automated content distribution.
```bash
# Install in development mode with dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run tests with coverage
pytest --cov=crier

# Run a single test
pytest tests/test_file.py::test_function -v

# Lint
ruff check src/

# Format check
ruff format --check src/
```

CLI Layer (`cli.py`): Click-based commands that orchestrate the workflow:
- `init` — Interactive setup wizard
- `publish` — Publish to platforms (supports `--dry-run`, `--profile`, `--manual`, `--rewrite`, `--auto-rewrite`, `--batch`, `--json`, `--schedule`, `--thread`, `--no-check`, `--strict`)
- `check` — Pre-publish content validation (supports `--to`, `--all`, `--json`, `--strict`, `--check-links`)
- `status` — Show publication status for files
- `audit` — Check what's missing (supports bulk operations with filters, `--batch`, `--json`, `--include-archived`, `--check`, `--failed`, `--retry`)
- `search` — Search and list content with metadata (supports `--tag`, `--since`, `--until`, `--sample`, `--json`)
- `delete` — Delete content from platforms (`--from`, `--all`, `--dry-run`)
- `archive`/`unarchive` — Exclude/include content from `audit --publish`
- `schedule` — Manage scheduled posts (`list`, `show`, `cancel`, `run`)
- `stats` — View engagement statistics (`--refresh`, `--top`, `--since`, `--json`, `--compare`, `--export`)
- `feed` — Generate RSS/Atom feeds from content files (`--format`, `--output`, `--limit`, `--tag`)
- `doctor` — Validate API keys (`--json`)
- `config` — Manage API keys, profiles, and content paths (`set`, `get`, `show`, `profile`, `path`, `llm`)
- `skill` — Manage Claude Code skill installation (deprecated; use the crier plugin from the queelius-plugins marketplace)
- `register`/`unregister` — Manual registry management
- `list` — List articles on a platform (default: registry; `--remote` for live API)
- `mcp` — Start MCP server for Claude Code integration (`--http` for SSE mode)
LLM Module (`llm/`): Optional auto-rewrite using OpenAI-compatible APIs:
- `provider.py`: Abstract `LLMProvider` interface and `RewriteResult` dataclass
- `openai_compat.py`: `OpenAICompatProvider` for OpenAI, Ollama, Groq, etc.
Platform Abstraction (`platforms/`):
- `base.py`: Abstract `Platform` class defining the interface (`publish`, `update`, `list_articles`, `get_article`, `delete`, `get_stats`, `publish_thread`) and core data classes (`Article`, `PublishResult`, `DeleteResult`, `ArticleStats`, `ThreadPublishResult`)
- `base.py` also provides `retry_request()` — centralized HTTP retry with exponential backoff, Retry-After header parsing, and retryable/non-retryable status code classification
- Platform capabilities: `supports_delete`, `supports_stats`, `supports_threads`, `thread_max_posts`
- Each platform implements the `Platform` interface; all use `self.retry_request()` instead of direct `requests.*()` calls
- `_discover_package_platforms()` in `__init__.py` auto-discovers built-in platforms by scanning `.py` files in the package (no hardcoded imports)
- `_discover_user_platforms()` loads user plugins from `~/.config/crier/platforms/`; user plugins override built-ins
- `PLATFORMS` registry in `__init__.py` maps platform names to classes (built-in + user plugins)
- Backward compat: `globals()` injection ensures `from crier.platforms import DevTo` etc. still work
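The retry behavior described above can be pictured roughly as follows. This is a hedged sketch, not the actual `base.py` implementation: the function name matches the docs, but the parameters, retryable-status set, and backoff constants are assumptions.

```python
import random
import time

# Assumption: the real classification in base.py may include more codes
RETRYABLE = {429, 502, 503, 504}

def retry_request(send, max_attempts=4, base_delay=1.0):
    """Sketch of centralized retry: exponential backoff with jitter,
    Retry-After header parsing, and retryable/non-retryable status
    classification. `send` is a zero-arg callable returning a
    response-like object with .status_code and .headers."""
    for attempt in range(max_attempts):
        resp = send()
        if resp.status_code not in RETRYABLE:
            return resp  # success or a non-retryable error: return as-is
        if attempt == max_attempts - 1:
            return resp  # out of retries
        # Honor Retry-After if the server sent one (numeric form only in
        # this sketch), else back off exponentially with a little jitter
        retry_after = resp.headers.get("Retry-After")
        if retry_after is not None:
            delay = float(retry_after)
        else:
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
```

Platform authors never call this directly with `requests`; per the list above, implementations go through `self.retry_request()` so the policy lives in one place.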
Scheduler (`scheduler.py`): Content scheduling for future publication:
- `ScheduledPost` dataclass for scheduled post data
- Schedule storage in `<site_root>/.crier/schedule.yaml`
- Natural language time parsing via `dateparser`
Checker (`checker.py`): Pre-publish content validation:
- `CheckResult` and `CheckReport` dataclasses for validation findings
- `check_article()` — Pure validation (no I/O): front matter, content, platform-specific checks
- `check_file()` — I/O wrapper that reads a file and calls `check_article()`
- `check_external_links()` — Optional external URL validation via HEAD requests
- Configurable severity overrides in the `checks:` section of `~/.config/crier/config.yaml`
- Integrated into `publish` (pre-publish gate) and `audit` (filter with `--check`)
Threading (`threading.py`): Thread splitting for social platforms:
- `split_into_thread()` splits content by manual markers, paragraphs, or sentences
- `format_thread()` adds thread indicators (numbered, emoji, or simple style)
- Used by the Bluesky and Mastodon `publish_thread()` implementations
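The splitting priority (markers first, then paragraph packing) can be sketched as below. This is illustrative only: the real `split_into_thread()` also falls back to sentence boundaries and respects per-platform `thread_max_posts`; the marker string, parameter names, and style handling here are assumptions.

```python
def split_into_thread(body, max_len=300, marker="<!-- thread -->"):
    """Sketch: split on explicit markers if present, otherwise pack
    paragraphs greedily into posts that fit within max_len."""
    if marker in body:
        return [part.strip() for part in body.split(marker) if part.strip()]
    posts, current = [], ""
    for para in body.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        candidate = f"{current}\n\n{para}".strip()
        if len(candidate) <= max_len:
            current = candidate
        else:
            if current:
                posts.append(current)
            # A paragraph longer than max_len would need the sentence
            # fallback the real implementation provides
            current = para
    if current:
        posts.append(current)
    return posts

def format_thread(posts, style="numbered"):
    """Sketch of the numbered style: '1/3 ...', '2/3 ...'."""
    n = len(posts)
    if style == "numbered" and n > 1:
        return [f"{i}/{n} {p}" for i, p in enumerate(posts, 1)]
    return posts
```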
Platform Categories (13 total):
- Blog: devto, hashnode, medium, ghost, wordpress
- Newsletter: buttondown
- Social: bluesky, mastodon, linkedin, threads, twitter (copy-paste mode)
- Announcement: telegram, discord
Config (`config.py`): Single global configuration:
- Global (`~/.config/crier/config.yaml`): ALL configuration — API keys, profiles, content paths, site settings
- `site_root` key locates the content project directory (e.g., `~/github/repos/my-blog`)
- No local `.crier/config.yaml` — no merge logic
- Precedence: global config < environment variables (`CRIER_{PLATFORM}_API_KEY`, `CRIER_DB`) < CLI args
- Supports composable profiles (profiles can reference other profiles)
Example global config structure:

```yaml
site_root: ~/github/repos/my-blog
site_base_url: https://example.com
content_paths:
  - content/post
  - content/note
file_extensions:
  - .md
  - .markdown
exclude_patterns:
  - _drafts/*
  - _index.md
default_profile: blogs
platforms:
  devto:
    api_key: sk-...
  bluesky:
    handle: user.bsky.social
    app_password: ...
profiles:
  blogs:
    platforms: [devto, hashnode, medium]
  social:
    platforms: [bluesky, mastodon]
checks:
  missing-tags: disabled
  missing-date: error
llm:
  api_key: sk-...
  model: gpt-4o-mini
```

Feed (`feed.py`): RSS/Atom feed generation from content files:
- `generate_feed()` — Builds RSS 2.0 or Atom XML from markdown files using `feedgen`
- `_collect_items()` — Parses files and applies tag/date filters
- Reuses `parse_markdown_file()`, `get_content_date()`, `get_content_tags()`
Registry (`registry.py`): SQLite database at `~/.config/crier/crier.db` (global, single file for all projects).
- Slug primary key derived from title via `python-slugify` (not canonical_url). The slug is stable; if the title changes, the slug stays.
- `canonical_url` is optional metadata, not the identity. A unique index allows lookup but it can be `NULL`.
- No content hashes. Change detection was removed entirely. If content is outdated, re-publish manually.
- `_resolve_slug(conn, key)` is the dispatcher for the dual-key API: every public function accepts either a slug or a canonical_url. It tries the cheap primary-key lookup first, then the canonical_url unique index.
- `get_or_create_slug(title, canonical_url, source_file)` finds or creates an article entry. Used everywhere a publication is recorded.
- `update_article_metadata(slug, title, source_file, canonical_url, section)` is the public wrapper for editing article metadata. The CLI's `link` command uses this; do NOT import the private `_update_article_metadata` from outside `registry.py`.
- `record_failure()`/`get_failures()`: failure rows have `platform_id IS NULL`. Most queries filter `WHERE platform_id IS NOT NULL AND deleted_at IS NULL` to get "real" publications; remember this when writing new SQL against the registry.
- UPSERT on `record_publication`/`record_thread_publication`/`save_stats`/`record_failure`. UPSERT (not `INSERT OR REPLACE`) is required because `INSERT OR REPLACE` deletes and reinserts the row, which cascade-deletes dependent stats and silently resets unspecified columns. The UPSERT clauses explicitly enumerate `deleted_at = NULL`, `last_error = NULL`, `is_thread`, `thread_ids`, `posted_content` so re-publishing a soft-deleted post resurrects it cleanly. See regression tests in `TestDeletionPreservesHistory`.
- `CRIER_DB` env var overrides the DB path (used for test isolation).
- Module-level `_connection` cache; `reset_connection()` is required between tests with different DBs.
- SQLite tables: `articles`, `publications`, `stats`, `schema_version`. WAL mode, foreign keys ON.
- YAML migration: `migrate_yaml_to_sqlite(yaml_path, db_path)` migrates a v2 YAML registry to the SQLite schema; renames the YAML to `.bak`.
MCP Server (`mcp_server.py`, ~1100 lines): Full CLI parity for Claude Code via Model Context Protocol.
- Started via `crier mcp` (stdio) or `crier mcp --http` (SSE)
- 17 tools in 4 categories:
  - Registry: `crier_query`, `crier_missing`, `crier_article`, `crier_publications`, `crier_record`, `crier_failures`, `crier_summary`, `crier_sql`
  - Content: `crier_search`, `crier_check`
  - Actions: `crier_publish`, `crier_delete`, `crier_archive`
  - Platform: `crier_list_remote`, `crier_doctor`, `crier_stats`, `crier_stats_refresh`
- 3 resources: `crier://schema`, `crier://config` (sanitized), `crier://platforms` (capabilities + modes)
- Two-step confirmation for destructive ops (`crier_publish`, `crier_delete`):
  - Step 1: call without `confirmation_token` to get a preview + token (5-min TTL)
  - Step 2: call with `confirmation_token` to execute
  - Critical invariant: step 2 treats the token as source of truth. All parameters (file, platform, rewrite_content, key, target_platforms) come from the token. Caller args on step 2 are ignored. This prevents a token-substitution bypass where a caller could get a token for operation A and use it to authorize operation B. See `_create_token`/`_consume_token` in `mcp_server.py`.
- `crier_sql` runs queries inside a `SAVEPOINT crier_sql_guard` that is always rolled back, so even non-SELECT statements have no effect (defense in depth on top of `startswith("SELECT")`).
- All tools return dicts (FastMCP serializes them); validation errors return `{"error": "..."}`.
- Tools use lazy imports (`from .X import Y` inside functions) for fast MCP startup and to avoid circular imports.
- Built on `mcp.server.fastmcp.FastMCP`.
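The two-step confirmation flow can be sketched as follows. Everything here is a simplified stand-in: the real `_create_token`/`_consume_token` live in `mcp_server.py`, and the storage, TTL handling, and tool signature are assumptions that only illustrate the token-as-source-of-truth invariant.

```python
import secrets
import time

_TOKENS = {}      # token -> (expiry, details); the real store may differ
TOKEN_TTL = 300   # 5-minute TTL, per the docs above

def _create_token(details):
    """Step 1: stash the full operation parameters, hand back a token."""
    token = secrets.token_urlsafe(16)
    _TOKENS[token] = (time.time() + TOKEN_TTL, dict(details))
    return token

def _consume_token(token):
    """Step 2: single-use lookup. Returns the stored details, or None
    if the token is unknown or expired."""
    entry = _TOKENS.pop(token, None)
    if entry is None:
        return None
    expiry, details = entry
    if time.time() > expiry:
        return None
    return details

def crier_delete(file, platform, confirmation_token=None):
    """Sketch of a destructive tool using the two-step pattern."""
    if confirmation_token is None:
        token = _create_token({"file": file, "platform": platform})
        return {"preview": f"would delete {file} from {platform}",
                "confirmation_token": token}
    details = _consume_token(confirmation_token)
    if details is None:
        return {"error": "invalid or expired confirmation token"}
    # Invariant: ALL parameters come from the token. The caller's
    # step-2 `file`/`platform` arguments are deliberately ignored,
    # which closes the token-substitution bypass.
    return {"deleted": details["file"], "platform": details["platform"]}
```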
Converters (`converters/markdown.py`): Parses markdown files with YAML or TOML front matter into `Article` objects. Automatically resolves relative links (e.g., `/posts/other/`) to absolute URLs using `site_base_url` so they work on cross-posted platforms.
Utils (`utils.py`): Shared pure utility functions (no CLI dependencies):
- `truncate_at_sentence()` — Smart text truncation at sentence/word boundaries
- `find_content_files()` — Discover content files using config paths/patterns
- `parse_date_filter()` — Parse relative/absolute date strings
- `get_content_date()`/`get_content_tags()` — Extract front matter metadata
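Sentence-boundary truncation can be sketched like this. The real `truncate_at_sentence()` in `utils.py` may differ in edge cases; the regex and the ellipsis fallback here are assumptions.

```python
import re

def truncate_at_sentence(text, limit):
    """Sketch: keep whole sentences while they fit within `limit`;
    if not even one sentence fits, cut at the last word boundary
    and append an ellipsis."""
    if len(text) <= limit:
        return text
    # Split into sentences on terminal punctuation followed by whitespace
    sentences = re.split(r"(?<=[.!?])\s+", text)
    out = ""
    for s in sentences:
        candidate = f"{out} {s}".strip()
        if len(candidate) > limit:
            break
        out = candidate
    if out:
        return out
    # No full sentence fits: cut at the last word boundary that fits
    cut = text[: limit - 1].rsplit(" ", 1)[0]
    return cut + "…"
```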
Rewrite (`rewrite.py`): Auto-rewrite orchestration for platform content adaptation:
- `auto_rewrite_for_platform()` — LLM retry loop with configurable retries and truncation fallback
- `AutoRewriteResult` dataclass for structured success/failure results
Skill (`skill.py`): Claude Code skill installation (deprecated). Loads `SKILL.md` from package resources and installs it to `~/.claude/skills/crier/`. Superseded by the crier Claude Code plugin available from the queelius-plugins marketplace.
Crier Plugin (separate repo at `~/github/alex-claude-plugins/crier/`): the user-facing Claude Code integration. Composed of:
- `skills/crier/SKILL.md`: judgment context (rewrite voice, platform culture). Intentionally short (~90 lines); MCP tools are self-describing, so the skill doesn't repeat the CLI reference.
- `commands/crier.md`: the `/crier` slash command for interactive workflows.
- `agents/cross-poster.md`: autonomous bulk-publishing agent. Calls MCP tools directly (not Bash).
- `agents/auditor.md`: read-only analysis agent (gap analysis, performance review, staleness, failure triage).
- `.mcp.json`: registers the `crier mcp` stdio server with the plugin.
- Dry run mode: Preview before publishing with `--dry-run`
- Publishing profiles: Group platforms (e.g., `--profile blogs`)
- Publication tracking: SQLite registry tracks what's been published where (slug-keyed)
- MCP server: `crier mcp` exposes the registry to Claude Code for queries and automation
- Audit & bulk publish: Find and publish missing content with `audit --publish`
- Bulk operation filters:
  - `--only-api` — Skip manual/import platforms
  - `--long-form` — Skip short-form platforms (bluesky, mastodon, twitter, threads)
  - `--tag <tag>` — Only include content with matching tags (case-insensitive, OR logic)
  - `--sample N` — Random sample of N items
  - `--include-archived` — Include archived content
  - `--since`/`--until` — Date filtering (supports `1d`, `1w`, `1m`, `1y` or `YYYY-MM-DD`)
- Manual mode: Copy-paste mode for platforms without API access (`--manual` or `api_key: manual`)
- Import mode: URL import for platforms like Medium (`api_key: import`)
- Rewrites: Custom short-form content with `--rewrite` for social platforms
- Auto-rewrite: LLM-generated rewrites with `--auto-rewrite` (requires LLM config)
- Batch mode: Non-interactive automation with `--batch` (implies `--yes --json`, skips manual platforms)
- JSON output: Machine-readable output with `--json` for CI/CD integration
- Doctor: Validate all API keys work (`--json` for scripting)
- RSS/Atom feeds: Generate feeds from content with `crier feed` (`--format atom`, `--output`, `--limit`, `--tag`)
- Retry & rate limiting: All platform API calls use centralized retry with exponential backoff (429, 502-504, timeouts)
- Error tracking: Failed publications are recorded and can be retried with `audit --retry`
- SQLite registry: WAL-mode SQLite with slug primary keys (no YAML, no content hashes)
- Stats comparison: `crier stats --compare` shows cross-platform engagement side-by-side
- Relative link resolution: Converts relative links (`/posts/other/`, `../images/`) to absolute URLs using `site_base_url`
- Delete/Archive: Remove content from platforms (`crier delete`) or exclude it from audit (`crier archive`)
- Scheduling: Schedule posts for future publication with `--schedule` or the `crier schedule` commands
- Analytics: Track engagement stats across platforms with `crier stats`
- Threading: Split long content into threads for Bluesky/Mastodon with `--thread`
Batch mode (`--batch`): Fully automated, non-interactive publishing for CI/CD:

```bash
# Batch mode implies --yes --json, skips manual/import platforms
crier publish article.md --to devto --to bluesky --batch
crier audit --publish --batch --long-form
```

JSON output (`--json`): Machine-readable output for parsing:

```bash
crier publish article.md --to devto --json
crier audit --json
```

Quiet mode (`--quiet`): Suppress non-essential output for scripting:

```bash
# Quiet mode suppresses progress/info messages
crier publish article.md --to devto --quiet
crier audit --publish --yes --quiet
crier search --tag python --quiet
```

Config access (`config get`): Read config values programmatically:

```bash
crier config get llm.model
crier config get platforms.devto.api_key
crier config get site_base_url --json
```

Non-interactive flags:
- `--yes`/`-y` — Skip confirmation prompts (available on `publish`, `audit --publish`, `register`)
- `--quiet`/`-q` — Suppress non-essential output (available on `publish`, `audit`, `search`)
Auto-rewrite (`--auto-rewrite`): LLM-generated short-form content:

```bash
# Configure LLM first (see LLM Configuration below)
crier publish article.md --to bluesky --auto-rewrite

# Preview rewrite with dry-run (invokes LLM, shows preview with char budget)
crier publish article.md --to bluesky --auto-rewrite --dry-run

# Disable auto-rewrite explicitly
crier publish article.md --to bluesky --no-auto-rewrite

# Retry up to 3 times if output exceeds character limit
crier publish article.md --to bluesky --auto-rewrite --auto-rewrite-retry 3
# Or use short form: -R 3

# Truncate at sentence boundary if all retries fail
crier publish article.md --to bluesky --auto-rewrite -R 3 --auto-rewrite-truncate

# Override temperature (0.0-2.0, higher=more creative)
crier publish article.md --to bluesky --auto-rewrite --temperature 1.2

# Override model for this publish
crier publish article.md --to bluesky --auto-rewrite --model gpt-4o
```

| Code | Meaning |
|---|---|
| 0 | Success - all operations completed |
| 1 | Failure - operation failed or validation error |
| 2 | Partial - some operations succeeded, some failed |
```bash
# Check exit code in scripts
crier publish article.md --to devto --batch
echo "Exit code: $?"

# Example: retry on partial failure
crier audit --publish --batch
if [ $? -eq 2 ]; then
    echo "Some platforms failed, retry needed"
fi
```

For `--auto-rewrite` to work:
Simplest: just set the `OPENAI_API_KEY` env var (defaults to gpt-4o-mini).

Or configure in `~/.config/crier/config.yaml`:
```yaml
# Minimal (defaults to OpenAI + gpt-4o-mini)
llm:
  api_key: sk-...

# Full config for Ollama/other providers
llm:
  base_url: http://localhost:11434/v1
  model: llama3

# Full config with retry and truncation defaults
llm:
  api_key: sk-...
  base_url: https://api.openai.com/v1
  model: gpt-4o-mini
  temperature: 0.7          # Default: 0.7 (0.0-2.0, higher=more creative)
  retry_count: 0            # Default: 0 (no retries)
  truncate_fallback: false  # Default: false (no hard truncation)
```

Set config via CLI:

```bash
crier config llm set temperature 0.9
crier config llm set retry_count 3
crier config llm set truncate_fallback true
```

View and test LLM config:
```bash
# View current LLM configuration
crier config llm show

# Test the LLM connection with a simple request
crier config llm test
```

Environment variables (override config):
- `OPENAI_API_KEY` — API key (auto-defaults to the OpenAI endpoint + gpt-4o-mini)
- `OPENAI_BASE_URL` — Custom endpoint (e.g., `http://localhost:11434/v1` for Ollama)
Filter order: path → date → platform mode → content type → tag → sampling
Bulk operation filters:
- `--only-api` — Skip manual/import platforms
- `--long-form` — Skip short-form platforms (bluesky, mastodon, twitter, threads)
- `--tag <tag>` — Only include content with matching tags (case-insensitive, OR logic)
- `--sample N` — Random sample of N items
- `--since`/`--until` — Date filtering (supports `1d`, `1w`, `1m`, `1y` or `YYYY-MM-DD`)
- `--date-source` — Filter by `frontmatter` (default) or `mtime`
```bash
# Fully automated batch mode
crier audit --publish --batch --long-form

# Typical bulk publish
crier audit --publish --yes --only-api --long-form

# Filter by tag (only technical posts)
crier audit --tag python --tag algorithms --only-api --publish --yes

# Sample recent content
crier audit --since 1m --sample 10 --publish --yes

# Date range
crier audit --since 2025-01-01 --until 2025-01-31 --publish --yes

# Path + filters
crier audit content/post --since 1w --only-api --long-form --publish --yes

# Combine tag filter with other filters
crier audit --tag machine-learning --since 1m --long-form --publish --yes
```

Search and explore content without publishing using `crier search`:
```bash
# List all content
crier search

# Filter by tag
crier search --tag python

# Filter by date
crier search --since 1m

# Combine filters
crier search content/post --tag python --since 1w

# JSON for scripting
crier search --tag python --json | jq '.results[].file'

# Sample random posts
crier search --sample 5
```

JSON output includes: file, title, date, tags, word count.
```bash
# Delete from specific platform
crier delete article.md --from devto

# Delete from all platforms
crier delete article.md --all

# Archive (exclude from audit --publish)
crier archive article.md

# Unarchive
crier unarchive article.md

# Include archived in audit
crier audit --include-archived
```

```bash
# Schedule a post for later
crier publish article.md --to devto --schedule "tomorrow 9am"

# Manage scheduled posts
crier schedule list
crier schedule show ID
crier schedule cancel ID
crier schedule run   # Publish all due posts
```

Schedule data is stored in `<site_root>/.crier/schedule.yaml`.
```bash
# Stats for all content
crier stats

# Stats for specific file
crier stats article.md
crier stats article.md --refresh   # Force refresh from API

# Top articles by engagement
crier stats --top 10
crier stats --since 1m --json

# Filter by platform
crier stats --platform devto
```

Stats are cached in the registry for 1 hour. Platforms with stats: devto (views, likes, comments), bluesky (likes, comments, reposts), mastodon (likes, comments, reposts), linkedin (likes, comments), threads (views, likes, replies, reposts).

```bash
# Compare engagement across platforms for same content
crier stats --compare

# Export stats to CSV
crier stats --export csv
```

```bash
# Generate RSS feed to stdout
crier feed

# Write to file
crier feed --output feed.xml

# Atom format
crier feed --format atom

# Filter and limit
crier feed --limit 10 --tag python
crier feed --since 1m --until 1w
```

Requires `site_base_url` to be configured. Uses the `feedgen` library for valid RSS 2.0 and Atom XML.
```bash
# View failed publications
crier audit --failed

# Re-attempt failed publications
crier audit --retry

# Preview what would be retried
crier audit --retry --dry-run

# JSON output for scripting
crier audit --failed --json
```

Failed publications are automatically recorded in the registry with error details and a timestamp. A successful re-publish clears the error.
```bash
# Split content into thread
crier publish article.md --to bluesky --thread

# Thread styles
crier publish article.md --to mastodon --thread --thread-style numbered  # 1/5, 2/5...
crier publish article.md --to bluesky --thread --thread-style emoji      # 🧵 1/5...
crier publish article.md --to mastodon --thread --thread-style simple    # No prefix
```

Thread splitting priority: manual markers (`<!-- thread -->`) → paragraph boundaries → sentence boundaries. Supported platforms: bluesky, mastodon.
```bash
# Check a single file
crier check article.md

# Check with platform context
crier check article.md --to bluesky --to devto

# Check all content
crier check --all

# Strict mode: warnings become errors
crier check article.md --strict

# Check external links (slow, opt-in)
crier check article.md --check-links

# JSON output
crier check article.md --json
```

Checks performed:
- Front matter: missing-title (error), missing-date (warning), future-date (info), missing-tags (warning), empty-tags (warning), title-length (warning), missing-description (info)
- Content: empty-body (error), short-body (warning), broken-relative-links (warning), image-alt-text (info)
- Platform-specific: bluesky-length (warning), mastodon-length (warning), devto-canonical (info)
- External: broken-external-link (warning, opt-in with `--check-links`)
Publish integration: Pre-publish checks run automatically. Use `--no-check` to skip, `--strict` to block on warnings.
Audit integration: Use `--check` with `--publish` to skip files that fail validation.
Configure severity overrides in `~/.config/crier/config.yaml`:
```yaml
checks:
  missing-tags: disabled  # Don't care about tags
  missing-date: error     # Promote to error
  short-body: disabled    # Allow short posts
```

These conventions are not visible from reading any single file. Violating them will cause regressions.
Token-as-source-of-truth (MCP destructive ops). In `crier_publish` and `crier_delete`, step 2 reads ALL operation parameters from the consumed token. Caller arguments on step 2 are intentionally ignored. This prevents a token-substitution bypass. If you add a new parameter to step 1, you MUST add it to the token's details dict and read it back in step 2. See `_create_token`/`_consume_token` in `mcp_server.py` and the `test_*_step2_token_overrides_caller_args` regression tests.
`_resolve_slug` dual-key dispatch. Every public registry function that takes a "key" parameter (named `canonical_url` for backwards compat) actually accepts either a slug or a canonical_url. Internally it goes through `_resolve_slug(conn, key)`, which tries the slug primary-key lookup first. Do not assume the parameter is one or the other; treat it as opaque.
Failure rows masquerade as publications. `record_failure` writes a row to the `publications` table with `platform_id IS NULL`. SQL queries that want "real" publications must filter `WHERE platform_id IS NOT NULL AND deleted_at IS NULL`. Forgetting this filter will count failed attempts as successful publications.
UPSERT, not `INSERT OR REPLACE`. `INSERT OR REPLACE` triggers `ON DELETE CASCADE` on the `stats` table and silently resets unlisted columns. Use `INSERT ... ON CONFLICT DO UPDATE SET` and explicitly list every column that should reset on conflict (`deleted_at = NULL`, `last_error = NULL`, etc.). Locked in by the `TestDeletionPreservesHistory` regression tests.
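The difference is easy to show with plain `sqlite3` against a simplified stand-in schema (NOT the real registry schema): UPSERT updates the conflicting row in place, while `INSERT OR REPLACE` deletes and reinserts it, firing the cascade on dependent stats rows.

```python
import sqlite3

def demo_upsert():
    """Returns the stats row count after an UPSERT and after an
    INSERT OR REPLACE on the same conflicting key."""
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute(
        "CREATE TABLE publications (id INTEGER PRIMARY KEY, slug TEXT UNIQUE, url TEXT)"
    )
    conn.execute(
        """CREATE TABLE stats (pub_id INTEGER
           REFERENCES publications(id) ON DELETE CASCADE, views INTEGER)"""
    )
    conn.execute("INSERT INTO publications (slug, url) VALUES ('my-post', 'https://a')")
    conn.execute("INSERT INTO stats VALUES (1, 42)")

    # UPSERT: the conflicting row is updated in place, stats survive
    conn.execute(
        """INSERT INTO publications (slug, url) VALUES ('my-post', 'https://b')
           ON CONFLICT(slug) DO UPDATE SET url = excluded.url"""
    )
    upsert_stats = conn.execute("SELECT COUNT(*) FROM stats").fetchone()[0]

    # INSERT OR REPLACE: row is deleted + reinserted with a new id,
    # so ON DELETE CASCADE silently wipes the dependent stats
    conn.execute(
        "INSERT OR REPLACE INTO publications (slug, url) VALUES ('my-post', 'https://c')"
    )
    replace_stats = conn.execute("SELECT COUNT(*) FROM stats").fetchone()[0]
    return upsert_stats, replace_stats
```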
Lazy imports in `mcp_server.py`. Top-level imports are kept minimal so MCP startup is fast (tool-list time matters). Most module imports happen inside tool functions. This is intentional, not laziness.
Article reconstruction via `dataclasses.replace`. When applying a rewrite to an `Article`, use `_apply_rewrite(article, content)` (in `mcp_server.py`) or `dataclasses.replace(article, body=..., is_rewrite=True)` directly. Manually constructing `Article(...)` will silently drop fields if `Article` ever grows new attributes.
Short-form platform detection via class attribute. `is_short_form_platform(name)` reads `PLATFORMS[name].is_short_form` (a class attribute on the `Platform` subclass). User plugins opt in by setting `is_short_form = True` on their class. Do NOT add a hardcoded set in `config.py`.
Registry path is global, content paths are per-site. `get_db_path()` returns `~/.config/crier/crier.db` (overridable via `CRIER_DB`). `get_content_paths()` returns paths relative to `get_project_root()` (which is `site_root` from config, not CWD). Don't conflate them.
No `.crier/registry.yaml`. The registry is SQLite. Old YAML registries can be migrated via `migrate_yaml_to_sqlite()` from `registry.py`.
- Create `platforms/newplatform.py` implementing the `Platform` abstract class
- Set class attributes: `name`, `description`, `max_content_length`, `supports_delete`, `supports_stats`, `supports_threads`
- Implement required methods: `publish`, `update`, `list_articles`, `get_article`
- Use `self.retry_request(method, url, **kwargs)` instead of direct `requests.*()` calls
- Optionally implement: `delete` → `DeleteResult`, `get_stats` → `ArticleStats`, `publish_thread` → `ThreadPublishResult`
- Register in `platforms/__init__.py` by adding to the `PLATFORMS` dict
- Update README.md with the API key format
Users can add custom platforms without modifying the crier source:
- Create the `~/.config/crier/platforms/` directory
- Drop in a `.py` file implementing `Platform` from `crier.platforms.base`
- Set the `name` class attribute (used as the platform identifier)
- Implement required methods: `publish`, `update`, `list_articles`, `get_article`
- Configure the API key: `crier config set platforms.<name>.api_key <key>`
User plugins are auto-discovered at import time. If a user plugin has the same name as a built-in, the user plugin wins. Files starting with `_` are skipped. Broken plugins warn but don't crash.

Discovery is handled by `_discover_user_platforms()` in `platforms/__init__.py`, which scans `USER_PLATFORMS_DIR = Path.home() / ".config" / "crier" / "platforms"`. Multiple `Platform` subclasses per file are supported. If the `name` attribute is not overridden (still `"base"`), the lowercase class name is used instead.
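The discovery mechanism can be sketched with `importlib`. This is an assumption-laden stand-in for `_discover_user_platforms()`: the function name, warning style, and fallback logic only mirror the behavior described above, not the actual code.

```python
import importlib.util
import inspect
from pathlib import Path

def discover_platforms(plugin_dir, base_class):
    """Sketch of user-plugin discovery: load each .py file (skipping
    _-prefixed ones), collect subclasses of base_class, and key them by
    their `name` attribute, falling back to the lowercase class name when
    `name` was left at the base default. Broken files warn, not crash."""
    registry = {}
    for path in sorted(Path(plugin_dir).glob("*.py")):
        if path.name.startswith("_"):
            continue  # private/helper files are skipped
        try:
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
        except Exception as exc:  # broken plugin: warn but keep going
            print(f"warning: skipping {path.name}: {exc}")
            continue
        for _, cls in inspect.getmembers(module, inspect.isclass):
            if issubclass(cls, base_class) and cls is not base_class:
                name = getattr(cls, "name", "base")
                if name == "base":  # not overridden: derive from class name
                    name = cls.__name__.lower()
                registry[name] = cls
    return registry
```

Because later files overwrite earlier entries keyed by the same `name`, running this over user plugins after the built-ins naturally gives "user plugin wins" semantics.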
Tests are in tests/ with 1212 tests covering config, registry, converters, CLI, platforms, scheduler, stats, threading, checker, utils, rewrite, feed, skill, MCP (62 tests), and plugin discovery.
Running tests:

```bash
pytest                          # All tests
pytest tests/test_cli.py -v     # Single file
pytest -k "test_publish" -v     # By name pattern
pytest --cov=crier              # With coverage
```

Key fixtures (in `conftest.py`):
- `sample_article` — Pre-built `Article` object
- `sample_markdown_file` — Temp markdown file with front matter
- `tmp_config` — Isolated config environment (patches `DEFAULT_CONFIG_FILE`)
- `tmp_registry` — Isolated SQLite registry (`CRIER_DB` env var + `reset_connection()` + `init_db()`)
- `mock_env_api_key` — Factory to set `CRIER_{PLATFORM}_API_KEY` env vars
- `configured_platforms` — Config with devto, bluesky, twitter (manual), profiles

Test isolation for registry: Every test that touches the registry MUST set `CRIER_DB` to a temp path, call `reset_connection()`, and call `init_db()`. The `tmp_registry` fixture handles this. Local overrides exist in `test_stats.py` and `test_threading.py`.
Platform tests mock `requests` calls rather than hitting real APIs.