Monitor Twitch streams, transcribe with Whisper on Apple Silicon, and search/clip transcripts.
No Twitch API key required — stream detection and downloading uses yt-dlp.
- Python 3.11+
- Apple Silicon Mac (M1/M2/M3+) — transcription uses mlx-whisper on the Apple GPU
- PostgreSQL with the `pg_trgm` extension
- ffmpeg + yt-dlp (`brew install ffmpeg yt-dlp`)
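Before the first run, the external tools above can be checked from Python — a minimal sketch (the `missing_tools` helper is hypothetical, not part of the CLI):

```python
import shutil

# Hypothetical sanity check: confirm the external tools listed above are on PATH.
REQUIRED_TOOLS = ("ffmpeg", "yt-dlp")

def missing_tools():
    """Return the required external tools not found on PATH."""
    return [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]

print(missing_tools())  # [] when everything is installed
```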
```
uv sync
```

Edit `config.yaml`:
```
streamers:
  - streamer_name

database:
  url: "postgresql://user:pass@localhost/dbname"

transcription:
  # Available models (downloaded automatically on first use):
  #   tiny, base, small, medium — fast, lower quality
  #   large-v3 — best quality
  #   large-v3-turbo — recommended: near large-v3 quality, much faster
  #   distil-large-v3 — English-only, fastest at high quality
  model: "large-v3-turbo"
```

```
# First-time setup
twitch-indexer db init
twitch-indexer streamers --help
```
```
# Start monitoring (TUI)
twitch-indexer start

# Start without TUI
twitch-indexer start --no-tui
```
```
# Search transcripts
twitch-indexer search "funny moment"
twitch-indexer search "clip" --fuzzy      # typo-tolerant
twitch-indexer search "hello" -s streamer # filter by streamer
```
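The `--fuzzy` flag is backed by PostgreSQL's `pg_trgm`; the matching happens in the database, but a rough Python model of trigram similarity shows why a typo like "funy" still matches "funny" (illustrative only, not the tool's code):

```python
# Rough model of pg_trgm-style trigram similarity.
def trigrams(text):
    """Pad each lowercased word and slide a 3-character window over it."""
    grams = set()
    for word in text.lower().split():
        padded = f"  {word} "
        grams.update(padded[i : i + 3] for i in range(len(padded) - 2))
    return grams

def similarity(a, b):
    """Jaccard-style overlap of trigram sets, like pg_trgm's similarity()."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

print(similarity("funny", "funy"))  # high overlap despite the typo
```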
```
# Generate a clip from a search result
twitch-indexer clip <segment_id> -o clip.mp4
```
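How the `clip` subcommand cuts the file is not documented here; below is a plausible sketch of the underlying ffmpeg invocation (the `build_clip_cmd` helper and its arguments are hypothetical, not the tool's actual implementation):

```python
import subprocess

def build_clip_cmd(source, start, duration, out):
    """Assemble an ffmpeg command that stream-copies one segment."""
    return [
        "ffmpeg",
        "-ss", f"{start:.3f}",    # seek to the segment start (seconds)
        "-i", source,             # the recorded VOD file
        "-t", f"{duration:.3f}",  # segment length (seconds)
        "-c", "copy",             # no re-encode: fast and lossless
        out,
    ]

def make_clip(source, start, duration, out):
    """Run the command; raises CalledProcessError if ffmpeg fails."""
    subprocess.run(build_clip_cmd(source, start, duration, out), check=True)
```

Stream copy (`-c copy`) cuts on keyframes, so the clip boundary may land slightly before the requested timestamp; re-encoding would be frame-accurate but far slower.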
```
# Database
twitch-indexer db init    # create tables
twitch-indexer db migrate # run migrations
twitch-indexer db status  # show counts
```

Models are from mlx-community and are downloaded automatically to `~/.cache/huggingface/` on first use.
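To see which models are already downloaded, you can list the cache directory — a sketch assuming the standard Hugging Face hub layout (`models--org--name` directories); the helper itself is hypothetical:

```python
from pathlib import Path

def cached_mlx_models(cache_dir=Path.home() / ".cache" / "huggingface"):
    """List mlx-community model snapshots already present in the HF cache."""
    hub = cache_dir / "hub"
    if not hub.is_dir():
        return []
    return sorted(p.name for p in hub.glob("models--mlx-community--*"))
```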
| Model | Size | Speed | Quality |
|---|---|---|---|
| tiny | 75 MB | fastest | lowest |
| base | 145 MB | very fast | low |
| small | 466 MB | fast | decent |
| medium | 1.5 GB | moderate | good |
| large-v3 | 3 GB | slower | best |
| large-v3-turbo | 1.6 GB | fast | near-best ✓ recommended |
| distil-large-v3 | 1.5 GB | fast | best (English only) |
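The sizes above can drive a small helper for picking the largest model that fits a download budget — sizes are approximate and the helper is illustrative, not part of the tool:

```python
# Approximate download sizes in MB, taken from the table above.
MODEL_SIZES_MB = {
    "tiny": 75,
    "base": 145,
    "small": 466,
    "medium": 1500,
    "large-v3": 3000,
    "large-v3-turbo": 1600,
    "distil-large-v3": 1500,
}

def largest_model_under(budget_mb):
    """Return the biggest model whose download fits the budget, or None."""
    fitting = [(size, name) for name, size in MODEL_SIZES_MB.items() if size <= budget_mb]
    return max(fitting)[1] if fitting else None

print(largest_model_under(2000))  # large-v3-turbo
```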
- Graceful (`q` in TUI or Ctrl+C once): stops recording, finalizes the video file, then transcribes. Check `twitch-indexer.log` for progress.
- Force quit (`Q` in TUI or Ctrl+C twice): exits immediately; the in-progress recording is lost.