Audio-driven intelligent animation generation — from dialogue to visual storytelling.
Talk2Scene is an audio-driven animation tool that parses voice dialogue files, extracts the spoken text and its timestamps, and uses AI to recommend matching character stances (STA), expressions (EXP), actions (ACT), backgrounds (BG), and well-timed CG illustration inserts. It outputs structured scene event data and composes preview videos in which the AI character performs across changing scenes.
Designed for content creators, educators, virtual streamers, and AI enthusiasts, Talk2Scene turns audio into engaging visual narratives for interview videos, AI interactive demos, educational presentations, and more.
Manually composing visual scenes for dialogue-driven content is tedious and error-prone. Talk2Scene automates the entire workflow: feed in audio or a transcript, and the pipeline produces time-synced scene events — ready for browser playback or video export — without touching a single frame by hand.
```mermaid
flowchart LR
    A[Audio] --> B[Transcription\nWhisper / OpenAI API]
    T[Text JSONL] --> C
    B --> C[Scene Generation\nLLM]
    C --> D[JSONL Events]
    D --> E[Browser Viewer]
    D --> F[Static PNG Render]
    D --> G[Video Export\nffmpeg]
```
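The pipeline's output is one scene event per JSONL line. As a minimal sketch of consuming it, the snippet below loads the events and looks up which one is active at a given playback time; the field names (`start`, `end`, `text`, `sta`, `exp`, `act`, `bg`) and the output path are illustrative assumptions, not the exact schema:

```python
import json
from pathlib import Path


def load_events(path: str) -> list[dict]:
    """Read one scene event per line from a JSONL file."""
    with Path(path).open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


def active_event(events: list[dict], t: float) -> dict | None:
    """Return the event covering timestamp t (in seconds), if any.

    The start/end/sta/exp/act/bg field names are assumptions for
    illustration; check the generated JSONL for the real schema.
    """
    for ev in events:
        if ev["start"] <= t < ev["end"]:
            return ev
    return None


if __name__ == "__main__":
    events = load_events("output/session_events.jsonl")  # hypothetical path
    ev = active_event(events, 12.5)
    if ev:
        print(ev.get("text"), ev.get("sta"), ev.get("exp"), ev.get("bg"))
```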
Scenes are composed from five asset types. Four of them are stacked bottom-up as layers:

```mermaid
flowchart LR
    BG --> STA --> ACT --> EXP
```
The fifth type, a CG illustration, replaces the entire layered scene while it is active.
Left: Basic scene (Lab + Stand Front + Neutral) · Center: Cafe scene (Cafe + Stand Front + Thinking) · Right: CG mode (Pandora's Tech)
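As a minimal sketch of this compositing, the snippet below stacks transparent RGBA PNGs onto a background with Pillow; the asset paths are placeholders, and Pillow itself is an assumption rather than necessarily the library Talk2Scene uses internally:

```python
from PIL import Image

# Bottom-up layer order: background, stance, action, expression.
LAYER_ORDER = ["bg", "sta", "act", "exp"]


def compose_scene(layers: dict[str, str], cg: str | None = None) -> Image.Image:
    """Stack transparent asset layers onto the background, or return the CG."""
    if cg is not None:
        # An active CG replaces the whole layered scene.
        return Image.open(cg).convert("RGBA")
    frame = Image.open(layers["bg"]).convert("RGBA")
    for key in LAYER_ORDER[1:]:
        overlay = Image.open(layers[key]).convert("RGBA")
        frame.alpha_composite(overlay)  # assumes all layers share the canvas size
    return frame


# Illustrative asset paths only.
scene = compose_scene({
    "bg": "assets/bg/lab.png",
    "sta": "assets/sta/stand_front.png",
    "act": "assets/act/idle.png",
    "exp": "assets/exp/neutral.png",
})
scene.save("scene_preview.png")
```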
Each scene is composed by stacking transparent asset layers on a background. Below is one sample from each category:
> [!IMPORTANT]
> Requires Python 3.11+, uv, and FFmpeg.
Install dependencies:

```bash
uv sync
```

Set your OpenAI API key:
```bash
export OPENAI_API_KEY="your-key"
```

View the available CLI options:

```bash
uv run talk2scene --help
```

Generate scenes from a pre-transcribed JSONL file:
```bash
uv run talk2scene mode=text io.input.text_file=path/to/transcript.jsonl
```

Process an audio file end-to-end (place audio in `input/`):
```bash
uv run talk2scene mode=batch
```

Render a completed session into video:
```bash
uv run talk2scene mode=video session_id=SESSION_ID
```
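The export stage in the pipeline above is driven by ffmpeg. If you ever need to assemble rendered frames and the source audio into a video yourself, a minimal sketch with `subprocess` is shown below; the frame pattern, audio path, and encoder settings are assumptions, not Talk2Scene's actual export command:

```python
import subprocess


def frames_to_video(frame_pattern: str, audio: str, out: str, fps: int = 30) -> None:
    """Mux numbered PNG frames and an audio track into an MP4 via ffmpeg.

    All paths and the frame rate are placeholders; Talk2Scene's own
    export step may use different settings.
    """
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-framerate", str(fps),
            "-i", frame_pattern,    # e.g. "render/frame_%05d.png"
            "-i", audio,            # original dialogue audio
            "-c:v", "libx264",
            "-pix_fmt", "yuv420p",  # broad player compatibility
            "-c:a", "aac",
            "-shortest",
            out,
        ],
        check=True,
    )


frames_to_video("render/frame_%05d.png", "input/dialogue.wav", "output/preview.mp4")
```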
Consume audio or pre-transcribed text from Redis in real time:

```bash
uv run talk2scene mode=stream
```

Full documentation (English & Chinese) is available at discover304.top/talk2scene.
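For `mode=stream`, a producer pushes work into Redis for Talk2Scene to consume. The redis-py sketch below illustrates the idea only; the stream name and message layout are hypothetical placeholders, so check the documentation linked above for the real keys and payload format:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379)

# Hypothetical stream name and payload; the actual keys and fields
# expected by `mode=stream` are described in the Talk2Scene docs.
segment = {"start": 0.0, "end": 2.4, "text": "Welcome to the lab."}
r.xadd("talk2scene:transcript", {"segment": json.dumps(segment)})
```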
- ✉️ Email: hobart.yang@qq.com
- 🐛 Issues: Open an issue on GitHub
Licensed under the Apache License 2.0.