This project is an advanced command-line interface (CLI) chat application that uses a multi-agent architecture to generate high-quality AI responses. Instead of relying on a single model response, it tasks multiple "Worker" agents to generate draft answers in parallel. A final "Synthesizer" agent then refines these drafts into a single, polished response.
The application features a dynamic terminal UI powered by the `rich` library, providing real-time status updates on each agent's progress.
- Multi-Agent Architecture: Leverages multiple parallel workers and a synthesizer for more robust and refined answers.
- Rich CLI: A dynamic dashboard shows the real-time status of each worker and the synthesizer, complete with spinners and progress timers.
- Session Management: Save and load your chat history to resume conversations later.
- Runtime Configuration: Adjust settings like the model, reasoning level, and logging on-the-fly without restarting the script.
- Retry Logic: Automatically retries failed API calls to handle transient network issues.
- Detailed Logging: Optionally save a full trace of each turn—including all worker drafts and the final output—to a text file for analysis.
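The retry behavior described above could be sketched like this. This is a minimal illustration, not the script's actual implementation; the retry count, backoff schedule, and the `ConnectionError` catch are assumptions:

```python
import asyncio
import random

async def call_with_retry(coro_factory, max_retries=3, base_delay=1.0):
    """Retry an async API call with exponential backoff on transient errors."""
    for attempt in range(max_retries):
        try:
            return await coro_factory()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Exponential backoff with a little jitter before the next attempt.
            await asyncio.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Demo: a flaky call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}

async def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network issue")
    return "ok"

result = asyncio.run(call_with_retry(lambda: flaky_call(), base_delay=0.01))
print(result)  # → ok
```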
The script follows a simple yet powerful "Mixture of Experts" pattern for each user prompt:
- Dispatch: The user's message and the conversation history are sent to multiple Worker agents simultaneously using `asyncio`.
- Draft: Each Worker independently processes the request and generates a draft answer.
- Synthesize: The Synthesizer agent receives the original request and all the worker drafts. Its job is to analyze the drafts, merge the best ideas, resolve any conflicts, and produce one superior, final answer.
- Display: The final answer is printed to the console, and the turn is complete.
This approach helps mitigate weaknesses or hallucinations from a single model run and often results in more accurate and comprehensive responses.
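The dispatch/draft/synthesize flow above can be sketched with `asyncio.gather`. The `worker_draft` and `synthesize` functions here are stand-ins for the actual OpenAI API calls, and the function names are illustrative, not taken from the script:

```python
from __future__ import annotations
import asyncio

async def worker_draft(worker_id: int, prompt: str) -> str:
    """Stand-in for a Worker agent's model call returning a draft answer."""
    await asyncio.sleep(0)  # placeholder for real network latency
    return f"draft {worker_id}: answer to {prompt!r}"

async def synthesize(prompt: str, drafts: list[str]) -> str:
    """Stand-in for the Synthesizer agent merging the drafts."""
    await asyncio.sleep(0)
    return f"final answer built from {len(drafts)} drafts"

async def run_turn(prompt: str, n_workers: int = 3) -> str:
    # Dispatch: fan the prompt out to all workers in parallel.
    drafts = await asyncio.gather(
        *(worker_draft(i, prompt) for i in range(n_workers))
    )
    # Synthesize: merge the drafts into one polished response.
    return await synthesize(prompt, list(drafts))

final = asyncio.run(run_turn("What is asyncio?"))
print(final)  # → final answer built from 3 drafts
```

Because the workers run concurrently rather than sequentially, a turn takes roughly as long as the slowest single draft plus the synthesis step.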
- Python 3.8+
- An OpenAI API key
- Clone or Download: Save the script (`main.py`) to your local machine.
- Install Dependencies: The script requires the `openai` and `rich` libraries. Install them using pip:

  ```bash
  pip install openai rich
  ```
- Set Environment Variable: You must set your OpenAI API key as an environment variable.
  - macOS/Linux:

    ```bash
    export OPENAI_API_KEY='your-api-key-here'
    ```

  - Windows (Command Prompt):

    ```cmd
    set OPENAI_API_KEY=your-api-key-here
    ```

  - Windows (PowerShell):

    ```powershell
    $env:OPENAI_API_KEY="your-api-key-here"
    ```
Run the script from your terminal:

```bash
python main.py
```

You will be greeted by the orchestrator's prompt. Simply type your message and press Enter.
The application supports several slash commands:
- `/settings`: Open the settings menu to change the model, reasoning level, or toggle file logging.
- `/save <name>`: Saves the current conversation history to a JSON file in the `sessions/` directory. Example: `/save my_research_chat`
- `/load <name>`: Loads a previous conversation. Example: `/load my_research_chat`
- `/list`: Lists all saved sessions.
- `/exit`: Quits the application.
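Session persistence along the lines of `/save` and `/load` can be sketched as plain JSON files under `sessions/`. This is a hypothetical sketch of the idea, not the script's code; the message format shown is the conventional role/content shape:

```python
import json
from pathlib import Path

SESSIONS_DIR = Path("sessions")  # directory name taken from the /save docs above

def save_session(name: str, history: list) -> Path:
    """Persist the conversation history as JSON, mirroring /save <name>."""
    SESSIONS_DIR.mkdir(exist_ok=True)
    path = SESSIONS_DIR / f"{name}.json"
    path.write_text(json.dumps(history, indent=2))
    return path

def load_session(name: str) -> list:
    """Read a saved conversation back, mirroring /load <name>."""
    return json.loads((SESSIONS_DIR / f"{name}.json").read_text())

history = [{"role": "user", "content": "hello"}]
save_session("my_research_chat", history)
restored = load_session("my_research_chat")
print(restored == history)  # → True
```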
You can customize the script's behavior by editing the global variables at the top of the file:
- `CURRENT_MODEL`: The default model to use (e.g., `"gpt-5"`).
- `MODEL_CHOICES`: A list of models available to choose from in the settings menu.
- `N_WORKERS`: The number of parallel workers to use for generating drafts.
- `REASONING_LEVEL`: The default reasoning effort for the models.
- `LOG_ALL_TO_FILE`: Set to `True` to enable detailed logging by default.
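Taken together, the top of the file would look something like this. The values shown (other than `"gpt-5"`, which the list above mentions) are illustrative assumptions, not necessarily the script's defaults:

```python
# Configuration globals at the top of main.py.
# All values except CURRENT_MODEL are illustrative, not the script's actual defaults.
CURRENT_MODEL = "gpt-5"               # default model
MODEL_CHOICES = ["gpt-5"]             # models offered in the settings menu (assumed list)
N_WORKERS = 3                         # parallel draft workers (assumed count)
REASONING_LEVEL = "medium"            # default reasoning effort (assumed value)
LOG_ALL_TO_FILE = False               # set True to log every turn to a text file
```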