Stack 2.9 is a high-performance AI Agent Framework built around a fine-tuned Qwen2.5-Coder-1.5B model. It is designed for autonomous software engineering, multi-agent orchestration, and complex tool-integrated workflows.
- 57 Production-Ready Tools: From deep code intelligence (Grep, Glob, FileEdit) to agent orchestration (Spawn, TeamCreate, PlanMode).
- Cognitive Enhancements: Integrated Emotional Intelligence, Knowledge Graph RAG, and Advanced NLP pipelines.
- MCP Support: Native integration with the Model Context Protocol for standardized tool and resource access.
- Massive Context: 128K token window for processing entire repositories.
- Fine-tuned for Accuracy: Optimized on Stack Overflow Q&A for high-precision code generation and debugging.
The framework is divided into three core layers:
- The Brain: A LoRA-finetuned Qwen2.5-Coder-1.5B model.
- The Toolbelt: A centralized `ToolRegistry` managing 57+ tools across 13 categories.
- The Enhancements: Modular plugins for sentiment analysis, relationship mapping, and static code auditing.
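The registry pattern behind the Toolbelt can be sketched as follows. This is a minimal illustration, not the actual Stack 2.9 `ToolRegistry` interface; the method names and the example tool are assumptions:

```python
from typing import Callable, Dict, List

class ToolRegistry:
    """Minimal sketch of a centralized tool registry (illustrative only)."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        # Each tool is stored under a unique name, e.g. "Grep" or "Spawn".
        self._tools[name] = fn

    def list(self) -> List[str]:
        # Return the names of all registered tools.
        return sorted(self._tools)

    def invoke(self, name: str, **kwargs):
        # Dispatch a call to a registered tool by name.
        return self._tools[name](**kwargs)

# Example: register a trivial Glob-like tool and invoke it.
registry = ToolRegistry()
registry.register("Glob", lambda pattern: [pattern])
print(registry.list())                          # → ['Glob']
print(registry.invoke("Glob", pattern="*.py"))  # → ['*.py']
```

Keeping registration and dispatch behind one object is what lets the framework expose dozens of tools through a single `get_registry()` entry point.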
```shell
git clone https://github.com/my-ai-stack/stack-2.9
cd stack-2.9
pip install -r requirements/requirements.txt
```

Quick start:

```python
from src.tools import get_registry
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1. Load the brain
model = AutoModelForCausalLM.from_pretrained("my-ai-stack/Stack-2-9-finetuned")
tokenizer = AutoTokenizer.from_pretrained("my-ai-stack/Stack-2-9-finetuned")

# 2. Access the tools
registry = get_registry()
print(f"Available tools: {len(registry.list())}")
```

Project structure:

- `src/tools/`: Implementation of the 57 agent tools.
- `src/enhancements/`: Cognitive modules (EI, Knowledge Graph, NLP).
- `src/mcp/`: Model Context Protocol client and server.
- `src/voice/`: TypeScript client for voice synthesis and cloning.
- `stack/voice/`: Python voice server (FastAPI) and integration tools.
- `stack/eval/`: Benchmark suites and evaluation results.
- `stack/training/`: Fine-tuning pipelines and dataset scripts.
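In an agent loop, the model's output is parsed into tool calls that are dispatched through the registry. The sketch below shows only the parsing step; the JSON call schema here is an assumption for illustration, not the framework's documented protocol:

```python
import json

# Hypothetical tool-call message a model might emit (schema is assumed).
model_output = '{"tool": "Grep", "args": {"pattern": "TODO", "path": "src/"}}'

def parse_tool_call(text: str):
    """Parse a JSON tool call into a (tool name, keyword arguments) pair."""
    call = json.loads(text)
    return call["tool"], call["args"]

name, args = parse_tool_call(model_output)
# name → "Grep", args → {"pattern": "TODO", "path": "src/"}
```

The resulting pair would then be handed to the registry (e.g. `registry.invoke(name, **args)`) and the tool's output fed back into the model's context.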
Stack 2.9 includes a voice synthesis and cloning system that allows the agent to communicate via audio.
- Navigate to the voice directory:

  ```shell
  cd stack/voice
  ```

- Install dependencies:

  ```shell
  pip install fastapi uvicorn requests
  ```

- Start the voice server:

  ```shell
  python voice_server.py
  ```

The server will start on `http://localhost:8000`.
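Once the server is running, a client talks to it over HTTP. The sketch below only constructs the request; the `/synthesize` endpoint name and the JSON fields are assumptions for illustration, not the server's documented API:

```python
import json

VOICE_SERVER_URL = "http://localhost:8000"  # default address from the setup above

def build_synthesis_request(text: str, voice: str = "default"):
    """Construct the URL and JSON body for a hypothetical synthesis call."""
    url = f"{VOICE_SERVER_URL}/synthesize"  # assumed endpoint name
    body = json.dumps({"text": text, "voice": voice})
    return url, body

url, body = build_synthesis_request("Hello from Stack 2.9")
# The actual request would be sent with e.g. requests.post(url, data=body).
```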
You can run the provided demo script to see the voice integration in action:
```shell
python samples/demo_voice.py
```

This script simulates a voice command, processes it through the StackAgent, and generates an audio response.
For detailed information, see the Model Card and API Reference.
Built by Walid Sobhi