
Commit 0ab1417

SL-Marc and claude committed
v2.0.0: Ollama-only local LLM inference — strip all cloud providers
Breaking changes:
- Remove Anthropic, OpenAI, Mistral (cloud), and DeepSeek providers
- Rewrite LLMFactory with task-based model routing (coding → qwen2.5-coder:32b, reasoning → mistral)
- Simplify ModelConfig: remove duplicate code_provider bug, add code_model/reasoning_model/ollama_timeout
- Make load_api_key()/save_api_key() no-ops (no cloud API keys needed)
- Remove openai, anthropic, mistralai from dependencies

LLM layer:
- Enhance OllamaProvider: check_health(), list_models(), configurable timeout, /v1 suffix stripping
- Rewrite LLMHandler as a sync wrapper over OllamaProvider via asyncio.run()
- Same public API preserved (generate_summary, generate_qc_code, refine_code, fix_runtime_error, chat)

Updated consumers: cli.py, coordinator_agent.py, evolver/, autonomous/pipeline.py
Updated tests: 317/325 pass (8 pre-existing failures in mcp/tools/evolver)
Updated docs: README, CHANGELOG, CONTRIBUTING, PRODUCTION_SETUP

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
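The task-based routing named in the commit message can be sketched roughly as follows; the `TASK_MODELS` table and `create()` signature are illustrative assumptions, not the repository's actual LLMFactory code:

```python
# Illustrative sketch of task-based model routing; names are assumptions,
# not QuantCoder's shipped LLMFactory.
TASK_MODELS = {
    "coding": "qwen2.5-coder:32b",  # code generation, refinement, error fixing
    "reasoning": "mistral",         # summaries, chat, general reasoning
}

def create(task: str = "reasoning") -> str:
    """Resolve a task name to the local Ollama model that serves it."""
    try:
        return TASK_MODELS[task]
    except KeyError:
        raise ValueError(f"unknown task {task!r}; expected one of {sorted(TASK_MODELS)}")
```

With this shape, `create(task="coding")` selects the code model and callers never name a provider or pass an API key.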
1 parent 7c38101 commit 0ab1417

22 files changed: 748 additions and 1,490 deletions

CHANGELOG.md

Lines changed: 26 additions & 39 deletions
```diff
@@ -7,52 +7,39 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ---
 
-## [Unreleased] - v2.0 (develop branch)
+## [2.0.0] - 2026-02-09
+
+### Breaking Changes
+- **Cloud LLM providers removed** — Anthropic, OpenAI, Mistral (cloud), and DeepSeek providers have been deleted. QuantCoder now runs exclusively on local models via Ollama.
+- **No API keys required** — `load_api_key()` and `save_api_key()` are now no-ops. The CLI no longer prompts for API keys on startup.
+- **ModelConfig simplified** — Removed `coordinator_provider`, `code_provider` (duplicate), `risk_provider`, `summary_provider`, `summary_model`, `ollama_model` fields. New fields: `code_model`, `reasoning_model`, `ollama_timeout`.
+- **LLMFactory API changed** — Now uses task-based routing: `LLMFactory.create(task="coding")` instead of `LLMFactory.create("anthropic", api_key="...")`.
+- **Dependencies removed** — `openai`, `anthropic`, `mistralai` packages no longer required.
 
 ### Added
+- **Ollama-only local inference** — All LLM calls route through Ollama
+  - `qwen2.5-coder:32b` for code generation, refinement, error fixing
+  - `mistral` for reasoning, summarization, chat
+- **Task-based model routing** — `LLMFactory.create(task=...)` automatically selects the right model
+- **OllamaProvider enhancements** — `check_health()`, `list_models()`, configurable timeout (default 600s)
+- **Backwards-compatible config loading** — Old config files with unknown fields are handled gracefully; `/v1` suffix stripped from `ollama_base_url`
+
+### Changed (from pre-release)
 - **Multi-Agent Architecture**: Specialized agents for algorithm generation
-  - `CoordinatorAgent` - Orchestrates multi-agent workflow
-  - `UniverseAgent` - Generates stock selection logic (Universe.py)
-  - `AlphaAgent` - Generates trading signals (Alpha.py)
-  - `RiskAgent` - Generates risk management (Risk.py)
-  - `StrategyAgent` - Integrates components (Main.py)
-- **Autonomous Pipeline**: Self-improving strategy generation
-  - `AutonomousPipeline` - Continuous generation loop
-  - `LearningDatabase` - SQLite storage for patterns
-  - `ErrorLearner` - Analyzes and learns from errors
-  - `PerformanceLearner` - Tracks successful patterns
-  - `PromptRefiner` - Dynamically improves prompts
-- **Library Builder**: Batch strategy generation system
-  - 13+ strategy categories (momentum, mean reversion, factor, etc.)
-  - Checkpointing for resumable builds
-  - Coverage tracking and reporting
-- **Multi-LLM Support**: Provider abstraction layer
-  - OpenAI (GPT-4, GPT-4o)
-  - Anthropic (Claude 3, 3.5)
-  - Mistral (Mistral Large, Codestral)
-  - DeepSeek
-- **Tool System**: Pluggable tool architecture (Mistral Vibe pattern)
-  - `SearchArticlesTool`, `DownloadArticleTool`
-  - `SummarizeArticleTool`, `GenerateCodeTool`
-  - `ValidateCodeTool`, `ReadFileTool`, `WriteFileTool`
-- **Rich Terminal UI**: Modern CLI experience
-  - Interactive REPL with command history
-  - Syntax highlighting for generated code
-  - Progress indicators and panels
-  - Markdown rendering
-- **Parallel Execution**: AsyncIO + ThreadPool for concurrent agent execution
-- **MCP Integration**: QuantConnect Model Context Protocol for validation
-- **Configuration System**: TOML-based configuration with dataclasses
-
-### Changed
+- **Autonomous Pipeline**: Self-improving strategy generation with learning database
+- **Library Builder**: Batch strategy generation across 13+ categories
+- **AlphaEvolve Evolution**: LLM-driven structural variation of algorithms
+- **Tool System**: Pluggable architecture (search, download, summarize, generate, validate, backtest)
+- **Rich Terminal UI**: Modern CLI with syntax highlighting, panels, progress indicators
+- **MCP Integration**: QuantConnect Model Context Protocol for validation and backtesting
+- **Configuration System**: TOML-based with dataclasses
 - Package renamed from `quantcli` to `quantcoder`
-- Complete architectural rewrite
-- CLI framework enhanced with multiple execution modes
-- Removed Tkinter GUI in favor of Rich terminal interface
 
 ### Removed
-- Tkinter GUI (replaced by Rich terminal)
+- Tkinter GUI (replaced by Rich terminal in pre-release)
 - Legacy OpenAI SDK v0.28 support
+- All cloud LLM providers (Anthropic, OpenAI, Mistral cloud, DeepSeek)
+- Optional dependency groups `[openai]`, `[anthropic]`, `[mistral]`, `[all-llm]`
 
 ---
 
```
CONTRIBUTING.md

Lines changed: 5 additions & 0 deletions
````diff
@@ -32,6 +32,7 @@ By participating in this project, you are expected to maintain a respectful and
 
 - Python 3.10 or higher
 - Git
+- [Ollama](https://ollama.ai) running locally
 - A virtual environment tool (venv, conda, etc.)
 
 ### Installation
@@ -51,6 +52,10 @@ pip install -e ".[dev]"
 # Download required spacy model
 python -m spacy download en_core_web_sm
 
+# Pull required Ollama models
+ollama pull qwen2.5-coder:32b
+ollama pull mistral
+
 # Verify installation
 quantcoder --help
 ```
````
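Before running the setup above, it can be useful to confirm the Ollama server is actually reachable. A minimal preflight against Ollama's `/api/tags` endpoint (which lists locally pulled models) might look like this; the helper names are illustrative, not part of the repository:

```python
import json
import urllib.request

def check_health(base_url: str = "http://localhost:11434", timeout: float = 5.0) -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, ...
        return False

def list_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Names of locally pulled models, e.g. to verify qwen2.5-coder:32b is present."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5.0) as resp:
        payload = json.load(resp)
    return [m["name"] for m in payload.get("models", [])]
```

A CLI could call `check_health()` at startup and print a "start Ollama first" hint instead of failing mid-generation with a connection error.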

PRODUCTION_SETUP.md

Lines changed: 10 additions & 3 deletions
````diff
@@ -147,19 +147,26 @@ quantcoder-cli/
 
 ---
 
-## Release Workflow (Future)
+## Release Workflow
+
+### v2.0.0 Release (Ollama-only, local models)
 
 ```bash
-# When v2.0 is ready:
+# Merge develop into main
 git checkout main
 git merge develop
-git tag -a v2.0 -m "v2.0: Multi-agent architecture"
+git tag -a v2.0.0 -m "v2.0.0: Ollama-only local LLM inference"
 git push origin main --tags
 
 # v1.0 and v1.1 remain accessible via tags
 git checkout v1.0  # Access old version anytime
 ```
 
+### Prerequisites for v2.0.0
+- Ollama installed and running
+- Models pulled: `ollama pull qwen2.5-coder:32b && ollama pull mistral`
+- No cloud API keys required
+
 ---
 
 ## Checklist
````
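The "No cloud API keys required" prerequisite corresponds to the no-op key helpers mentioned in the commit message. A sketch of that behavior, with signatures assumed for illustration:

```python
def load_api_key(provider: str = "ollama") -> None:
    """v2.0.0 no-op: local Ollama needs no credentials; kept so old call sites still work."""
    return None

def save_api_key(provider: str, key: str) -> None:
    """v2.0.0 no-op: accepts and discards the key instead of writing it to a config file."""
    return None
```

Keeping the functions as no-ops, rather than deleting them, means v1.x scripts that import them keep running unchanged.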
