All notable changes to SuperOptiX will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- OpenAI Agents SDK Support - Full adoption of the new OpenAI Agents SDK as a first-class framework target alongside DSPy, Claude SDK, Pydantic AI, CrewAI, Google ADK, and DeepAgents.
- New CLI command `super agent pull --framework openai-agents` to pull agents compatible with the OpenAI Agents SDK.
- New CLI command `super agent compile <agent> --framework openai-agents` to compile SuperSpecs into OpenAI Agents SDK pipelines.
- SuperSpec schema extensions for OpenAI Agents SDK configuration (`spec.openai_agents`).
- New validators for OpenAI Agents SDK-specific settings, including `tracing`, `tracingoff`, `maxTurns`, and `handoffs`.
- Jinja2 pipeline templates for OpenAI Agents SDK: `openai_pipeline_minimal.py.jinja2` and `openai_pipeline_optimized.py.jinja2`.
- OpenAI SDK integration docs now clarify the distinction between the legacy OpenAI SDK and the new OpenAI Agents SDK.
- Feature matrix updated to show OpenAI Agents SDK as a supported framework target.
- Added ADR (Architecture Decision Record) for OpenAI Harness Sandbox Adoption.
- Added OpenAI Agents SDK adoption summary guide with migration details.
- Arize Phoenix observability integration for SuperOptiX framework runtimes via the shared Phoenix helper path.
- Pullable DSPy demo agent `arize-phoenix-demo` for trace-first Phoenix demos with `super agent pull arize-phoenix-demo`.
- Demo and examples navigation now includes a dedicated Arize Phoenix example page.
- Observability docs now include a user-facing Phoenix walkthrough covering both `super agent pull arize-phoenix-demo` and adapting `super agent pull developer` with `spec.phoenix`.
- Phoenix session span setup now tolerates tracers that do not expose OpenInference-style `set_input`, avoiding runtime failure during traced DSPy runs.
- Added a detailed Arize Phoenix demo guide with setup, pull, compile, run, and troubleshooting steps.
- Expanded observability documentation to show how to configure `spec.phoenix` in a pulled agent playbook.
- DSPy Ollama model normalization now strips `ollama:` and `ollama/` prefixes before constructing LiteLLM-compatible `ollama_chat/...` model names.
- DSPy local Qwen playbooks no longer fail immediately with invalid model names when using Ollama-backed runtimes.
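The prefix stripping described above can be sketched as follows. This is an illustrative reconstruction, not the actual SuperOptiX code, and the function name is hypothetical:

```python
def normalize_ollama_model(name: str) -> str:
    """Strip 'ollama:' / 'ollama/' prefixes and build a LiteLLM-compatible
    'ollama_chat/<model>' identifier (illustrative sketch only)."""
    for prefix in ("ollama:", "ollama/"):
        if name.startswith(prefix):
            name = name[len(prefix):]
            break
    return f"ollama_chat/{name}"
```

With this shape of normalization, `ollama:qwen2.5:7b`, `ollama/qwen2.5:7b`, and a bare `qwen2.5:7b` all resolve to the same `ollama_chat/qwen2.5:7b` model name.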
- TurboAgents-backed Chroma retrieval support in the shared RAG path and GEPA vector-store layer.
- TurboAgents integration docs and demo coverage for Chroma, LanceDB, and SurrealDB.
- Source-checkout guidance for validating TurboAgents-backed RAG flows across SuperOptiX frameworks.
- SuperOptiX demo playbooks now prefer Qwen local models for the current TurboAgents validation path.
- SurrealDB seed tooling now writes TurboAgents-compatible payloads and matches runtime embedding truncation.
- TurboAgents docs now describe SuperOptiX as the first full reference integration.
- TurboAgents SurrealDB auth is preserved in the shared RAG setup path.
- DSPy runner and minimal pipeline template now pass `api_base` through to DSPy LM setup.
- Dependency metadata now excludes compromised LiteLLM releases `1.82.7` and `1.82.8`.
- SuperOptiX now blocks LiteLLM `1.82.7` and `1.82.8` in dependency resolution after the March 2026 PyPI compromise advisory.
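An exclusion of this kind is typically expressed as a PEP 440-style specifier such as `litellm>=1.80,!=1.82.7,!=1.82.8`. The sketch below illustrates the guard; the constant and function names are hypothetical, and the exact pin in SuperOptiX's dependency metadata may differ:

```python
# Compromised LiteLLM releases blocked by the advisory (illustrative).
COMPROMISED_LITELLM = {"1.82.7", "1.82.8"}

def litellm_version_allowed(version: str) -> bool:
    """Reject the compromised LiteLLM releases, allow everything else."""
    return version not in COMPROMISED_LITELLM
```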
- Packaged Google ADK A2A demo assets and pull alias support for `a2a-adk-demo`.
- Packaged Google ADK A2A server demo module for installed-package usage.
- Updated A2A docs and README to reflect DSPy, Pydantic AI, and Google ADK demo coverage.
- Prepared the follow-up package release so installed users can pull the Google ADK A2A demo without a source checkout.
- Core A2A v1 support as a native SuperOptiX protocol capability.
- `super agent serve <name> --protocol a2a` for exposing compiled agents over A2A.
- A2A client and server bridges with Agent Card generation and task-oriented interoperability.
- Packaged A2A demos for DSPy and Pydantic AI, plus pullable demo agents.
- Replaced the old Agenspy-oriented protocol path with a neutral runtime and protocol architecture.
- Updated the website and docs to present A2A as a first-class top-level SuperOptiX capability.
- Added dedicated A2A v1 introduction, guide, demo guide, and integration checklist documentation.
- Added website navigation and landing page coverage for A2A support.
- SurrealDB GraphRAG demo playbooks for all supported frameworks (DSPy, OpenAI, Claude SDK, Microsoft, PydanticAI, CrewAI, Google ADK, DeepAgents).
- Lean SurrealDB parity checks in test matrix for both RAG and GraphRAG playbooks.
- SurrealDB docs rewritten for beginner-first setup and troubleshooting.
- SurrealDB docs now use source-independent seeding commands via `python -m superoptix.agents.demo.setup_surrealdb_seed`.
- SurrealDB examples index updated to reflect full feature coverage.
- DSPy SurrealDB runtime compile/run parity in demo flow.
- SurrealDB GraphRAG feature detection compatibility (RELATE probe parsing).
- SurrealDB graph traversal query compatibility for parser variants that reject wildcard `->*` syntax.
- Added explicit SurrealDB feature coverage map with tags and runnable commands (vector, hybrid, GraphRAG, multi, temporal, server embeddings, live utility, MCP tool, capability gating).
- Added beginner-friendly runbooks for SurrealDB local and Docker workflows with expected outputs and error-to-fix steps.
- SurrealDB RAG Integration: Added native SurrealDB retriever support in runner-managed RAG flows
  - Added `surrealdb` retriever setup/query/document-ingest paths in `RAGMixin`
  - Added SurrealDB vector store adapter for GEPA RAG (`surrealdb_store.py`)
  - Added SuperSpec validation/schema support for the `surrealdb` retriever type
- Framework Demo Agents: Added new SurrealDB RAG demo playbooks for:
  - DSPy embedded mode: `rag_surrealdb_demo`
  - DSPy Docker mode: `rag_surrealdb_docker_demo`
  - PydanticAI: `rag_surrealdb_pydanticai_demo`
  - CrewAI: `rag_surrealdb_crewai_demo`
  - Google ADK: `rag_surrealdb_adk_demo`
- Framework Pipeline RAG Context Injection: Updated minimal pipeline templates to inject retrieved SurrealDB context for:
  - `pydantic_ai_pipeline_minimal.py.jinja2`
  - `crewai_pipeline_minimal.py.jinja2`
  - `google_adk_pipeline_minimal.py.jinja2`
- SurrealDB URL Handling: Improved URL normalization for SurrealDB server endpoints to reduce transport/path mismatch issues.
- Added detailed SurrealDB documentation pages:
  - Embedded demo guide
  - Docker demo guide
  - Framework guide for DSPy, PydanticAI, CrewAI, and Google ADK
- Updated docs navigation and examples index with SurrealDB sections and quick-start workflows.
- Added SurrealDB vector store tests in `tests/test_surrealdb_vector_store.py`.
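The SurrealDB URL normalization mentioned above can be sketched roughly like this. The `ws`/`wss` scheme mapping and the `/rpc` default path are assumptions for illustration, not SuperOptiX's actual rules, and the function name is hypothetical:

```python
from urllib.parse import urlparse

def normalize_surreal_url(url: str) -> str:
    """Normalize a SurrealDB endpoint to a WebSocket RPC URL.
    Illustrative sketch: scheme mapping and default path are assumed."""
    if "://" not in url:
        url = "ws://" + url  # assume a bare host:port means WebSocket
    parsed = urlparse(url)
    # Map HTTP(S) endpoints to their WebSocket equivalents.
    scheme = {"http": "ws", "https": "wss"}.get(parsed.scheme, parsed.scheme)
    # Default to the RPC path when none is given.
    path = parsed.path if parsed.path not in ("", "/") else "/rpc"
    return f"{scheme}://{parsed.netloc}{path}"
```

Under these assumptions, `localhost:8000`, `http://localhost:8000`, and `ws://localhost:8000/rpc` would all normalize to the same endpoint, which is the kind of transport/path mismatch the change targets.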
- Agent Naming Consistency: Fixed inconsistent agent IDs (hyphens vs. underscores) across all GEPA agents
  - Standardized all agent IDs to use underscores, matching the filename convention
  - Fixed: `advanced_math_gepa`, `enterprise_extractor_gepa`, `medical_assistant_gepa`, `contract_analyzer_gepa`, `privacy_delegate_gepa`, `data_science_gepa`, `security_analyzer_gepa`, `gepa_demo`
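The renaming rule is simple enough to state in one line; this helper is illustrative only and not part of the SuperOptiX API:

```python
def canonical_agent_id(agent_id: str) -> str:
    """Map hyphenated agent IDs to the underscore filename convention
    (illustrative sketch of the standardization rule)."""
    return agent_id.replace("-", "_")
```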
- Genies Tier Optimization Bug: Fixed `input_field` variable scope error in the DSPy Genies pipeline template
  - Resolved "name 'input_field' is not defined" error during optimization
  - Added proper variable scoping in the `train()` and `evaluate()` methods
  - Genies tier agents now optimize successfully with the BootstrapFewShot, SIMBA, and BetterTogether optimizers
- Comprehensive GEPA Documentation: Added detailed documentation for all 8 GEPA agents across multiple domains
  - Mathematics: `advanced_math_gepa`, `data_science_gepa`
  - Healthcare: `medical_assistant_gepa`
  - Legal: `contract_analyzer_gepa`
  - Enterprise: `enterprise_extractor_gepa`
  - Security: `security_analyzer_gepa`, `privacy_delegate_gepa`
  - Demo: `gepa_demo`
- DSPy Optimizers Quick Start Guide: Added comprehensive quick-start commands for all 8 DSPy optimizers
  - Complete workflows (pull, compile, optimize, test) for each optimizer
  - Domain-specific examples and use cases
  - Performance comparisons and best practices
- GEPA Limitations Documentation: Added critical guidance about GEPA compatibility
  - Clear warning that GEPA doesn't work with tool-calling agents (Genies tier and above)
  - Detailed explanation of why (complex output formats, tool-call parsing issues)
  - Alternative optimizer recommendations for each tier
  - Agent tier compatibility table
- Ready-to-Run Commands: All documentation now includes copy-paste commands with proper timeouts
- Agent Discovery: Complete tables of all available agents organized by domain and optimizer
- Practical Examples: Real-world goals and use cases for every agent type
- Apple Silicon GPT-OSS Support: MLX-LM v0.26.3 now provides native Apple Silicon support for GPT-OSS models
  - No More Mixed Precision Issues: MLX-LM handles MXFP4 quantization properly on Apple Silicon
  - Native Performance: GPT-OSS models now run natively without CPU fallback
  - Multiple Backend Options: Users can choose between MLX (native) and Ollama (performance)
- GPT-OSS Model Support: Added support for OpenAI's latest open-source models (GPT-OSS-20B and GPT-OSS-120B)
  - Apache 2.0 license for commercial use
  - Native MXFP4 quantization for efficient inference
  - Resources: GPT-OSS-120B, GPT-OSS-20B, Ollama Library
- MLX Model Evaluation: Added `super model mlx evaluate` command for benchmarking MLX models using LM-Eval integration
- MLX Model Fusion: Added `super model mlx fuse` command for fusing finetuned adapters into base models
- Backend-Specific Commands: Reorganized model commands into `super model mlx`, `super model vllm`, and `super model sglang` subcommands
- Advanced MLX Features: Support for evaluation tasks (mmlu, arc, hellaswag, etc.), fusion with dequantization, GGUF export, and HuggingFace upload
- vLLM High-Performance Inference: Added `super model vllm serve`, `super model vllm generate`, `super model vllm benchmark`, and `super model vllm quantize` commands for production-grade inference
- vLLM Advanced Features: Support for multi-GPU serving, streaming generation, performance benchmarking, and model quantization (AWQ, GPTQ, SqueezeLLM)
- vLLM Optional Dependency: Added vLLM as an optional dependency via `pip install superoptix[vllm]` for Linux systems with NVIDIA GPUs
- SGLang Streaming & Optimization: Added `super model sglang serve`, `super model sglang generate`, `super model sglang optimize`, and `super model sglang benchmark` commands for streaming and optimization
- SGLang Advanced Features: Support for streaming generation, performance optimization (O0-O3), advanced batching, and real-time inference
- SGLang Optional Dependency: Added SGLang as an optional dependency via `pip install superoptix[sglang]` for Linux systems with NVIDIA GPUs
- MLX Experimental Features: Added experimental `super model convert` and `super model quantize` commands for MLX model conversion and quantization (see `MLX_EXPERIMENTAL_FEATURES.md`)
- Auto-installation: Enhanced `super model run` with automatic model installation and backend detection
- MLX Dependencies: Updated to MLX-LM v0.26.3 for native GPT-OSS support on Apple Silicon
- Model Management: Enhanced MLX backend with better error handling and format validation
- CLI Improvements: Simplified UX by removing `--force` flags for cleaner commands
- Apple Silicon Compatibility: Resolved mixed precision issues that prevented GPT-OSS models from running on Apple Silicon
- HuggingFace Backend Limitations: Documented that HuggingFace backend still has mixed precision issues on Apple Silicon
- Apple Silicon Guide: Updated documentation to reflect GPT-OSS support via MLX-LM and Ollama backends
- Performance Comparison: Added performance metrics comparing MLX-LM vs Ollama vs HuggingFace backends
- Simplified Model Installation: Completely redesigned model installation system for MLX and HuggingFace backends
- Detailed Progress Display: Added file-by-file download progress for large models with safetensors/bin files
- Improved Model Detection: Fixed model detection logic to properly identify installed models vs metadata-only downloads
- MLX Backend: Simplified installation using direct `snapshot_download` with single-threaded progress display
- HuggingFace Backend: Streamlined installation with detailed file-by-file progress for large models
- CLI Integration: Enhanced `super model install` command with proper model detection and progress display
- Model Detection Logic: Fixed detection to require actual model files (`.safetensors`, `.bin`), not just config files
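The detection rule can be sketched as follows; the helper name is hypothetical, and the real implementation may check additional formats or locations:

```python
from pathlib import Path

def model_is_installed(model_dir: str) -> bool:
    """Treat a model as installed only when real weight files
    (.safetensors or .bin) are present, not just config/metadata.
    Illustrative sketch of the detection rule."""
    root = Path(model_dir)
    weights = list(root.rglob("*.safetensors")) + list(root.rglob("*.bin"))
    return len(weights) > 0
```

A directory containing only `config.json` and tokenizer metadata would now report "not installed", which is exactly the false positive the fix below addresses.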
- Model Installation Issues: Resolved problems with large model downloads getting stuck at "Fetching files: 0%"
- False Positive Detection: Fixed issue where models with only metadata were incorrectly shown as "installed"
- Progress Display: Fixed missing detailed progress for individual model file downloads
- Installation Guide: Updated `SIMPLE_MODEL_INSTALLATION.md` with the new simplified approach
- Testing Scripts: Added `test_simple_install.py` for validating model installation functionality
- Installation Approach: Removed complex validation and progress tracking in favor of simple, reliable downloads
- Progress Display: Switched from custom progress bars to standard HuggingFace Hub progress display
- Error Handling: Simplified error handling with clear, actionable error messages
- Single-Threaded Downloads: Uses `max_workers=1` to force detailed progress display for large models
- Direct Download: Uses `snapshot_download` without complex parameters for reliability
- Proper Detection: Checks for actual model files in the snapshots directory, not just metadata
This is the first release of SuperOptiX - "The Kubernetes of Agentic AI"!
- DSPy-Native Architecture: Built on DSPy 3.0 for self-improving agent programs
- Agent Playbook System: Declarative agent configuration with YAML
- Multi-Agent Orchestration: Sophisticated agent coordination and workflow management
- Memory Systems: Long-term, short-term, and episodic memory backends
- Evaluation Framework: Built-in testing and quality metrics for agents
- `super init`: Initialize new agentic projects with full scaffolding
- `super agent create`: Generate agent templates and configurations
- `super compile`: Compile agents with DSPy optimization
- `super orchestra`: Multi-agent orchestration and deployment
- `super run`: Execute individual agents and agent workflows
- Business & Consulting: Strategy consultants, business analysts, change managers
- Software Development: Developers, QA engineers, DevOps, architects
- Healthcare: Medical assistants, health educators, mental health coaches
- Education: Tutors, instructors, study coaches across multiple subjects
- Finance: Financial advisors, budget analysts, investment researchers
- Marketing: Content creators, SEO specialists, campaign strategists
- Legal: Contract analyzers, compliance checkers, legal researchers
- And many more! (20+ industry categories)
- Redis Backend: Scalable memory storage for production deployments
- Vector Memory: Semantic memory search and retrieval
- Context Management: Intelligent context window optimization
- Memory Persistence: Long-term agent memory across sessions
- Real-time Monitoring: Agent performance and behavior tracking
- Token Usage Analytics: Cost and performance optimization
- Debug Dashboard: Visual debugging tools for agent development
- Comprehensive Logging: Detailed execution traces and metrics
- Agent BDD: Behavior-driven development for agents
- Automated Evaluation: Quality metrics and regression testing
- Performance Benchmarks: Agent performance measurement tools
- Safety Checks: Built-in guardrails and safety validation
- DSPy 3.0: Full integration with latest DSPy features
- MLFlow: Experiment tracking and model management
- FastAPI: Production-ready API deployment
- Streamlit: Rapid UI development for agent interfaces
- Comprehensive Guides: Step-by-step tutorials and documentation
- Code Examples: Real-world agent implementations
- Best Practices: Industry-standard development patterns
- API Reference: Complete API documentation
- Evaluation-First Development: Every agent is testable and measurable
- Auto-Optimization: DSPy-powered prompt and pipeline optimization
- Orchestration: Kubernetes-style multi-agent coordination
- Production-Ready: Enterprise-grade reliability and monitoring
- Modular Design: Swap components, models, and tools at runtime
- Rich Analytics: Comprehensive performance and quality metrics
- 200+ Agent Templates: Pre-built agents for every industry
- DSPy 3.0 Integration: Leverage the latest in self-improving programs
- Enterprise Security: Built-in security best practices and compliance
- Cloud-Native: Designed for modern cloud deployments
- Developer Experience: Intuitive CLI and comprehensive tooling
```bash
pip install superoptix

# Create your first agentic system
super init my_agent_system
cd my_agent_system

# Create and run an agent
super agent create customer_service --template=support
super run customer_service "How can I help you today?"
```

- GitHub: https://github.com/SuperagenticAI/superoptix
- Documentation: https://github.com/SuperagenticAI/superoptix/docs
- Discussions: https://github.com/SuperagenticAI/superoptix/discussions
For each release, we document:

- Added: New features and capabilities
- Changed: Changes to existing functionality
- Deprecated: Features that will be removed in future versions
- Fixed: Bug fixes and issue resolutions
- Security: Security-related improvements and fixes
- Performance: Performance improvements and optimizations
- Advanced Reasoning: Multi-step reasoning capabilities
- Tool Integration: Enhanced tool calling and API integration
- Visual Agents: Image and video processing capabilities
- Agent Marketplace: Community-driven agent sharing platform
- Kubernetes Deployment: Native K8s orchestration
- Enterprise SSO: Advanced authentication and authorization
- Audit Logging: Comprehensive audit trails
- SLA Monitoring: Service level agreement tracking
- Self-Improving Agents: Agents that evolve based on usage
- Federated Learning: Cross-agent knowledge sharing
- Custom Models: Support for fine-tuned and custom models
- Agent Analytics: Advanced analytics and insights
Stay Updated: Watch our repository and join our community to stay informed about the latest releases and features!
Contribute: Help us build the future of agentic AI by contributing to SuperOptiX!