feat: add MiniMax as LLM provider (M2.7 default)#1275

Open
octo-patch wants to merge 2 commits into agent0ai:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 17, 2026

Summary

Add MiniMax as a first-class LLM provider for Agent Zero, with M2.7 as the recommended model.

Changes

  • Add MiniMax to conf/model_providers.yaml as an OpenAI-compatible chat provider
  • Add temperature clamping logic in models.py for MiniMax API constraints
  • Add 26 unit tests covering YAML config, ProviderManager, temperature clamping, and API key detection
  • Add 8 integration tests verifying MiniMax-M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models

Why

MiniMax provides an OpenAI-compatible API with competitive models. M2.7 is the latest flagship model with enhanced reasoning and coding capabilities.

Configuration

Users select "MiniMax" as their chat provider and enter their model name (e.g., MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed). The API key is auto-detected from the MINIMAX_API_KEY environment variable.

Testing

  • 26 unit tests passing
  • 8 integration tests passing (with MINIMAX_API_KEY)
  • All MiniMax models verified: M2.7, M2.7-highspeed, M2.5, M2.5-highspeed

PR Bot added 2 commits March 17, 2026 17:53
- Add MiniMax to chat providers in model_providers.yaml (OpenAI-compatible,
  api_base: https://api.minimax.io/v1)
- Add temperature clamping in _adjust_call_args for MiniMax models
  (MiniMax requires temperature in (0.0, 1.0])
- Add 24 unit tests covering YAML config, ProviderManager, temperature
  clamping, and API key detection
- Add 6 integration tests verifying chat completion, streaming, system
  messages, and both M2.5 models
- Update integration tests to use MiniMax-M2.7 as primary test model
- Add MiniMax-M2.7-highspeed integration test
- Keep M2.5 and M2.5-highspeed tests for backward compatibility
- Update temperature clamping tests to cover M2.7 model names
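The temperature clamping mentioned in the commits could look something like the sketch below. The PR states only that MiniMax requires temperature in (0.0, 1.0] and that the clamping lives in _adjust_call_args in models.py; the standalone helper and the floor value here are assumptions for illustration.

```python
def clamp_minimax_temperature(temperature: float) -> float:
    """Clamp a temperature into MiniMax's accepted range (0.0, 1.0].

    MiniMax rejects temperatures at or below 0 and above 1.0, so
    non-positive values are nudged up to a small positive floor and
    larger values are capped at 1.0. (Illustrative helper; the exact
    floor used by the PR is not stated.)
    """
    min_temp = 0.01  # assumed floor, since 0.0 itself is excluded
    if temperature <= 0.0:
        return min_temp
    return min(temperature, 1.0)
```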
@octo-patch octo-patch changed the title feat: add MiniMax as LLM provider feat: add MiniMax as LLM provider (M2.7 default) Mar 18, 2026