Orion MCP is a Model Context Protocol (MCP) server for performance regression analysis powered by the cloud-bulldozer/orion library.
- Regression Detection – Automatically detects performance regressions in OpenShift & Kubernetes clusters.
- Interactive MCP API – Exposes a set of composable tools & resources that can be consumed via HTTP or by other MCP agents.
- Visual Reporting – Generates publication-ready plots (PNG/JPEG) for trends, multi-version comparisons and metric correlations.
- Container-first – Ships with a lightweight OCI image and an example OpenShift deployment manifest.
- Python 3.11 or newer
- An OpenSearch (or Elasticsearch ≥7.17) endpoint with Orion-indexed benchmark results
- Podman or Docker (optional – for containerised execution)
```bash
# Clone repository
$ git clone https://github.com/YOUR_ORG/orion-mcp.git && cd orion-mcp

# Create & activate a virtual environment
$ python3.11 -m venv .venv
$ source .venv/bin/activate

# Install Python dependencies
$ pip install -r requirements.txt
```

Set the data-source endpoint and launch the server locally:
```bash
export ES_SERVER="https://opensearch.example.com:9200"
python orion_mcp.py  # listens on 0.0.0.0:3030 by default
```

| Tool | Description | Default Arguments |
|---|---|---|
| `get_data_source` | Returns the configured OpenSearch URL | none |
| `get_orion_configs` | Lists available Orion configuration files | none |
| `get_orion_metrics` | Lists metrics grouped by Orion config | `config_name="small-scale-udn-l3.yaml"`, `version="4.20"` |
| `get_orion_metrics_with_meta` | Lists metrics plus metadata for an Orion config | `config_name="small-scale-udn-l3.yaml"`, `version="4.19"` |
| `get_orion_performance_data` | Returns raw performance values for a config/metric/version | `config_name="small-scale-udn-l3.yaml"`, `metric="podReadyLatency_P99"`, `version="4.19"`, `lookback="15"` |
| `openshift_report_on` | Generates a trend line for one or more OCP versions | `versions="4.19"`, `lookback="15"`, `metric="podReadyLatency_P99"`, `config_name="small-scale-udn-l3.yaml"` |
| `openshift_report_on_pr` | **NEW** Analyzes the performance impact of a specific Pull Request | `version="4.20"`, `lookback="15"`, `organization="openshift"`, `repository="ovn-kubernetes"`, `pull_request="2841"` |
| `has_openshift_regressed` | Scans all configs for changepoints | `version="4.19"`, `lookback="15"` |
| `metrics_correlation` | Correlates two metrics & returns a scatter plot | `metric1="podReadyLatency_P99"`, `metric2="ovnCPU_avg"`, `config_name="trt-external-payload-cluster-density.yaml"`, `version="4.19"`, `lookback="15"` |
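Any MCP-capable client can invoke these tools. As a rough illustration of the wire format only (the MCP protocol uses JSON-RPC 2.0 with a `tools/call` method; a real client must also perform the MCP initialize handshake, which is omitted here), a request for `openshift_report_on` might be built like this:

```python
import json


def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP `tools/call` JSON-RPC 2.0 request body.

    Illustrative sketch only: it shows the message shape, not a full
    client session (no initialize handshake, no transport handling).
    """
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)


# Example: request a trend report for OCP 4.19
body = build_tool_call(
    "openshift_report_on",
    {
        "versions": "4.19",
        "lookback": "15",
        "metric": "podReadyLatency_P99",
        "config_name": "small-scale-udn-l3.yaml",
    },
)
print(body)
```

In practice you would hand this off to an MCP client library rather than crafting the payload by hand; the sketch is just to show which tool name and argument keys travel in the request.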
The openshift_report_on_pr tool provides automated performance regression detection for GitHub Pull Requests. This feature compares the performance metrics of a specific PR against the periodic baseline performance to identify potential regressions.
- Baseline Collection: Gathers periodic performance data for the specified OpenShift version over the lookback period
- PR Analysis: Runs performance tests specifically for the target Pull Request
- Comparison: Compares PR performance against the periodic baseline using a 10% threshold
- Multi-Config Testing: Tests across multiple Orion configurations for comprehensive coverage
The PR analysis runs against these key performance test configurations:
- `trt-external-payload-cluster-density.yaml` – Cluster density and pod scaling tests
- `trt-external-payload-node-density.yaml` – Node-level performance and resource utilization
- `trt-external-payload-node-density-cni.yaml` – CNI-specific networking performance
- `trt-external-payload-crd-scale.yaml` – Custom Resource Definition scaling tests
- periodic_avg: Baseline performance metrics averaged over the lookback period
- pull: Performance metrics from the specific PR's test runs
- Regression Detection: Compare values using the 10% threshold:
  - `(pull_value - periodic_avg) / periodic_avg > 0.10` indicates a potential regression
  - Values within ±10% are considered normal variance
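The comparison rule above can be sketched in a few lines of Python (a hypothetical helper for illustration, not part of the server's API):

```python
REGRESSION_THRESHOLD = 0.10  # 10%


def relative_change(periodic_avg: float, pull: float) -> float:
    """Relative change of the PR value against the periodic baseline."""
    return (pull - periodic_avg) / periodic_avg


def is_regression(periodic_avg: float, pull: float,
                  threshold: float = REGRESSION_THRESHOLD) -> bool:
    """True when the PR value exceeds the baseline by more than the threshold.

    For latency-style metrics higher is worse, so only positive deviations
    beyond +10% count as regressions; values within ±10% are normal variance.
    """
    return relative_change(periodic_avg, pull) > threshold


# podReadyLatency_P99: baseline 1200 ms vs PR 1380 ms -> +15% -> regression
print(is_regression(1200.0, 1380.0))  # True
print(is_regression(1200.0, 1250.0))  # ~+4.2% -> False
```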
The response format is optimized for AI analysis. The LLM can:
- Automatically detect regressions by comparing periodic_avg vs pull metrics
- Apply the 10% threshold to determine significance
- Generate human-readable reports highlighting concerning changes
- Provide actionable insights about which metrics regressed and by how much
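The same 10% rule can also be applied mechanically before (or instead of) handing the response to an LLM. The response structure assumed below is illustrative only; check the actual `openshift_report_on_pr` output for the real field names:

```python
def summarize_regressions(metrics: dict, threshold: float = 0.10) -> list[str]:
    """Produce human-readable lines for metrics that regressed by > threshold.

    Assumes `metrics` maps metric name -> {"periodic_avg": x, "pull": y};
    the real tool payload may be shaped differently.
    """
    report = []
    for name, values in sorted(metrics.items()):
        baseline, pr_value = values["periodic_avg"], values["pull"]
        change = (pr_value - baseline) / baseline
        if change > threshold:
            report.append(f"{name}: {baseline:.1f} -> {pr_value:.1f} (+{change:.1%})")
    return report


# Hypothetical sample data: one regressed metric, one within normal variance
sample = {
    "podReadyLatency_P99": {"periodic_avg": 1200.0, "pull": 1500.0},
    "ovnCPU_avg": {"periodic_avg": 2.0, "pull": 2.05},
}
for line in summarize_regressions(sample):
    print(line)
```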
For comprehensive documentation:
- 📚 Complete Documentation - Full documentation index
- 🚀 Quick Start Guide - Get started in minutes
- 🎯 Features Guide - Complete features documentation including PR analysis
- 🔧 API Reference - Complete API documentation
```text
Analyze this PR performance data and identify any regressions using a 10% threshold:

[paste the JSON response]

For each metric that shows >10% degradation, explain:
1. The metric name and what it measures
2. The baseline vs PR values
3. The percentage change
4. Potential impact on users
```
```bash
podman build -t quay.io/YOUR_ORG/orion-mcp:latest .
```

To deploy to an OpenShift cluster, specify the `ES_SERVER` in `kustomize/base/.env`, e.g.:

```bash
ES_SERVER=https://USER:PASSWORD@SERVER:443
```

To deploy the application:
```bash
# Expose your quay credentials to fetch the container image
export QUAY_CRED='<base64 encoded pull secret>'

# Build and apply the manifests
kustomize build --load-restrictor=LoadRestrictionsNone ./kustomize/base | envsubst | oc apply -f -
```

To verify any changes to manifests, you can render them locally, e.g.:

```bash
kustomize build ./kustomize/base | envsubst > manifests.yaml
```

To access the service externally, expose it using an OpenShift Route and point your MCP client to `http://<host>:3030`.
```bash
# Run linters & tests
flake8
pytest

# Auto-format with black & isort
black . && isort .
```

Pull requests are very welcome! Please ensure you have read and adhere to the Code of Conduct.
- Fork the repository
- Create a new branch for your feature or bugfix
- Make your changes and add tests if applicable
- Submit a pull request with a clear description of your changes
Orion-MCP is distributed under the Apache 2.0 License. See the LICENSE file for full text.