The kdevops AI workflow provides infrastructure for benchmarking and testing AI/ML systems, with initial support for vector databases.
Like other kdevops workflows (fstests, blktests), the AI workflow follows the same pattern:
```
make defconfig-ai-milvus-docker  # Configure for AI vector database testing
make bringup                     # Bring up the test environment
make ai                          # Run the AI benchmarks
make ai-baseline                 # Establish baseline results
make ai-results                  # View results
```

Currently supported:

- Milvus - High-performance vector database for AI applications

Planned support includes:

- Language Models (LLMs)
- Embedding Services
- Training Infrastructure
- Inference Servers
The AI workflow can be configured through `make menuconfig`:

- Vector Database Selection
  - Milvus (Docker or Native deployment)
  - Future: Weaviate, Qdrant, Pinecone
- Dataset Configuration
  - Dataset size (number of vectors)
  - Vector dimensions
  - Batch sizes
- Benchmark Parameters
  - Query patterns
  - Concurrency levels
  - Runtime duration
- Filesystem Testing
  - Test on different filesystems (XFS, ext4, btrfs)
  - Compare performance across storage configurations
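The selections above end up as symbols in the generated `.config`, which can also be edited directly. A hypothetical fragment is shown below; the symbol names are illustrative assumptions modeled on the environment variables used later in this document, not verified against the actual kdevops Kconfig files:

```
# Illustrative .config fragment -- symbol names are assumptions
CONFIG_AI_VECTOR_DATASET_SIZE=100000
CONFIG_AI_VECTOR_DIMENSIONS=768
CONFIG_AI_BENCHMARK_RUNTIME=300
```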
Quick configurations for common use cases:
- `defconfig-ai-milvus-docker` - Docker-based Milvus deployment
- `defconfig-ai-milvus-docker-ci` - CI-optimized with minimal dataset
- `defconfig-ai-milvus-native` - Native Milvus installation from source
- `defconfig-ai-milvus-multifs` - Multi-filesystem performance comparison
Like other kdevops workflows, AI supports baseline/dev comparisons:
```
# Configure with A/B testing
make menuconfig      # Enable CONFIG_KDEVOPS_BASELINE_AND_DEV
make ai-baseline     # Run on baseline
make ai-dev          # Run on dev
make ai-results      # Compare results
```

The AI workflow generates comprehensive performance metrics:
- Throughput (operations/second)
- Latency percentiles (p50, p95, p99)
- Resource utilization
- Performance graphs and trends
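As a rough illustration of what a latency percentile means here, the sketch below derives a p95 from raw per-query latencies using a simple rank-based estimate. The sample values are invented for illustration and do not come from the workflow's actual output:

```shell
# Sketch: rank-based p95 over raw per-query latencies (ms).
# Sample values are made up for illustration only.
latencies="12 15 11 90 14 13 16 12 11 300"
p95=$(printf '%s\n' $latencies | sort -n |
      awk '{a[NR]=$1} END {i=int(0.95*NR); if (i<1) i=1; print a[i]}')
echo "p95=${p95}ms"
```

Note how a single outlier (300 ms) inflates p99 far more than p95, which is why the workflow reports several percentiles rather than a single average.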
Results are stored in the configured results directory (default: `/data/ai-results/`).
View actual benchmark results from our testing:
- Milvus Performance Demo - Real-world performance across different filesystems
The workflow includes CI-optimized configurations that use:
- Minimal datasets for quick validation
- `/dev/null` storage for I/O testing without disk requirements
- Environment variable overrides for runtime configuration
Example CI usage:
```
AI_VECTOR_DATASET_SIZE=1000 AI_BENCHMARK_RUNTIME=30 make defconfig-ai-milvus-docker-ci
make bringup
make ai
```

The AI workflow follows kdevops patterns:
- Configuration - Kconfig-based configuration system
- Provisioning - Ansible-based infrastructure setup
- Execution - Standardized test execution
- Collection - Automated result collection and analysis
- Reporting - Performance visualization and comparison
For detailed usage of specific components, see: