This guide outlines different validation strategies for testing S3 implementations, from quick smoke tests to comprehensive production validation.
Purpose: Rapid verification that the S3 endpoint is functional.

```bash
# Run only the basic tests
make test TEST="001 002 003"
```

What it validates:
- ✅ Bucket creation/deletion
- ✅ Object upload/download
- ✅ Basic CRUD operations
When to use:
- After configuration changes
- Quick sanity check
- CI/CD smoke tests
- Pre-deployment verification
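For CI/CD use, the exit code of the smoke run is enough to gate the pipeline. A minimal sketch, assuming the `make test` invocation above; `run_smoke` is a hypothetical stand-in so the snippet is self-contained:

```bash
#!/bin/sh
# CI gate sketch: block the pipeline as soon as the smoke tests fail.
# run_smoke is a hypothetical stand-in for: make test TEST="001 002 003"
run_smoke() { true; }

if run_smoke; then
    echo "smoke: PASS"
else
    echo "smoke: FAIL - blocking deploy" >&2
    exit 1
fi
```

In a real pipeline the non-zero exit status is what stops the job; the echo lines are only for the build log.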
Purpose: Validate critical production features.

```bash
# Run critical tests only
python scripts/production-validation.py --config s3_config.yaml --quick
```

What it validates:
- ✅ Data integrity (MD5/ETag)
- ✅ Error handling & retry logic
- ✅ Network timeout recovery
- ✅ Concurrent upload integrity
When to use:
- Pre-production deployment
- After infrastructure changes
- Daily automated testing
- Release candidate validation
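The data-integrity comparison at the heart of this tier can be sketched as follows. For single-part uploads, S3's ETag equals the object's MD5 hex digest; the `REMOTE_ETAG` assignment below is a stand-in for a real `head-object` call, so the snippet runs without an endpoint:

```bash
# Sketch of an MD5-vs-ETag integrity check.
# For single-part uploads the S3 ETag equals the object's MD5 hex digest
# (multipart ETags are composite and need different handling).
printf 'hello s3' > obj.bin
LOCAL_MD5=$(md5sum obj.bin | cut -d' ' -f1)

# Stand-in for e.g.: aws s3api head-object --bucket B --key K --query ETag
REMOTE_ETAG="$LOCAL_MD5"

if [ "$LOCAL_MD5" = "$REMOTE_ETAG" ]; then
    echo "integrity: OK"
else
    echo "integrity: MISMATCH" >&2
fi
```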
Purpose: Test specific S3 features in depth.

```bash
# Test specific feature groups
make test GROUP=multipart    # Multipart uploads
make test GROUP=versioning   # Object versioning
make test GROUP=performance  # Performance benchmarks
```

Feature-specific strategies:
```bash
make test TEST="100 101 102"
```

- Tests files >5MB
- Parallel part uploads
- Abort/resume capabilities
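The part count for a multipart test follows directly from the part size (S3 requires every part except the last to be at least 5 MiB). A quick sketch of the arithmetic, with illustrative sizes:

```bash
# Part-count arithmetic for a hypothetical multipart test object.
FILE_SIZE=$((64 * 1024 * 1024))  # 64 MiB test object
PART_SIZE=$((8 * 1024 * 1024))   # 8 MiB parts; all but the last must be >= 5 MiB
PARTS=$(( (FILE_SIZE + PART_SIZE - 1) / PART_SIZE ))  # ceiling division
echo "parts=$PARTS"
```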
```bash
make test TEST=200
```

- Enable/disable versioning
- Version retrieval
- Version deletion
```bash
make test TEST="600 601"
```

- Sequential I/O benchmarks
- Concurrent operations
- Throughput measurement
- Latency percentiles
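Latency percentiles can be reduced from raw per-request timings with standard tools. A nearest-rank sketch over a hypothetical `latencies.txt` (created inline so the snippet is self-contained; a real run would use the timings emitted by the performance tests):

```bash
# Nearest-rank percentile sketch over per-request latencies in milliseconds.
printf '12\n15\n11\n90\n14\n13\n16\n12\n11\n15\n' > latencies.txt

PCT=$(sort -n latencies.txt | awk '
    { v[NR] = $1 }
    END {
        i50 = int(NR * 0.50); if (i50 < 1) i50 = 1
        i99 = int(NR * 0.99); if (i99 < 1) i99 = 1
        printf "p50=%sms p99=%sms", v[i50], v[i99]
    }')
echo "$PCT"
```

Note that with only ten samples the p99 is barely meaningful; real benchmark runs need enough requests for the tail to be populated.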
Purpose: Comprehensive production readiness assessment.

```bash
# Full validation suite
python scripts/production-validation.py --config s3_config.yaml
```

What it validates:
- ✅ All critical tests (100% pass required)
- ✅ All feature tests (80-90% pass required)
- ✅ Performance benchmarks
- ✅ Stress testing capabilities
- ✅ API compliance
When to use:
- Production go-live decision
- Vendor evaluation
- Quarterly validation
- Major version upgrades
When migrating between S3 providers:
```bash
# 1. Baseline the source system
cp .config .config.source
python scripts/production-validation.py --config s3_config.yaml \
    --output-dir results/source-baseline

# 2. Test the target system
cp .config.target .config
python scripts/production-validation.py --config s3_config.yaml \
    --output-dir results/target-validation

# 3. Compare results
diff results/source-baseline/validation-report.json \
     results/target-validation/validation-report.json
```

During development iterations:
```bash
# Fast feedback loop (run until interrupted)
while true; do
    make test TEST=004  # Test a specific feature
    sleep 2
done

# Regression check before commit
make test GROUP=basic
```

Establish performance baselines:
```bash
# Run the performance tests multiple times
for i in {1..5}; do
    python scripts/test-runner.py --config s3_config.yaml \
        --test 600 601 \
        --output-dir perf-run-$i
done

# Aggregate results for a baseline
python scripts/aggregate-results.py perf-run-*
```

Validate under load:
```bash
# Gradual load increase
for concurrent in 10 20 50 100; do
    echo "Testing with $concurrent concurrent operations"
    CONCURRENT_OPS=$concurrent make test TEST=601
done
```

| Scenario | Strategy | Time | Coverage |
|---|---|---|---|
| PR Validation | Quick Smoke | 2-5 min | Basic |
| Daily Build | Critical Path | 5-10 min | Critical |
| Release Testing | Feature Validation | 15-30 min | Features |
| Production Deploy | Full Validation | 30-60 min | Complete |
| Vendor Evaluation | Full + Custom | 2-4 hours | Extensive |
| Disaster Recovery | Critical Path | 5-10 min | Essential |
```yaml
# custom-suite.yaml
name: "E-commerce Platform Tests"
description: "Tests for e-commerce S3 usage patterns"
tests:
  critical:
    - 004  # Data integrity
    - 006  # Concurrent uploads
  features:
    - 100  # Large product images
    - 200  # Version control for catalogs
  performance:
    - 600  # CDN cache patterns
```

Run the custom suite:
```bash
python scripts/run-suite.py --suite custom-suite.yaml
```

```bash
# Focus on encryption and audit
make test TEST="400 401 402"  # Encryption tests
make test TEST="900 901"      # Audit logging
```

```bash
# Focus on data integrity and versioning
make test TEST="004 005 006"  # Data integrity
make test TEST="200 201"      # Versioning
make test TEST="900"          # Compliance
```

```bash
# Focus on performance and multipart
make test TEST="100 101 102"  # Multipart uploads
make test TEST="600 601"      # Performance
make test TEST="700"          # Stress testing
```

```bash
# Generate an executive report
python scripts/production-validation.py --config s3_config.yaml \
    --report-format executive \
    --output-dir reports/executive
```

```bash
# Detailed technical analysis
python scripts/production-validation.py --config s3_config.yaml \
    --verbose \
    --include-logs \
    --output-dir reports/technical
```

```bash
# Compliance-focused validation
python scripts/production-validation.py --config s3_config.yaml \
    --compliance-mode \
    --output-dir reports/compliance
```

```yaml
# .github/workflows/daily-validation.yml
name: Daily S3 Validation
on:
  schedule:
    - cron: '0 2 * * *'  # 2 AM daily
jobs:
  validate:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        provider: [minio, localstack, aws-s3]
    steps:
      - uses: actions/checkout@v2
      - name: Run validation
        run: |
          make defconfig-${{ matrix.provider }}
          python scripts/production-validation.py \
            --config s3_config.yaml \
            --quick
```

Start with quick tests and escalate if issues are found:
```bash
#!/bin/bash
# progressive-validation.sh

# Level 1: Quick check
if ! make test TEST="001 002 003"; then
    echo "Basic tests failed, running diagnostics..."

    # Level 2: Critical path
    if ! python scripts/production-validation.py --config s3_config.yaml --quick; then
        # Level 3: Full validation
        echo "Critical issues found, running full validation..."
        python scripts/production-validation.py --config s3_config.yaml
    fi
fi
```

- Use unique bucket names with timestamps
- Clean up after tests
- Preserve failing-test data for debugging
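A unique bucket name per run keeps repeated and parallel test runs from colliding; one common sketch (the `s3test-` prefix is arbitrary):

```bash
# Unique bucket name per test run: timestamp plus PID avoids collisions
# between parallel runs. S3 bucket names must be lowercase.
BUCKET="s3test-$(date +%Y%m%d-%H%M%S)-$$"
echo "Using bucket: $BUCKET"
```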
```bash
# Run independent test groups in parallel
make test GROUP=basic &
make test GROUP=multipart &
make test GROUP=performance &
wait
```
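Note that a bare `wait` discards the children's exit statuses, so a failing group would go unnoticed. A sketch that still fails the overall run (`run_group` is a hypothetical stand-in for the real `make test GROUP=...` calls):

```bash
# Collect exit codes from parallel jobs instead of discarding them.
# run_group is a hypothetical stand-in for: make test GROUP=$1
run_group() { true; }

run_group basic &      pid_basic=$!
run_group multipart &  pid_multipart=$!

status=0
wait "$pid_basic"     || status=1
wait "$pid_multipart" || status=1
echo "overall=$status"
```

`wait <pid>` returns that job's exit status, so `status` ends up non-zero if any group failed.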
```bash
# Track results over time
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
python scripts/production-validation.py \
    --config s3_config.yaml \
    --output-dir results/$TIMESTAMP

# Generate a trend report
python scripts/trend-analysis.py results/*/validation-report.json
```
```bash
# Retry failed tests with an increased timeout
for test in $(grep FAILED results/latest/report.txt | cut -d: -f1); do
    echo "Retrying test $test with extended timeout..."
    timeout 300 make test TEST=$test
done
```
```bash
# Fast iteration with a local MinIO
docker run -d -p 9000:9000 minio/minio server /data
make defconfig-docker-demo
make test TEST=004  # Test your specific feature
```
```bash
# Comprehensive but not exhaustive
python scripts/production-validation.py \
    --config staging.yaml \
    --quick \
    --skip-stress-tests
```
```bash
# Full validation with monitoring
python scripts/production-validation.py \
    --config prod.yaml \
    --monitor \
    --alert-on-failure \
    --output-dir /monitoring/s3-validation
```

| Level | Pass Rate | Max Duration | Critical Tests |
|---|---|---|---|
| Quick Smoke | 100% | 5 min | All must pass |
| Critical Path | 100% | 10 min | All must pass |
| Feature | 90% | 30 min | 80% must pass |
| Full Production | 95% | 60 min | 100% critical |
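These gates can be enforced mechanically from pass/fail counts. A sketch of the Feature-level 90% check with hardcoded sample numbers (real counts would come from the runner's report):

```bash
# Enforce a 90% pass-rate gate from sample counts (integer percent).
TOTAL=40
PASSED=37
RATE=$((PASSED * 100 / TOTAL))
echo "pass_rate=${RATE}%"
if [ "$RATE" -ge 90 ]; then
    echo "gate: PASS"
else
    echo "gate: FAIL"
fi
```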
- Validation Duration: Should remain consistent
- Pass Rate Trend: Should improve over time
- Critical Failures: Must be zero for production
- Performance Regression: <10% degradation allowed
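The 10% regression budget can be checked the same way. A sketch with hardcoded sample throughput numbers (real values would come from the stored baseline and the latest run):

```bash
# Flag a regression when throughput drops more than 10% below baseline.
BASELINE=120.0   # MB/s, from the saved baseline run (sample value)
CURRENT=105.0    # MB/s, from the latest run (sample value)

DROP=$(awk -v base="$BASELINE" -v cur="$CURRENT" \
    'BEGIN { printf "%.1f", (base - cur) / base * 100 }')
echo "degradation=${DROP}%"

# Fail when the drop exceeds the 10% budget
awk -v d="$DROP" 'BEGIN { exit (d > 10) ? 1 : 0 }' || echo "regression: FAIL"
```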
```bash
# Show only failures
grep -E "FAILED|ERROR" validation-report.txt

# Get failure details
python scripts/test-runner.py --config s3_config.yaml \
    --test <failed-test-id> \
    --verbose \
    --debug
```

| Issue | Quick Check | Solution |
|---|---|---|
| Timeout failures | `make test TEST=011` | Increase timeouts |
| Version errors | `make test TEST=200` | Check S3 compatibility |
| Performance issues | `make test TEST=600` | Check network/resources |
| Concurrent failures | `make test TEST=601` | Reduce concurrency |