Secure your AI models before deployment. ModelAudit is a static scanner that detects malicious code, backdoors, and security vulnerabilities in ML model files — without ever loading or executing them.
Full Documentation | Usage Examples | Supported Formats
Requires Python 3.10+
pip install modelaudit[all]
# Scan a file or directory
modelaudit model.pkl
modelaudit ./models/
# Export results for CI/CD
modelaudit model.pkl --format json --output results.json

$ modelaudit suspicious_model.pkl
Files scanned: 1 | Issues found: 2 critical, 1 warning
1. suspicious_model.pkl (pos 28): [CRITICAL] Malicious code execution attempt
Why: Contains os.system() call that could run arbitrary commands
2. suspicious_model.pkl (pos 52): [WARNING] Dangerous pickle deserialization
Why: Could execute code when the model loads
- Code execution attacks in Pickle, PyTorch, NumPy, and Joblib files
- Model backdoors with hidden functionality or suspicious weight patterns
- Embedded secrets — API keys, tokens, and credentials in model weights or metadata
- Network indicators — URLs, IPs, and socket usage that could enable data exfiltration
- Archive exploits — path traversal, symlink attacks in ZIP/TAR/7z files
- Unsafe ML operations — Lambda layers, custom ops, TorchScript/JIT, template injection
- Supply chain risks — tampering, license violations, suspicious configurations
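To see why formats like Pickle sit in the highest risk tier, here is a minimal, deliberately harmless sketch of the code-execution vector these scanners look for: pickle's `__reduce__` hook lets a file name an arbitrary callable to invoke at load time. (This sketch illustrates the general technique, not ModelAudit's internal detection logic.)

```python
import pickle

class Payload:
    # __reduce__ tells the unpickler to call an arbitrary callable on load.
    # Here the payload is a harmless print(); a real attack would reference
    # os.system or similar instead.
    def __reduce__(self):
        return (print, ("this ran the moment the model was loaded",))

blob = pickle.dumps(Payload())

# A static scanner never calls pickle.loads(); it inspects the opcode
# stream, where the referenced callable's module and name appear in
# plain text next to a REDUCE opcode.
assert b"builtins" in blob and b"print" in blob
```

Because the callable reference is visible in the raw byte stream, it can be flagged without ever deserializing the file.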
ModelAudit includes 30 specialized scanners covering model, archive, and configuration formats:
| Format | Extensions | Risk |
|---|---|---|
| Pickle | .pkl, .pickle, .dill | HIGH |
| PyTorch | .pt, .pth, .ckpt, .bin | HIGH |
| Joblib | .joblib | HIGH |
| NumPy | .npy, .npz | HIGH |
| TensorFlow | .pb, SavedModel dirs | MEDIUM |
| Keras | .h5, .hdf5, .keras | MEDIUM |
| ONNX | .onnx | MEDIUM |
| XGBoost | .bst, .model, .ubj | MEDIUM |
| SafeTensors | .safetensors | LOW |
| GGUF/GGML | .gguf, .ggml | LOW |
| JAX/Flax | .msgpack, .flax, .orbax, .jax | LOW |
| TFLite | .tflite | LOW |
| ExecuTorch | .ptl, .pte | LOW |
| TensorRT | .engine, .plan | LOW |
| PaddlePaddle | .pdmodel, .pdiparams | LOW |
| OpenVINO | .xml | LOW |
| Skops | .skops | HIGH |
| PMML | .pmml | LOW |
Plus scanners for ZIP, TAR, 7-Zip, OCI layers, Jinja2 templates, JSON/YAML metadata, manifests, and text files.
View complete format documentation
Scan models directly from remote registries and cloud storage:
# Hugging Face
modelaudit https://huggingface.co/gpt2
modelaudit hf://microsoft/DialoGPT-medium
# Cloud storage
modelaudit s3://bucket/model.pt
modelaudit gs://bucket/models/
# MLflow registry
modelaudit models:/MyModel/Production
# JFrog Artifactory (files and folders)
# Auth: export JFROG_API_TOKEN=...
modelaudit https://company.jfrog.io/artifactory/repo/model.pt
modelaudit https://company.jfrog.io/artifactory/repo/models/
# DVC-tracked models
modelaudit model.dvc

Credentials are read from environment variables:

- `HF_TOKEN` for private Hugging Face repositories
- `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` (and optional `AWS_SESSION_TOKEN`) for S3
- `GOOGLE_APPLICATION_CREDENTIALS` for GCS
- `MLFLOW_TRACKING_URI` for MLflow registry access
- `JFROG_API_TOKEN` or `JFROG_ACCESS_TOKEN` for JFrog Artifactory
- Store credentials in environment variables or a secrets manager, and never commit tokens/keys.
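In CI, it can help to preflight these credentials before a remote scan fails mid-pipeline. A minimal sketch (the `missing_s3_creds` helper is hypothetical, not part of ModelAudit, and checks only the two mandatory S3 variables listed above):

```python
import os

def missing_s3_creds(env=None):
    """Return the required AWS variables that are not yet set."""
    env = os.environ if env is None else env
    required = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
    return [name for name in required if name not in env]

# Example: fail fast before `modelaudit s3://bucket/model.pt` would.
if missing_s3_creds():
    print("missing:", ", ".join(missing_s3_creds()))
```

The same pattern extends to `HF_TOKEN`, `GOOGLE_APPLICATION_CREDENTIALS`, and the JFrog tokens for the other remote sources.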
# Everything (recommended)
pip install modelaudit[all]
# Core only (pickle, numpy, archives)
pip install modelaudit
# Specific frameworks
pip install modelaudit[tensorflow,pytorch,h5,onnx,safetensors]
# CI/CD environments
pip install modelaudit[all-ci]
# Docker
docker run --rm -v "$(pwd)":/app ghcr.io/promptfoo/modelaudit:latest model.pkl

--format {text,json,sarif} Output format (default: text)
--output FILE Write results to file
--strict Fail on warnings, scan all file types
--sbom FILE Generate CycloneDX SBOM
--stream Download, scan, and delete files one-by-one (saves disk)
--max-size SIZE Size limit (e.g., 10GB)
--timeout SECONDS Override scan timeout
--dry-run Preview what would be scanned
--verbose / --quiet Control output detail
--blacklist PATTERN Additional patterns to flag
--no-cache Disable result caching
--cache-dir DIR Set cache directory for downloads and scan results
--progress Force progress display
- 0: No security issues detected
- 1: Security issues detected
- 2: Scan errors
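These exit codes make it straightforward to gate a deploy script. A minimal sketch (the `gate` helper is hypothetical, not part of ModelAudit; it only maps the documented codes to a decision):

```python
def gate(returncode: int) -> str:
    """Map modelaudit's documented exit codes to a CI decision."""
    if returncode == 0:
        return "pass"
    if returncode == 1:
        return "fail: security issues detected"
    if returncode == 2:
        return "fail: scan errors"
    return "fail: unexpected exit code"

# Typical use after running the scan, e.g. with subprocess:
#   result = subprocess.run(["modelaudit", "model.pkl"])
#   decision = gate(result.returncode)
```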
ModelAudit includes telemetry for product reliability and usage analytics.
- Collected metadata can include command usage, scan timing, scanner/file-type usage, issue severity/type aggregates, and model path or URL identifiers.
- Model files are scanned locally and ModelAudit does not upload model binary contents as telemetry events.
- Telemetry is disabled by default in CI/test environments and in editable development installs.
Opt out explicitly with either environment variable:
export PROMPTFOO_DISABLE_TELEMETRY=1
# or
export NO_ANALYTICS=1

To opt in during editable/development installs:

export MODELAUDIT_TELEMETRY_DEV=1

# JSON for CI/CD pipelines
modelaudit model.pkl --format json --output results.json
# SARIF for code scanning platforms
modelaudit model.pkl --format sarif --output results.sarif

- Run `modelaudit doctor --show-failed` to list unavailable scanners and missing optional deps.
- If `pip` installs an older release, verify Python is 3.10+ (`python --version`).
- For additional troubleshooting and cloud auth guidance, see:
- Full docs — setup, configuration, and advanced usage
- Usage examples — CI/CD integration, remote scanning, SBOM generation
- Supported formats — detailed scanner documentation
- Support policy — supported Python/OS versions and maintenance policy
- Security model and limitations — what ModelAudit does and does not guarantee
- Compatibility matrix — file formats vs optional dependencies
- Offline/air-gapped guide — secure operation without internet access
- Scanner contributor quickstart — safe workflow for new scanner development
- Troubleshooting — run `modelaudit doctor --show-failed` to check scanner availability
MIT License — see LICENSE for details.
