ModelAudit

Secure your AI models before deployment. Static scanner that detects malicious code, backdoors, and security vulnerabilities in ML model files — without ever loading or executing them.

Full Documentation | Usage Examples | Supported Formats

Quick Start

Requires Python 3.10+

pip install modelaudit[all]

# Scan a file or directory
modelaudit model.pkl
modelaudit ./models/

# Export results for CI/CD
modelaudit model.pkl --format json --output results.json

Example output:

$ modelaudit suspicious_model.pkl

Files scanned: 1 | Issues found: 2 critical, 1 warning

1. suspicious_model.pkl (pos 28): [CRITICAL] Malicious code execution attempt
   Why: Contains os.system() call that could run arbitrary commands

2. suspicious_model.pkl (pos 52): [WARNING] Dangerous pickle deserialization
   Why: Could execute code when the model loads

What It Detects

  • Code execution attacks in Pickle, PyTorch, NumPy, and Joblib files
  • Model backdoors with hidden functionality or suspicious weight patterns
  • Embedded secrets — API keys, tokens, and credentials in model weights or metadata
  • Network indicators — URLs, IPs, and socket usage that could enable data exfiltration
  • Archive exploits — path traversal, symlink attacks in ZIP/TAR/7z files
  • Unsafe ML operations — Lambda layers, custom ops, TorchScript/JIT, template injection
  • Supply chain risks — tampering, license violations, suspicious configurations
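For illustration, a minimal sketch of the first attack class above: a pickle whose deserialization would run a shell command via `__reduce__`. The filename and payload command are hypothetical; never load untrusted pickle files.

```shell
# Sketch: craft a pickle that would execute a command when loaded -- the
# kind of payload the pickle scanner reports as CRITICAL. Illustrative only.
python3 - <<'EOF'
import pickle, os

class Evil:
    # pickle serializes this object as "call os.system('echo pwned')",
    # so merely unpickling the file runs the command
    def __reduce__(self):
        return (os.system, ("echo pwned",))

with open("suspicious_model.pkl", "wb") as f:
    pickle.dump(Evil(), f)
EOF

# Scanning is static, so no payload code runs during analysis:
# modelaudit suspicious_model.pkl
```

Because ModelAudit inspects the pickle opcode stream rather than unpickling it, the `os.system` reference is reported without the command ever executing.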

Supported Formats

ModelAudit includes 30 specialized scanners covering model, archive, and configuration formats:

Format Extensions Risk
Pickle .pkl, .pickle, .dill HIGH
PyTorch .pt, .pth, .ckpt, .bin HIGH
Joblib .joblib HIGH
NumPy .npy, .npz HIGH
TensorFlow .pb, SavedModel dirs MEDIUM
Keras .h5, .hdf5, .keras MEDIUM
ONNX .onnx MEDIUM
XGBoost .bst, .model, .ubj MEDIUM
SafeTensors .safetensors LOW
GGUF/GGML .gguf, .ggml LOW
JAX/Flax .msgpack, .flax, .orbax, .jax LOW
TFLite .tflite LOW
ExecuTorch .ptl, .pte LOW
TensorRT .engine, .plan LOW
PaddlePaddle .pdmodel, .pdiparams LOW
OpenVINO .xml LOW
Skops .skops HIGH
PMML .pmml LOW

Plus scanners for ZIP, TAR, 7-Zip, OCI layers, Jinja2 templates, JSON/YAML metadata, manifests, and text files.

View complete format documentation

Remote Sources

Scan models directly from remote registries and cloud storage:

# Hugging Face
modelaudit https://huggingface.co/gpt2
modelaudit hf://microsoft/DialoGPT-medium

# Cloud storage
modelaudit s3://bucket/model.pt
modelaudit gs://bucket/models/

# MLflow registry
modelaudit models:/MyModel/Production

# JFrog Artifactory (files and folders)
# Auth: export JFROG_API_TOKEN=...
modelaudit https://company.jfrog.io/artifactory/repo/model.pt
modelaudit https://company.jfrog.io/artifactory/repo/models/

# DVC-tracked models
modelaudit model.dvc

Authentication Environment Variables

  • HF_TOKEN for private Hugging Face repositories
  • AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (and optional AWS_SESSION_TOKEN) for S3
  • GOOGLE_APPLICATION_CREDENTIALS for GCS
  • MLFLOW_TRACKING_URI for MLflow registry access
  • JFROG_API_TOKEN or JFROG_ACCESS_TOKEN for JFrog Artifactory
  • Store credentials in environment variables or a secrets manager, and never commit tokens/keys.

Installation

# Everything (recommended)
pip install modelaudit[all]

# Core only (pickle, numpy, archives)
pip install modelaudit

# Specific frameworks
pip install modelaudit[tensorflow,pytorch,h5,onnx,safetensors]

# CI/CD environments
pip install modelaudit[all-ci]

# Docker
docker run --rm -v "$(pwd)":/app ghcr.io/promptfoo/modelaudit:latest model.pkl

CLI Options

--format {text,json,sarif}   Output format (default: text)
--output FILE                Write results to file
--strict                     Fail on warnings, scan all file types
--sbom FILE                  Generate CycloneDX SBOM
--stream                     Download, scan, and delete files one-by-one (saves disk)
--max-size SIZE              Size limit (e.g., 10GB)
--timeout SECONDS            Override scan timeout
--dry-run                    Preview what would be scanned
--verbose / --quiet          Control output detail
--blacklist PATTERN          Additional patterns to flag
--no-cache                   Disable result caching
--cache-dir DIR              Set cache directory for downloads and scan results
--progress                   Force progress display

Exit Codes

  • 0: No security issues detected
  • 1: Security issues detected
  • 2: Scan errors
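In CI, the exit code can drive the pass/fail decision directly. A minimal sketch (the `classify_exit` helper is ours for readability, not part of modelaudit):

```shell
# Map modelaudit's documented exit codes to a CI verdict.
classify_exit() {
  case "$1" in
    0) echo "clean" ;;
    1) echo "issues-found" ;;
    2) echo "scan-error" ;;
    *) echo "unexpected" ;;
  esac
}

# Typical CI step:
# modelaudit ./models/ --format json --output results.json
# verdict=$(classify_exit $?)
# [ "$verdict" = "clean" ] || exit 1
```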

Telemetry and Privacy

ModelAudit includes telemetry for product reliability and usage analytics.

  • Collected metadata can include command usage, scan timing, scanner/file-type usage, issue severity/type aggregates, and model path or URL identifiers.
  • Model files are scanned locally and ModelAudit does not upload model binary contents as telemetry events.
  • Telemetry is disabled automatically in CI/test environments and in editable development installs by default.

Opt out explicitly with either environment variable:

export PROMPTFOO_DISABLE_TELEMETRY=1
# or
export NO_ANALYTICS=1

To opt in during editable/development installs:

export MODELAUDIT_TELEMETRY_DEV=1

Output Examples

# JSON for CI/CD pipelines
modelaudit model.pkl --format json --output results.json

# SARIF for code scanning platforms
modelaudit model.pkl --format sarif --output results.sarif
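A pipeline can then gate on the report's contents. The field names below (`issues`, `severity`) are assumptions for illustration, with a synthetic report standing in for real modelaudit output; check the JSON schema in the documentation before relying on them.

```shell
# Sketch: count critical findings in a JSON report and surface the total.
# Synthetic report standing in for real scanner output:
cat > results.json <<'EOF'
{"issues": [{"severity": "critical", "message": "os.system call in pickle"}]}
EOF

# One-line JSON parse; a CI job could fail when the count is nonzero.
critical=$(python3 -c 'import json; r = json.load(open("results.json")); print(sum(1 for i in r["issues"] if i["severity"] == "critical"))')
echo "critical findings: $critical"
```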

Troubleshooting

Documentation

License

MIT License — see LICENSE for details.
