Enterprise-level, modular, high-performance HTTP/HTTPS application-layer stress and traffic amplification analysis tool.
This is NOT a DDoS tool. For authorized targets and penetration testing / capacity planning only.
Turkish: README.tr.md (same content and detailed usage examples, in Turkish).
```bash
git clone https://github.com/cumakurt/zirdeli.git
cd zirdeli
make install
python -m zirdeli https://www.example.com --accept-responsibility --skip-stress
```

Reports are written to `reports/`. See Install and Usage below.
Cuma KURT – cumakurt@gmail.com
https://www.linkedin.com/in/cuma-kurt-34414917/
https://github.com/cumakurt/zirdeli
- Python 3.11+
- Config: `config.yaml` (optional) or `ZIRDELI_*` env vars
- brotli: for sites that use `Content-Encoding: br` (included in dependencies)
```bash
pip install -e .
# or
pip install -r requirements.txt && pip install -e .
```

For a one-command setup (create a venv and install): `make install`. See the Makefile for all targets.
```bash
python -m zirdeli <URL>
```

- URL (required): start URL of the target domain, e.g. `https://www.example.com`. The tool crawls only same-domain URLs, then runs amplification analysis, ranks the top N URLs, runs a stress test, and generates HTML + JSON reports.
| Option | Short | Description |
|---|---|---|
| `--config PATH` | `-c` | Use a specific YAML config file |
| `--skip-stress` | — | Only crawl and report; do not run stress |
| `--fresh` | — | Ignore cached crawl data and always run a new crawl |
| `--stress-mode MODE` | — | `resource_aware` (default), `max_traffic`, `until_failure`, `ramp_up` |
| `--aggressive` | — | Turbo: high concurrency, no RPS cap, no kill-switch (authorized targets only) |
| `--stress-all-urls` | — | Stress all discovered URLs instead of only the top N |
| `--force` | `-f` | Force mode: `until_failure` + toughest scenarios, no time limit; Ctrl+C to stop (authorized targets only) |
| `--accept-responsibility` | — | Skip the legal consent prompt (e.g. CI / non-interactive); you still accept responsibility |
| `--version` | `-v` | Print version and exit |
Runs: crawl → amplification → rank top 100 → stress (60s default) → HTML + JSON report.
```bash
python -m zirdeli https://www.example.com
```

What happens:

- Crawl: fetches the start URL, discovers same-domain links (from HTML, sitemaps, common paths), and stores per-URL metrics (bytes sent/received, status codes, response times).
- Amplification: computes `amplification_ratio = bytes_received / bytes_sent` for each URL.
- Ranking: scores URLs by amplification, response size, and 200 OK stability; keeps the top N (default 100).
- Stress: runs a load test for `duration_sec` (default 60s) in `resource_aware` mode (concurrency adapts to CPU/RAM, RPS cap, kill-switch).
- Report: writes `reports/report_<target>_<YYYYMMDD_HHMMSS>.html` and `.json`.
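The ratio itself is a one-liner; a minimal sketch (illustrative, not zirdeli's internal code):

```python
# Hedged illustration of the amplification formula above.
# bytes_sent / bytes_received would come from the crawler's per-URL metrics.
def amplification_ratio(bytes_sent: int, bytes_received: int) -> float:
    """Higher means a small request yields a large response."""
    return bytes_received / bytes_sent if bytes_sent else 0.0

# A ~300-byte GET that returns a 1.5 MB page amplifies 5000x.
print(amplification_ratio(300, 1_500_000))  # 5000.0
```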
Example console output:
```text
zirdeli – Resilience & L7 assessment
Target: https://www.example.com
Crawl …
Crawl done: 150 URLs.
Amplification & rank …
Ranked: 100 URLs.
Cache: .zirdeli_data/www_example_com.json
Stress: 60s, mode=resource_aware …
Stress 60/60s ETA 0s | Requests 12000 | RPS 200 | Errors 0
Stress done: 12000 requests | 200 RPS | 0 errors
Reports …
Done: report_www_example_com_20260205_143022.html | report_www_example_com_20260205_143022.json
```
Override defaults with a YAML file. Path can be absolute or relative.
```bash
python -m zirdeli https://www.example.com --config config.yaml
```

Minimal custom config (`my_config.yaml`):

```yaml
crawler:
  max_depth: 3
  max_urls_per_domain: 500
stress:
  duration_sec: 30.0
  max_concurrent: 200
reporting:
  top_n_urls: 50
  output_dir: my_reports
```

Then run:

```bash
python -m zirdeli https://www.example.com -c my_config.yaml
```

Reports will be written to `my_reports/`, the crawl will be limited to 500 URLs and depth 3, the stress test will run for 30s with up to 200 concurrent connections, and only the top 50 URLs will be stressed.
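To sanity-check a config file before a run, note that it is plain YAML; a minimal sketch using PyYAML (an assumption; zirdeli's own loader is internal):

```python
# Minimal sketch: reading my_config.yaml with PyYAML (pip install pyyaml).
# This only checks the file's shape; zirdeli parses it itself at startup.
import yaml

with open("my_config.yaml") as fh:
    cfg = yaml.safe_load(fh)

assert cfg["crawler"]["max_depth"] == 3
print(cfg["stress"]["duration_sec"], cfg["reporting"]["output_dir"])  # 30.0 my_reports
```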
Useful when you only want to discover URLs and see amplification/ranking, without generating load.
```bash
python -m zirdeli https://www.example.com --skip-stress
```

Use case: quick reconnaissance, i.e. which endpoints have high amplification, how many same-domain URLs exist, and what the link map looks like. No stress metrics (RPS, errors, timeline) will appear in the report.
By default, if crawl data already exists for the domain, the tool prompts: Use existing data? [Y/n]. To always run a new crawl and skip the prompt (e.g. in scripts or after site changes), use --fresh:
```bash
python -m zirdeli https://www.example.com --fresh
```

Use case: you updated the site and want fresh discovery and metrics, or you run in CI and do not want interactive prompts.
**resource_aware** (default)

- Concurrency adapts to CPU/RAM (increases when CPU is below a threshold, decreases when RAM is above a threshold).
- The RPS cap and kill-switch (stops if local RAM gets too high) apply unless `--aggressive` is used.
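Conceptually, the resource-aware controller behaves like the sketch below (a hedged illustration using `psutil`; zirdeli's actual thresholds and scaling steps are internal):

```python
# Hedged sketch of resource-aware concurrency scaling, not zirdeli's real code.
# Assumes psutil for CPU/RAM readings; all thresholds here are illustrative.
import psutil

CPU_LOW, RAM_HIGH, KILL_RAM = 70.0, 85.0, 95.0  # percent

def adjust_concurrency(current: int, lo: int = 10, hi: int = 1000) -> int:
    cpu = psutil.cpu_percent(interval=0.1)
    ram = psutil.virtual_memory().percent
    if ram >= KILL_RAM:
        raise SystemExit("kill-switch: local RAM too high")  # safety stop
    if cpu < CPU_LOW and ram < RAM_HIGH:
        return min(current + 10, hi)  # headroom left: scale up
    if ram > RAM_HIGH:
        return max(current - 10, lo)  # memory pressure: scale down
    return current
```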
```bash
python -m zirdeli https://www.example.com
# same as:
python -m zirdeli https://www.example.com --stress-mode resource_aware
```

**max_traffic**
- Fixed high concurrency (up to `max_concurrent_cap`), no dynamic scaling. Use it when you want maximum sustained load from the client.
```bash
python -m zirdeli https://www.example.com --stress-mode max_traffic
```

**until_failure**
- Runs until the target starts failing: stops when the error rate or the number of consecutive failures reaches the configured threshold (e.g. `failure_threshold_ratio: 0.5`, `consecutive_failures_to_stop: 10`). Good for finding "when does the service break?".
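The stop condition amounts to a simple predicate; a hedged sketch built from the two thresholds above (zirdeli's actual bookkeeping is internal):

```python
# Hedged sketch of the until_failure stop check, using the example thresholds above.
FAILURE_THRESHOLD_RATIO = 0.5
CONSECUTIVE_FAILURES_TO_STOP = 10

def should_stop(total: int, errors: int, consecutive_failures: int) -> bool:
    if total and errors / total >= FAILURE_THRESHOLD_RATIO:
        return True  # half (or more) of all requests are failing
    return consecutive_failures >= CONSECUTIVE_FAILURES_TO_STOP

print(should_stop(total=200, errors=120, consecutive_failures=3))  # True
```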
```bash
python -m zirdeli https://www.example.com --stress-mode until_failure
```

**ramp_up**
- Increases RPS step by step (e.g. +100 RPS every 30s) until the target starts failing. The report will include the final ramp-up RPS and a breaking-point-style summary.
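The stepping logic is straightforward; a hedged sketch with the example numbers above (step size and interval are configurable in practice):

```python
# Hedged sketch of ramp_up stepping: +100 RPS per interval until failure.
def ramp_up(run_interval, start_rps: int = 100, step: int = 100) -> int:
    """run_interval(rps) drives load at `rps` for one interval (e.g. 30s)
    and returns True once the target starts failing."""
    rps = start_rps
    while not run_interval(rps):
        rps += step
    return rps  # first RPS level at which the target failed

# Example: a stand-in target that breaks above 400 RPS.
print(ramp_up(lambda rps: rps > 400))  # 500
```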
```bash
python -m zirdeli https://www.example.com --stress-mode ramp_up
```

**--aggressive** (turbo)

Disables the kill-switch and RPS cap; uses high concurrency (e.g. 5000+) and a shorter request timeout for fail-fast behavior. Use only on authorized targets.
```bash
python -m zirdeli https://www.example.com --aggressive
```

Often combined with `--stress-mode max_traffic` or `until_failure` for maximum load. You can also set `aggressive: true` under `stress:` in `config.yaml`.
By default, only the top N ranked URLs (e.g. 100) are stressed. To stress every discovered URL (more endpoints, more traffic):
```bash
python -m zirdeli https://www.example.com --stress-all-urls
```

Use case: a capacity test across the whole site, not just the "riskiest" N URLs.
- Fresh crawl + `max_traffic` + custom config:

```bash
python -m zirdeli https://www.example.com --fresh --stress-mode max_traffic --config config.yaml
```

- Crawl only, custom report dir (via config):

```yaml
# config.yaml
reporting:
  output_dir: reports/production
```

```bash
python -m zirdeli https://www.example.com --skip-stress -c config.yaml
```

- Version:

```bash
python -m zirdeli --version
# or
python -m zirdeli -v
```

- CI / non-interactive (no legal prompt):

```bash
python -m zirdeli https://www.example.com --accept-responsibility --fresh --skip-stress
```

Caching and prompts:

- After a crawl, data is saved under `reporting.cache_dir` (default: `.zirdeli_data`) per domain, e.g. `.zirdeli_data/www_example_com.json`.
- When you run the same site again without `--fresh`, the tool asks: Use existing data? [Y/n] — default Y (reuse the cache and run stress/report). Answer n to run a new crawl.
- Use `--fresh` to skip the prompt and always start a new crawl (e.g. in scripts).
- In non-interactive environments (CI, cron), use `--accept-responsibility` so the legal prompt is skipped; the run will proceed without asking "Accept? [y/N]".
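If you script around the cache, the per-domain filename follows the pattern shown above; a hypothetical helper (not part of zirdeli's API):

```python
# Hypothetical helper mirroring the cache naming shown above
# (www.example.com -> .zirdeli_data/www_example_com.json).
from pathlib import Path
from urllib.parse import urlparse

def cache_path(url: str, cache_dir: str = ".zirdeli_data") -> Path:
    host = (urlparse(url).hostname or "").replace(".", "_")
    return Path(cache_dir) / f"{host}.json"

p = cache_path("https://www.example.com")
print(p, "->", "cached" if p.exists() else "no cache yet")
```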
Build:

```bash
docker build -t zirdeli .
```

Run (basic):

```bash
docker run --rm zirdeli https://www.example.com
```

Run with resource limits (recommended in production):

```bash
docker run --rm --memory=1g --cpus=1 zirdeli https://www.example.com
```

Run with a custom config (mount the YAML):

```bash
docker run --rm -v "$(pwd)/config.yaml:/app/config.yaml" zirdeli https://www.example.com --config /app/config.yaml
```

Run with a fresh crawl and skip stress:

```bash
docker run --rm zirdeli https://www.example.com --fresh --skip-stress
```

**Reports in your current directory**
By default the container runs as its own user; if you only mount `./reports`, you can get Permission denied errors when it writes. Run the container as your host user so the mounted directories are writable:
```bash
# From the directory where you want reports (e.g. ~/myproject)
mkdir -p reports .zirdeli_data
docker run --rm --user "$(id -u):$(id -g)" \
  -v "$(pwd)/reports:/app/reports" \
  -v "$(pwd)/.zirdeli_data:/app/.zirdeli_data" \
  zirdeli https://www.example.com --accept-responsibility
```

- `--user "$(id -u):$(id -g)"`: run as your host user so the process can write to the mounted `reports` (and the `.zirdeli_data` cache).
- `-v "$(pwd)/reports:/app/reports"`: the app writes to `/app/reports`, so files appear in `./reports` on the host.
- After the run, open `./reports/report_<domain>_<timestamp>.html`.
**Example: one-shot run with reports in the current directory**
```bash
mkdir -p reports .zirdeli_data
docker run --rm --user "$(id -u):$(id -g)" \
  -v "$(pwd)/reports:/app/reports" \
  -v "$(pwd)/.zirdeli_data:/app/.zirdeli_data" \
  zirdeli https://www.example.com --accept-responsibility
ls reports/
```

With a custom config (config and reports in the current directory):
```bash
mkdir -p reports .zirdeli_data
docker run --rm --user "$(id -u):$(id -g)" \
  -v "$(pwd)/config.yaml:/app/config.yaml" \
  -v "$(pwd)/reports:/app/reports" \
  -v "$(pwd)/.zirdeli_data:/app/.zirdeli_data" \
  zirdeli https://www.example.com --config /app/config.yaml --accept-responsibility
```

The project includes a Makefile for quick setup and Docker runs with reports in your current directory.
| Target | Description |
|---|---|
| `make install` | Create `.venv` and install the package (editable) |
| `make install-dev` | Install with dev dependencies (pytest, coverage, ruff) |
| `make test` | Run pytest |
| `make test-cov` | Run pytest with a coverage report |
| `make docker-build` | Build the Docker image `zirdeli` |
| `make docker-run` | Run with Docker; reports are written to `./reports` in the current directory |
| `make docker-run-config` | Same as `docker-run` but also mounts `./config.yaml` |
| `make clean` | Remove `.venv`, `__pycache__`, `.pytest_cache` |
Reports in the current directory via the Makefile:

```bash
# From the directory where you want reports
make docker-build
make docker-run URL=https://www.example.com
# Reports appear in ./reports/
```

Optional: `make docker-run URL=https://example.com SKIP_STRESS=1` (crawl only, no stress). Use `make docker-run-config` if you have a `config.yaml` in the current directory.
Create a virtual environment at the project root and install the package: `python -m venv .venv`, then `.venv/bin/pip install -e ".[dev]"`. Then:

```bash
# Run all tests (use the project venv: .venv/bin/python -m pytest)
python -m pytest tests/ -v

# With coverage (install dev deps: pip install -e ".[dev]")
python -m pytest tests/ -v --cov=src/zirdeli --cov-report=term-missing
```

All limits are configurable via `config.yaml` or environment variables (`ZIRDELI_CRAWLER_*`, `ZIRDELI_STRESS_*`, `ZIRDELI_REPORT_*`, etc.). See `config.yaml` in the repo for defaults.
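The variable names follow a section-prefixed pattern; a hedged sketch of how such names could map onto config keys (zirdeli's actual parsing may differ, e.g. `ZIRDELI_REPORT_*` vs. the `reporting:` YAML section):

```python
# Hedged sketch: collecting ZIRDELI_* env vars into section/option pairs.
# This mirrors the naming pattern above; zirdeli's real loader is internal.
import os

overrides: dict[str, dict[str, str]] = {}
for key, value in os.environ.items():
    if key.startswith("ZIRDELI_"):
        section, _, option = key[len("ZIRDELI_"):].partition("_")
        overrides.setdefault(section.lower(), {})[option.lower()] = value

# With the exports below set:
# {'stress': {'duration_sec': '120'}, 'report': {'top_n_urls': '50'}}
print(overrides)
```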
Example env overrides:
```bash
export ZIRDELI_STRESS_DURATION_SEC=120
export ZIRDELI_REPORT_TOP_N_URLS=50
python -m zirdeli https://www.example.com
```

- `reports/report_<target>_<YYYYMMDD_HHMMSS>.html` – interactive HTML report (target domain + timestamp; each scan produces a unique file). Contains the link map, top risky URLs, amplification charts, resource timeline, stress metrics, and resilience notes.
- `reports/report_<target>_<YYYYMMDD_HHMMSS>.json` – machine-readable JSON (same basename) with the same data.
- Crawler: same-domain only, URL canonicalization, param-flood and loop protection, optional robots.txt, sitemaps, and common-path probing.
- Amplification: `amplification_ratio = bytes_received / bytes_sent` per URL.
- Scoring: top N URLs by amplification, response size, stable 200 OK, and dynamic-content potential.
- Stress: multiple modes; the worker count is independent of the URL count for maximum traffic.
  - resource_aware (default): dynamic concurrency by CPU/RAM, max RPS, kill-switch, random headers.
  - max_traffic: full concurrency (`max_concurrent_cap` up to 20k); aggressive = turbo (min 5000 concurrent), no RPS cap, 15s fail-fast timeout.
  - until_failure: run until the error rate or consecutive failures hit the threshold (target down).
  - ramp_up: increase RPS step by step until the target starts failing.
  - When the crawl discovers few or no URLs, stress still runs on the target URL (homepage) only, so the test is not skipped.
- Reporting: HTML + JSON with the domain link map, top risky URLs, resource timeline, stress mode, and stop reason.
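A hedged sketch of how those ranking signals could combine into a single score (the weights are illustrative; zirdeli's actual scoring is internal):

```python
# Illustrative ranking score from the signals listed above; weights are made up.
def url_score(amplification: float, response_bytes: int,
              ok_ratio: float, dynamic: bool) -> float:
    score = 0.5 * amplification               # small request, big response
    score += 0.2 * (response_bytes / 1e6)     # absolute response size, in MB
    score += 0.2 * ok_ratio                   # fraction of stable 200 OK responses
    score += 0.1 * (1.0 if dynamic else 0.0)  # dynamic-content potential
    return score

# Rank descending and keep the top N (default 100), as the pipeline describes.
print(url_score(amplification=12.0, response_bytes=1_500_000, ok_ratio=1.0, dynamic=True))
```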
This project is licensed under the GNU General Public License v3.0 or later (GPL-3.0-or-later). You may use, modify, and distribute the software under the terms of the GPL. See the LICENSE file for the full text.