
zirdeli

Enterprise-level, modular, high-performance HTTP/HTTPS application-layer stress and traffic amplification analysis tool.

This is NOT a DDoS tool. For authorized targets and penetration testing / capacity planning only.

Türkçe: README.tr.md provides the same content and detailed usage examples in Turkish.


Quick start (GitHub)

git clone https://github.com/cumakurt/zirdeli.git
cd zirdeli
make install
python -m zirdeli https://www.example.com --accept-responsibility --skip-stress

Reports are written to reports/. See Install and Usage below.


Developer

Cuma KURT <cumakurt@gmail.com>
https://www.linkedin.com/in/cuma-kurt-34414917/
https://github.com/cumakurt/zirdeli


Requirements

  • Python 3.11+
  • Config: config.yaml (optional) or env vars ZIRDELI_*
  • brotli: for sites that use Content-Encoding: br (included in dependencies)

Install

pip install -e .
# or
pip install -r requirements.txt && pip install -e .

For a one-command setup (create venv and install): make install. See Makefile for all targets.


Usage

Basic command

python -m zirdeli <URL>
  • URL (required): Start URL of the target domain, e.g. https://www.example.com. The tool crawls only same-domain URLs, then runs amplification analysis, ranks top N URLs, runs a stress test, and generates HTML + JSON reports.

Quick reference

  • --config PATH (-c): Use a specific YAML config file
  • --skip-stress: Only crawl and report; do not run stress
  • --fresh: Ignore cached crawl data and always run a new crawl
  • --stress-mode MODE: resource_aware (default), max_traffic, until_failure, ramp_up
  • --aggressive: Turbo: high concurrency, no RPS cap, no kill-switch (authorized only)
  • --stress-all-urls: Stress all discovered URLs instead of only the top N
  • --force (-f): Force mode: until_failure + toughest scenarios, no time limit; Ctrl+C to stop (authorized only)
  • --accept-responsibility: Skip the legal consent prompt (e.g. CI / non-interactive); you still accept responsibility
  • --version (-v): Print version and exit

Detailed usage examples

Example 1: Full pipeline (first run)

Runs: crawl → amplification → rank top 100 → stress (60s default) → HTML + JSON report.

python -m zirdeli https://www.example.com

What happens:

  1. Crawl: Fetches the start URL, discovers same-domain links (from HTML, sitemaps, common paths), and stores per-URL metrics (bytes sent/received, status codes, response times).
  2. Amplification: Computes amplification_ratio = bytes_received / bytes_sent for each URL.
  3. Ranking: Scores URLs by amplification, response size, and 200 OK stability; keeps top N (default 100).
  4. Stress: Runs load test for duration_sec (default 60s) in resource_aware mode (concurrency adapts to CPU/RAM, RPS cap, kill-switch).
  5. Report: Writes reports/report_<target>_<YYYYMMDD_HHMMSS>.html and .json.
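The amplification and ranking steps (2 and 3) can be sketched roughly as follows. The per-URL field names and the scoring weights here are illustrative assumptions, not zirdeli's actual internals:

```python
# Illustrative sketch of the amplification + ranking steps.
# Field names and score weights are assumptions, not zirdeli's real scoring.

def amplification_ratio(bytes_received: int, bytes_sent: int) -> float:
    """amplification_ratio = bytes_received / bytes_sent (0 if nothing was sent)."""
    return bytes_received / bytes_sent if bytes_sent else 0.0

def rank_urls(metrics: dict[str, dict], top_n: int = 100) -> list[str]:
    """Score each URL by amplification, response size, and 200 OK stability."""
    def score(m: dict) -> float:
        amp = amplification_ratio(m["bytes_received"], m["bytes_sent"])
        stability = 1.0 if m["status"] == 200 else 0.0  # keep only stable 200s
        return amp * stability * m["bytes_received"]
    ranked = sorted(metrics, key=lambda u: score(metrics[u]), reverse=True)
    return ranked[:top_n]

metrics = {
    "/big.json": {"bytes_sent": 200, "bytes_received": 100_000, "status": 200},
    "/404":      {"bytes_sent": 200, "bytes_received": 1_000,   "status": 404},
    "/page":     {"bytes_sent": 300, "bytes_received": 30_000,  "status": 200},
}
print(rank_urls(metrics, top_n=2))  # -> ['/big.json', '/page']
```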

Example console output:

  zirdeli  –  Resilience & L7 assessment
  Target: https://www.example.com

  Crawl …
  Crawl done: 150 URLs.
  Amplification & rank …
  Ranked: 100 URLs.
  Cache: .zirdeli_data/www_example_com.json
  Stress: 60s, mode=resource_aware …
  Stress  60/60s  ETA 0s  |  Requests  12000  |  RPS  200  |  Errors  0
  Stress done: 12000 requests  |  200 RPS  |  0 errors
  Reports …
  Done: report_www_example_com_20260205_143022.html  |  report_www_example_com_20260205_143022.json

Example 2: Using a config file

Override defaults with a YAML file. Path can be absolute or relative.

python -m zirdeli https://www.example.com --config config.yaml

Minimal custom config (my_config.yaml):

crawler:
  max_depth: 3
  max_urls_per_domain: 500
stress:
  duration_sec: 30.0
  max_concurrent: 200
reporting:
  top_n_urls: 50
  output_dir: my_reports

Then run:

python -m zirdeli https://www.example.com -c my_config.yaml

Reports will be written to my_reports/, the crawl will be limited to 500 URLs and depth 3, the stress test will run for 30s with up to 200 concurrent connections, and only the top 50 URLs will be stressed.


Example 3: Crawl and report only (no stress)

Useful when you only want to discover URLs and see amplification/ranking, without generating load.

python -m zirdeli https://www.example.com --skip-stress

Use case: quick reconnaissance to see which endpoints have high amplification, how many same-domain URLs exist, and what the link map looks like. No stress metrics (RPS, errors, timeline) will appear in the report.


Example 4: Force a new crawl (ignore cache)

By default, if crawl data already exists for the domain, the tool prompts: Use existing data? [Y/n]. To always run a new crawl and skip the prompt (e.g. in scripts or after site changes), use --fresh:

python -m zirdeli https://www.example.com --fresh

Use case: You updated the site and want fresh discovery and metrics; or you run in CI and do not want interactive prompts.


Example 5: Stress modes

Default: resource_aware

  • Concurrency adapts to CPU/RAM (increase when CPU < threshold, decrease when RAM > threshold).
  • RPS cap and kill-switch (stops if local RAM gets too high) apply unless --aggressive is used.
python -m zirdeli https://www.example.com
# same as:
python -m zirdeli https://www.example.com --stress-mode resource_aware
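The resource-aware adaptation described above can be approximated with a small controller; the thresholds, step size, and bounds below are assumptions for illustration, not zirdeli's configured values:

```python
# Illustrative resource_aware concurrency controller.
# Thresholds, step size, and bounds are assumptions, not zirdeli's actual values.

def adjust_concurrency(current: int, cpu_pct: float, ram_pct: float,
                       cpu_threshold: float = 70.0, ram_threshold: float = 85.0,
                       step: int = 10, floor: int = 1, cap: int = 1000) -> int:
    """Increase workers while CPU is below threshold; back off when RAM is high."""
    if ram_pct > ram_threshold:      # RAM pressure: scale down first (kill-switch side)
        return max(floor, current - step)
    if cpu_pct < cpu_threshold:      # CPU headroom: scale up
        return min(cap, current + step)
    return current                   # within the band: hold steady

print(adjust_concurrency(100, cpu_pct=40.0, ram_pct=50.0))  # -> 110
print(adjust_concurrency(100, cpu_pct=90.0, ram_pct=90.0))  # -> 90
```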

max_traffic

  • Fixed high concurrency (up to max_concurrent_cap), no dynamic scaling. Use when you want maximum sustained load from the client.
python -m zirdeli https://www.example.com --stress-mode max_traffic

until_failure

  • Runs until the target starts failing: stops when error rate or consecutive failures reach the configured threshold (e.g. failure_threshold_ratio: 0.5, consecutive_failures_to_stop: 10). Good for finding “when does the service break?”.
python -m zirdeli https://www.example.com --stress-mode until_failure
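The stop condition can be sketched like this. The parameter names mirror the config keys mentioned above (failure_threshold_ratio, consecutive_failures_to_stop); the check itself is an assumption about how such logic could be written:

```python
# Illustrative until_failure stop check; the logic is a sketch, not zirdeli's code.

def should_stop(total: int, errors: int, consecutive_failures: int,
                failure_threshold_ratio: float = 0.5,
                consecutive_failures_to_stop: int = 10) -> bool:
    """Stop when the error rate or the consecutive-failure streak hits its threshold."""
    if consecutive_failures >= consecutive_failures_to_stop:
        return True
    return total > 0 and errors / total >= failure_threshold_ratio

print(should_stop(total=100, errors=60, consecutive_failures=3))  # -> True
print(should_stop(total=100, errors=10, consecutive_failures=3))  # -> False
```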

ramp_up

  • Increases RPS step by step (e.g. +100 RPS every 30s) until the target starts failing. The report includes the final ramp-up RPS and a breaking-point style summary.
python -m zirdeli https://www.example.com --stress-mode ramp_up
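The ramp schedule can be sketched as a simple arithmetic progression; the step size and window match the example above, but the helper itself is illustrative:

```python
# Illustrative ramp_up schedule: +100 RPS per 30 s window until failure.
# The helper and its defaults are a sketch, not zirdeli's actual implementation.

def ramp_schedule(start_rps: int = 100, step_rps: int = 100,
                  max_steps: int = 5) -> list[int]:
    """Target RPS for each successive 30 s window of the ramp."""
    return [start_rps + i * step_rps for i in range(max_steps)]

print(ramp_schedule())  # -> [100, 200, 300, 400, 500]
```

In a real run the loop would stop at the first window where the failure threshold is hit, and that window's RPS becomes the reported breaking point.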

Example 6: Aggressive mode (authorized use only)

Disables kill-switch and RPS cap; uses high concurrency (e.g. 5000+), shorter request timeout for fail-fast. Use only on authorized targets.

python -m zirdeli https://www.example.com --aggressive

Often combined with --stress-mode max_traffic or until_failure for maximum load. You can also set aggressive: true in config.yaml under stress:.


Example 7: Stress all discovered URLs

By default, only the top N ranked URLs (e.g. 100) are stressed. To stress every discovered URL (more endpoints, more traffic):

python -m zirdeli https://www.example.com --stress-all-urls

Use case: Capacity test across the whole site, not just the “riskiest” N URLs.


Example 8: Combining options

  • Fresh crawl + max_traffic + custom config:
python -m zirdeli https://www.example.com --fresh --stress-mode max_traffic --config config.yaml
  • Crawl only, custom report dir (via config):
# config.yaml
reporting:
  output_dir: reports/production
python -m zirdeli https://www.example.com --skip-stress -c config.yaml
  • Version:
python -m zirdeli --version
# or
python -m zirdeli -v
  • CI / non-interactive (no legal prompt):
python -m zirdeli https://www.example.com --accept-responsibility --fresh --skip-stress

Crawl cache

  • After a crawl, data is saved under reporting.cache_dir (default: .zirdeli_data) per domain, e.g. .zirdeli_data/www_example_com.json.
  • When you run the same site again without --fresh, the tool asks: Use existing data? [Y/n] — default Y (reuse cache and run stress/report). Answer n to run a new crawl.
  • Use --fresh to skip the prompt and always start a new crawl (e.g. in scripts).
  • In non-interactive environments (CI, cron), use --accept-responsibility so the legal prompt is skipped; the run will proceed without asking "Accept? [y/N]".
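The per-domain cache filename (e.g. https://www.example.com becoming .zirdeli_data/www_example_com.json) can be derived roughly like this; the exact sanitisation zirdeli applies may differ, so treat this as an assumption:

```python
# Illustrative derivation of the per-domain cache path.
# The real sanitisation rules may differ from this sketch.
from pathlib import Path
from urllib.parse import urlparse

def cache_path(url: str, cache_dir: str = ".zirdeli_data") -> Path:
    """Map a start URL to its per-domain crawl cache file."""
    host = urlparse(url).netloc.replace(".", "_").replace(":", "_")
    return Path(cache_dir) / f"{host}.json"

print(cache_path("https://www.example.com"))  # .zirdeli_data/www_example_com.json
```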

Docker

Build:

docker build -t zirdeli .

Run (basic):

docker run --rm zirdeli https://www.example.com

Run with resource limits (recommended in production):

docker run --rm --memory=1g --cpus=1 zirdeli https://www.example.com

Run with custom config (mount YAML):

docker run --rm -v "$(pwd)/config.yaml:/app/config.yaml" zirdeli https://www.example.com --config /app/config.yaml

Run with fresh crawl and skip stress:

docker run --rm zirdeli https://www.example.com --fresh --skip-stress

Reports in your current directory

By default the container runs as its own user; if you only mount ./reports, writes can fail with a Permission denied error. Run the container as your host user so the mounted directories are writable:

# From the directory where you want reports (e.g. ~/myproject)
mkdir -p reports .zirdeli_data
docker run --rm --user "$(id -u):$(id -g)" \
  -v "$(pwd)/reports:/app/reports" \
  -v "$(pwd)/.zirdeli_data:/app/.zirdeli_data" \
  zirdeli https://www.example.com --accept-responsibility
  • --user "$(id -u):$(id -g)": run as your host user so the process can write to the mounted reports (and cache .zirdeli_data).
  • -v "$(pwd)/reports:/app/reports": app writes to /app/reports, so files appear in ./reports on the host.
  • After the run, open ./reports/report_<domain>_<timestamp>.html.

Example: one-shot run with reports in current dir

mkdir -p reports .zirdeli_data
docker run --rm --user "$(id -u):$(id -g)" \
  -v "$(pwd)/reports:/app/reports" \
  -v "$(pwd)/.zirdeli_data:/app/.zirdeli_data" \
  zirdeli https://www.example.com --accept-responsibility
ls reports/

With custom config (config and reports in current dir):

mkdir -p reports .zirdeli_data
docker run --rm --user "$(id -u):$(id -g)" \
  -v "$(pwd)/config.yaml:/app/config.yaml" \
  -v "$(pwd)/reports:/app/reports" \
  -v "$(pwd)/.zirdeli_data:/app/.zirdeli_data" \
  zirdeli https://www.example.com --config /app/config.yaml --accept-responsibility

Makefile

The project includes a Makefile for quick setup and Docker runs with reports in your current directory.

  • make install: Create .venv and install the package (editable)
  • make install-dev: Install with dev dependencies (pytest, coverage, ruff)
  • make test: Run pytest
  • make test-cov: Run pytest with a coverage report
  • make docker-build: Build the Docker image zirdeli
  • make docker-run: Run with Docker; reports are written to ./reports in the current directory
  • make docker-run-config: Same as docker-run but also mounts ./config.yaml
  • make clean: Remove .venv, __pycache__, .pytest_cache

Reports in current directory via Makefile:

# From the directory where you want reports
make docker-build
make docker-run URL=https://www.example.com
# Reports appear in ./reports/

Optional: make docker-run URL=https://example.com SKIP_STRESS=1 (crawl only, no stress). Use make docker-run-config if you have a config.yaml in the current directory.


Testing

Create a virtual environment at the project root and install the package with dev dependencies: python -m venv .venv, then .venv/bin/pip install -e ".[dev]". Then:

# Run all tests (use project venv: .venv/bin/python -m pytest)
python -m pytest tests/ -v

# With coverage (install dev deps: pip install -e ".[dev]")
python -m pytest tests/ -v --cov=src/zirdeli --cov-report=term-missing

Config

All limits are configurable via config.yaml or environment variables (ZIRDELI_CRAWLER_*, ZIRDELI_STRESS_*, ZIRDELI_REPORT_*, etc.). See config.yaml in the repo for defaults.

Example env overrides:

export ZIRDELI_STRESS_DURATION_SEC=120
export ZIRDELI_REPORT_TOP_N_URLS=50
python -m zirdeli https://www.example.com
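The precedence (environment variable over YAML value over built-in default) can be sketched as below; the mapping from env name to config key is an assumption for this illustration:

```python
# Illustrative config precedence: a ZIRDELI_* env var overrides the YAML value.
# The env-name-to-key mapping is an assumption, not zirdeli's actual loader.
import os

def effective_value(env_name: str, yaml_value, cast=float):
    """Return the env override if set, otherwise the YAML (or default) value."""
    raw = os.environ.get(env_name)
    return cast(raw) if raw is not None else yaml_value

os.environ["ZIRDELI_STRESS_DURATION_SEC"] = "120"
duration = effective_value("ZIRDELI_STRESS_DURATION_SEC", 60.0)          # env wins
top_n = effective_value("ZIRDELI_REPORT_TOP_N_URLS", 100, cast=int)      # env unset
print(duration, top_n)  # -> 120.0 100
```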

Output

  • reports/report_<target>_<YYYYMMDD_HHMMSS>.html – Interactive HTML report (target domain + timestamp; each scan produces a unique file). Contains link map, top risky URLs, amplification charts, resource timeline, stress metrics, and resilience notes.
  • reports/report_<target>_<YYYYMMDD_HHMMSS>.json – Machine-readable JSON (same basename) for the same data.

Architecture

  • Crawler: Same-domain only, URL canonicalization, param flood and loop protection, optional robots.txt, sitemaps and common path probing.
  • Amplification: amplification_ratio = bytes_received / bytes_sent per URL.
  • Scoring: Top N URLs by amplification, response size, stable 200 OK, dynamic content potential.
  • Stress: Multiple modes; worker count independent of URL count for maximum traffic.
    • resource_aware (default): Dynamic concurrency by CPU/RAM, max RPS, kill-switch, random headers.
    • max_traffic: Full concurrency (max_concurrent_cap up to 20k); aggressive = turbo (min 5000 concurrent), no RPS cap, 15s fail-fast timeout.
    • until_failure: Run until error rate or consecutive failures hit threshold (target down).
    • ramp_up: Increase RPS step by step until target starts failing.
    • When the crawl discovers few or no URLs, stress still runs on the target URL (homepage) only, so the test is not skipped.
  • Reporting: HTML + JSON with domain link map, top risky URLs, resource timeline, stress mode and stopped reason.
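The same-domain restriction and URL canonicalization mentioned in the Crawler bullet can be sketched with the standard library. zirdeli's actual rules (subdomain handling, param flood limits, loop protection) may differ; this is a minimal sketch:

```python
# Illustrative same-domain check + URL canonicalisation.
# A minimal sketch; zirdeli's real crawler rules may differ.
from urllib.parse import urlparse, urlunparse

def canonicalize(url: str) -> str:
    """Lowercase scheme/host, drop the fragment, strip a trailing slash from the path."""
    p = urlparse(url)
    path = p.path.rstrip("/") or "/"
    return urlunparse((p.scheme.lower(), p.netloc.lower(), path, "", p.query, ""))

def same_domain(url: str, start_url: str) -> bool:
    """Keep only URLs on exactly the same host as the start URL."""
    return urlparse(url).netloc.lower() == urlparse(start_url).netloc.lower()

start = "https://www.example.com"
print(canonicalize("HTTPS://WWW.Example.com/About/#team"))  # https://www.example.com/About
print(same_domain("https://cdn.example.com/x", start))       # False
```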

License

This project is licensed under the GNU General Public License v3.0 or later (GPL-3.0-or-later). You may use, modify, and distribute the software under the terms of the GPL. See the LICENSE file for the full text.
