Make the Brev CLI idiomatic, programmable, and agent-friendly. Users and AI agents should be able to compose commands using standard Unix patterns (|, grep, awk, jq) while also having structured output options for programmatic access.
Coding agents (Claude Code, Cursor, Cline, Aider, OpenCode, Clawdbot) are becoming the primary interface between developers and their tools. These agents prefer CLIs over APIs:
- Text-native - LLMs think in text; pipes and grep are natural
- Self-documenting - `--help` and tab completion beat reading API docs
- Composable - Chain steps: `brev search | brev create | brev exec "setup.sh"`
- Learned from training - Agents already know Unix conventions from GitHub/Stack Overflow
Most GPU clouds have dashboards and APIs, but weak CLIs. A composable Brev CLI becomes the default for autonomous GPU workflows.
- Unix Idiomatic - Commands work naturally with pipes and standard tools
- Programmable - JSON output mode for all commands that return data
- Agentic - Claude Code skills can orchestrate complex workflows
- Composable - Output of one command feeds into input of another
- Commands detect when stdout is piped (`os.Stdout.Stat()`)
- Piped output: clean table format (no colors, no help text)
- Interactive output: colored, with contextual help
- Commands accept arguments directly OR from stdin
- Stdin is read line-by-line when piped
- First column of table input is parsed as the primary identifier
| Mode | Trigger | Format |
|---|---|---|
| Interactive | TTY | Colored table + help text |
| Piped | `cmd \| ...` | Plain table (greppable) |
| JSON | `--json` | Structured JSON array |
- Filter flags (e.g., `--min-disk`) should propagate through pipes
- Table output includes computed fields (e.g., `TARGET_DISK`)
- JSON output includes all relevant fields
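One way flag propagation could work: a downstream command locates a computed column such as `TARGET_DISK` by name in the piped header and honors its values. A hedged sketch, with hypothetical instance-type names and header layout:

```go
package main

import (
	"fmt"
	"strings"
)

// columnValues extracts a named column from piped table output, so a
// computed field like TARGET_DISK (set by `brev search --min-disk`)
// can be picked up by `brev create` downstream.
func columnValues(table, column string) []string {
	lines := strings.Split(strings.TrimSpace(table), "\n")
	if len(lines) == 0 {
		return nil
	}
	// Find the column's position in the header row.
	idx := -1
	for i, h := range strings.Fields(lines[0]) {
		if h == column {
			idx = i
		}
	}
	if idx < 0 {
		return nil
	}
	var vals []string
	for _, line := range lines[1:] {
		fields := strings.Fields(line)
		if idx < len(fields) {
			vals = append(vals, fields[idx])
		}
	}
	return vals
}

func main() {
	table := "TYPE        TARGET_DISK\nh100-80gb   500GB\na100-40gb   512GB\n"
	fmt.Println(columnValues(table, "TARGET_DISK")) // → [500GB 512GB]
}
```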
| Command | Stdin | Stdout (piped) | Status |
|---|---|---|---|
| `brev ls` | - | Plain table | ✅ |
| `brev ls orgs` | - | Plain table | ✅ |
| `brev search` | - | Plain table w/ TARGET_DISK | ✅ |
| `brev stop` | Instance names | Instance names | ✅ |
| `brev start` | Instance names | Instance names | ✅ |
| `brev delete` | Instance names | Instance names | ✅ |
| `brev create` | Instance types (table or JSON) | Instance names | ✅ |
| `brev shell` | - | - (interactive) | ✅ |
| `brev exec` | Instance names | Command stdout/stderr | ✅ |
| `brev open` | Instance names | Instance names | ✅ |
Non-interactive command execution for scripted and agentic workflows.
Run commands directly:

```shell
brev exec my-gpu "nvidia-smi"
brev exec my-gpu "python train.py && echo done"
```

Run local scripts remotely (`@filepath` syntax):

```shell
brev exec my-gpu @setup.sh           # Runs local setup.sh on remote
brev exec my-gpu @scripts/deploy.sh  # Relative paths supported
```

Multi-instance support:

```shell
# Run on multiple instances
brev exec gpu-1 gpu-2 gpu-3 "nvidia-smi"

# Pipe from create
brev create my-cluster --count 3 | brev exec "nvidia-smi"

# Chain with other commands
brev ls | grep RUNNING | brev exec "df -h"
```

Output for chaining: outputs instance names after execution completes, enabling pipelines:

```shell
brev create my-gpu | brev exec "pip install torch" | brev exec "python train.py"
```

Interactive SSH session to an instance. Use `brev exec` for non-interactive commands.
```shell
brev shell my-gpu                 # Interactive shell
brev shell $(brev create my-gpu)  # Create and connect
brev shell my-gpu --host          # SSH to host instead of container
```

Open instances in editors/terminals with multi-instance and cross-platform support.
Editor options:
```shell
brev open my-gpu vscode    # VS Code (default)
brev open my-gpu cursor    # Cursor
brev open my-gpu vim       # Vim over SSH
brev open my-gpu terminal  # Terminal/SSH session
brev open my-gpu tmux      # Tmux session
```

Multi-instance support:

```shell
# Open multiple instances (each in a separate window)
brev open gpu-1 gpu-2 gpu-3 cursor

# Pipe from create
brev create my-cluster --count 3 | brev open cursor
```

Output for chaining: outputs instance names when piped, enabling pipelines:

```shell
# Create, open in editor, then run setup
brev create my-gpu | brev open cursor | brev exec "pip install -r requirements.txt"
```

Cross-platform support:
- macOS: Terminal.app, iTerm2
- Linux: Default terminal emulator
- Windows/WSL: Fixed exec format errors
```shell
brev search --gpu-name H100      # Filter by GPU
brev search --min-vram 40        # Min VRAM per GPU
brev search --min-total-vram 80  # Min total VRAM
brev search --min-disk 500       # Min disk size (GB)
brev search --max-boot-time 5    # Max boot time (minutes)
brev search --stoppable          # Can stop/restart
brev search --rebootable         # Can reboot
brev search --flex-ports         # Configurable firewall
```

```shell
brev ls --json
brev ls orgs --json
brev search --json
```

```shell
# Find stoppable H100s with 500GB disk, create first match
brev search --min-disk 500 --stoppable | grep H100 | head -1 | brev create --name my-gpu

# Stop all running instances
brev ls | grep RUNNING | awk '{print $1}' | brev stop

# Delete all stopped instances
brev ls | grep STOPPED | awk '{print $1}' | brev delete

# Create, use, cleanup
brev search --gpu-name A100 | head -1 | brev create --name job-1 | brev exec "python train.py" && brev delete job-1

# Get cheapest H100 with jq
brev search --json | jq '[.[] | select(.gpu_name == "H100")] | sort_by(.price_per_hour) | .[0]'
```

The composable CLI is necessary but not sufficient for agentic use. Skills bridge the gap between:
- Raw CLI - Powerful but requires knowing exact flags and syntax
- Natural Language - How users actually describe intent
Without skills, an agent must:
- Know that `--min-total-vram` exists (not `--vram`, `--gpu-memory`, etc.)
- Remember flag combinations for common tasks
- Handle error messages and retry logic
- Understand which commands can be piped together
Skills encode this domain knowledge, turning "spin up a cheap GPU for testing" into the correct `brev search --stoppable --sort price | head -1 | brev create` pipeline.
The /brev-cli skill provides:
Natural Language → CLI Translation
- "Create an A100 instance for ML training" → selects appropriate flags
- "Find GPUs with 40GB VRAM under $2/hr" → `--min-total-vram 40` + price filter
- "Stop all my running instances" → `brev ls | grep RUNNING | ... | brev stop`
Context-Aware Defaults
- Knows common GPU requirements for ML workloads
- Suggests `--stoppable` for dev instances (cost savings)
- Recommends disk sizes based on use case
Error Recovery
- Retries with fallback instance types on capacity errors
- Suggests alternatives when requested GPU unavailable
- Handles "instance already exists" gracefully
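The fallback-on-capacity behavior might look like this in Go; the instance-type names are hypothetical and the injected `create` function stands in for a real `brev create` call so the sketch stays self-contained:

```go
package main

import (
	"errors"
	"fmt"
)

var errCapacity = errors.New("no capacity")

// createWithFallback tries each candidate instance type in order,
// moving to the next only on a capacity error — the retry behavior
// a skill would encode. Other errors are surfaced immediately.
func createWithFallback(types []string, create func(t string) error) (string, error) {
	for _, t := range types {
		err := create(t)
		if err == nil {
			return t, nil
		}
		if !errors.Is(err, errCapacity) {
			return "", err // non-capacity errors are not retried
		}
	}
	return "", fmt.Errorf("no capacity for any of %v", types)
}

func main() {
	// Simulated cloud: H100s are sold out, A100s are available.
	available := map[string]bool{"a100-80gb": true}
	got, err := createWithFallback([]string{"h100-80gb", "a100-80gb"}, func(t string) error {
		if !available[t] {
			return errCapacity
		}
		return nil
	})
	fmt.Println(got, err) // → a100-80gb <nil>
}
```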
Workflow Orchestration
- Multi-step operations (create → wait → execute → cleanup)
- Monitors instance health during long-running jobs
- Streams logs and captures results
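A skill packaging this knowledge might look roughly like the following sketch; the file layout, frontmatter fields, and instructions are illustrative, not a spec:

```markdown
---
name: brev-cli
description: Provision and manage Brev GPU instances from natural-language requests
---

When the user asks for a "cheap" or "dev" GPU, prefer stoppable instances:

    brev search --stoppable | head -1 | brev create --name <name>

VRAM requests map to --min-total-vram (not --vram). On capacity errors,
re-run brev search without the GPU-name filter and offer alternatives.
```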
With composable CLI + skills, agents can autonomously:
- Provision - Search, filter, and create instances matching workload requirements
- Deploy - Stream code/data to instances via pipeable `cp`
- Execute - Run workloads via `brev exec`, capture output
- Monitor - Poll status via `brev ls --json`, stream logs
- Scale - Spin up parallel instances, distribute work
- Cleanup - Stop/delete instances, manage costs
User: "Train my model on an H100, save checkpoints every hour"
Agent:
1. `brev search --gpu-name H100 --stoppable --min-disk 500 | head -1 | brev create --name training-job`
2. `brev wait training-job --state ready`
3. `tar czf - ./src | brev cp - training-job:/app/`
4. `brev exec training-job "cd /app && python train.py --checkpoint-interval 3600"`
5. `brev cp training-job:/app/checkpoints - | tar xzf - -C ./results/`
6. `brev delete training-job`
The skill handles the translation, error recovery, and orchestration—the composable CLI makes each step possible.
```shell
brev logs my-gpu               # Follow logs
brev logs my-gpu --since 5m    # Last 5 minutes
brev logs my-gpu | grep ERROR  # Filter logs
```

```shell
brev create --name my-gpu ... && brev wait my-gpu --state ready
brev stop my-gpu && brev wait my-gpu --state stopped
```

Stream data directly through stdin/stdout without intermediate files. Uses `-` to indicate stdin/stdout (standard Unix convention).
Current behavior (requires temp files):
```shell
brev cp local.tar.gz my-gpu:/data/
brev cp my-gpu:/results/output.csv ./output.csv
```

Proposed pipeable behavior:

```shell
# Stream archive directly to instance
tar czf - ./data | brev cp - my-gpu:/data/archive.tar.gz

# Pipe file content to instance
cat model.pt | brev cp - my-gpu:/models/model.pt

# Stream from instance and process locally
brev cp my-gpu:/results/output.csv - | grep "success" > filtered.csv

# Transfer between instances without local storage
brev cp gpu-1:/checkpoint.pt - | brev cp - gpu-2:/checkpoint.pt
```

Agentic use cases:

```shell
# Agent streams training data, captures results
cat dataset.jsonl | brev exec my-gpu "python train.py" > results.log

# Agent deploys code without temp files
tar czf - ./src | brev cp - my-gpu:/app/src.tar.gz

# Agent extracts specific results
brev cp my-gpu:/logs/metrics.json - | jq '.accuracy'
```