Website: https://www.cassachange.com
Purpose-built CQL migration tool for Apache Cassandra, DataStax AstraDB, ScyllaDB, Azure Managed Cassandra, and Amazon Keyspaces
Versioned scripts, rollback, distributed locking, multi-keyspace deploys, environment profiles, and native AstraDB auth — no JVM, no XML changelogs, no compromises.
pip install cassachange

$ cassachange deploy --profile dev --tag release-2.1.0
$ cassachange deploy --profile prod --tag release-2.1.0
[lock] acquired global (host:a4f3b1)
[RUN] V1.0.0__create_users.cql 42ms
[RUN] V1.1.0__add_orders.cql 28ms
[SKIP] V1.2.0__add_profiles.cql already applied
[RUN] R__users_by_email.cql checksum changed
[RUN] A__refresh_perms.cql always
[notify] Slack → deploy_success
[lock] released
✓ myapp_prod: run 3 | skip 1 | errors 0 | tag release-2.1.0
- Requirements
- Installation
- Quick Start
- cassachange.yml Reference
- Script Types
- Folder Structure
- Deploy Protocol
- Commands
- Config Profiles
- Connection Modes
- Environment Variables
- Distributed Locking
- Release Tagging
- Dry Run
- Notifications
- CQL Linter
- Baseline Introspection
- Repair
- Multi-Keyspace Deploy
- GitHub Actions CI/CD
- Keyspace Management
- History Tables Reference
- Comparison with Other Tools
- Python 3.8+
- Apache Cassandra 3.x / 4.x / 5.x, DataStax AstraDB, ScyllaDB, Azure Managed Cassandra, Amazon Keyspaces, or any managed Cassandra service
cassandra-driver >= 3.25
pyyaml >= 6.0
| Provider | Install extra | Packages |
|---|---|---|
pip install cassachange

# Community
pip install cassachange-1.0.0-py3-none-any.whl
### From source
```bash
cd cassachange/
pip install -e .
cassachange --help
pip show cassachange
```

Step 1 — Create cassachange.yml in your project root:
keyspace: myapp
history_keyspace: myapp_migrations
root_folder: ./migrations

history_keyspace is required. It stores the change_history and deploy_lock tables. It must exist before the first deploy — create it via Terraform or cqlsh. cassachange never creates keyspaces.
Step 2 — Write your first migration:
mkdir -p migrations

-- migrations/V1.0.0__create_users_table.cql
CREATE TABLE IF NOT EXISTS myapp.users (
    id uuid PRIMARY KEY,
    email text,
    name text,
    created_at timestamp
);
CREATE INDEX IF NOT EXISTS ON myapp.users (email);

Step 3 — Validate scripts offline:
cassachange validate

No Cassandra connection needed. Catches naming errors, duplicate versions, CQL syntax problems.
Step 4 — Deploy:
cassachange deploy

Step 5 — Check status:
cassachange status

VERSION  KEYSPACE  SCRIPT                          STATUS   INSTALLED_ON
1.0.0 myapp V1.0.0__create_users_table.cql SUCCESS 2024-03-15 09:12:00
Full annotated configuration:
# ─── Connection: Standard Cassandra ────────────────────────────────────────
hosts:
- 10.0.0.1
- 10.0.0.2
- 10.0.0.3
port: 9042
username: cassandra
password: secret # use env var CASSANDRA_PASSWORD in practice
# ─── Connection: AstraDB ───────────────────────────────────────────────────
# Use environment variables for AstraDB credentials — do not commit them.
# ASTRA_SECURE_CONNECT_BUNDLE=/path/to/secure-connect.zip
# ASTRA_TOKEN=AstraCS:xxxx...
#
# Or set in YAML (not recommended for prod):
# secure_connect_bundle: /path/to/secure-connect.zip
# astra_token: AstraCS:xxxx...
# ─── Keyspaces ─────────────────────────────────────────────────────────────
keyspace: myapp # single target keyspace
# keyspaces: # or a list for multi-keyspace deploy
# - myapp
# - orders
# - analytics
history_keyspace: myapp_migrations # REQUIRED — no default
history_table: change_history # optional — default: change_history
# ─── Scripts ───────────────────────────────────────────────────────────────
root_folder: ./migrations # default: ./migrations
# ─── Behaviour ─────────────────────────────────────────────────────────────
timeout: null # per-CQL-statement timeout in seconds. null = driver default (~10s)
verbose: false
# ─── Notifications ─────────────────────────────────────────────────────────
notifications:
  on_events:
    - deploy_success
    - deploy_failed
    - script_failed
    - rollback_success
    - rollback_failed
  channels:
    - type: slack
      webhook_url_env: SLACK_WEBHOOK_URL   # env var name, not the URL
    - type: teams
      webhook_url_env: TEAMS_WEBHOOK_URL
    - type: webhook
      url: https://ops.example.com/hook    # generic HTTP POST (JSON body)
# ─── Profiles ──────────────────────────────────────────────────────────────
# Each profile deep-merges over the base config.
# Only keys you specify in the profile override the base.
profiles:
  dev:
    hosts: [127.0.0.1]
    username: cassandra
    password: cassandra
    keyspace: myapp_dev
    history_keyspace: myapp_migrations_dev
  staging:
    hosts: [staging-cass.internal]
    keyspace: myapp_staging
    history_keyspace: myapp_migrations_staging
    timeout: 60
  prod:
    hosts: [cass1.prod, cass2.prod, cass3.prod]
    keyspace: myapp_prod
    history_keyspace: myapp_migrations_prod
    timeout: 120
    notifications:
      on_events: [deploy_success, deploy_failed, script_failed]
      channels:
        - type: slack
          webhook_url_env: SLACK_WEBHOOK_URL
The filename is the config. No XML. No YAML changelogs. Just well-named .cql files in whatever folder structure you choose.
Runs once, in strict semver order, globally across all subdirectories. Once applied it is permanently recorded in change_history and never re-runs.
V{version}__{description}.cql
V1.0.0__create_users_table.cql
V1.1.0__add_orders_table.cql
V2.0.0__refactor_payments_schema.cql
Version numbers support dots or underscores: V1_2_0 and V1.2.0 are equivalent.
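That equivalence can be illustrated with a small parsing sketch (illustrative only, not cassachange internals): normalize the version segment to a tuple of integers, so the underscore and dot forms compare equal and numeric ordering is used instead of string ordering.

```python
# Illustrative sketch: map "V1_2_0__x.cql" and "V1.2.0__x.cql" to the same
# sortable tuple (1, 2, 0). Function name is hypothetical.
import re

def version_key(filename):
    m = re.match(r"^V([0-9][0-9._]*)__", filename)
    if not m:
        raise ValueError("not a versioned migration: " + filename)
    return tuple(int(part) for part in re.split(r"[._]", m.group(1)))

assert version_key("V1_2_0__a.cql") == version_key("V1.2.0__a.cql")
print(sorted(["V1.10.0__b.cql", "V1_2_0__a.cql", "V1.9.0__c.cql"], key=version_key))
# → ['V1_2_0__a.cql', 'V1.9.0__c.cql', 'V1.10.0__b.cql']
```

Note that numeric ordering places 1.10.0 after 1.9.0, which a naive string sort would get wrong.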
Paired rollback script for a versioned migration. Only executes on cassachange rollback. The version must exactly match its V__ counterpart.
U{version}__{description}.cql
U1.1.0__add_orders_table.cql ← paired with V1.1.0__add_orders_table.cql
Reruns on every deploy where its MD5 checksum has changed since last apply. Unchanged = skipped. Use for UDFs, materialized views, and lookup table reloads.
R__{description}.cql
R__users_by_username.cql
R__orders_by_status_view.cql
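The skip-or-rerun decision for R__ scripts can be sketched as follows. This is a sketch of the documented behaviour, not cassachange's actual code; whether the tool hashes raw bytes or normalized text is an assumption here.

```python
# Illustrative sketch: rerun an R__ script only when its MD5 differs from the
# checksum recorded in change_history (or when there is no history row yet).
import hashlib

def md5_checksum(content):
    return hashlib.md5(content).hexdigest()

def should_rerun(content, recorded_checksum):
    # No history row (never applied) or content drift → rerun
    return recorded_checksum is None or md5_checksum(content) != recorded_checksum

view = b"CREATE MATERIALIZED VIEW IF NOT EXISTS ..."
recorded = md5_checksum(view)
print(should_rerun(view, None))            # → True: never applied
print(should_rerun(view, recorded))        # → False: unchanged, skip
print(should_rerun(view + b" ", recorded)) # → True: checksum changed
```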
Executes on every single deploy, unconditionally. No checksum check, no history lookup. Use for GRANT statements and permission refreshes that must always be current regardless of whether schema has changed.
A__{description}.cql
A__refresh_permissions.cql
A__grant_service_account_roles.cql
| Script type | deploy | rollback |
|---|---|---|
| V__ versioned | ✓ pending only | ✓ via paired U__ |
| U__ undo | — | ✓ |
| R__ repeatable | ✓ if checksum changed | — |
| A__ always | ✓ unconditionally | — |
Scripts are discovered recursively. Version ordering is always global — folder names have no effect on execution order.
By module:
migrations/
users/
V1.0.0__create_users_table.cql
V1.2.0__add_profile_fields.cql
U1.2.0__add_profile_fields.cql
R__users_by_username.cql
orders/
V1.1.0__add_orders_table.cql
V1.3.0__add_order_status.cql
U1.1.0__add_orders_table.cql
shared/
A__refresh_permissions.cql
By release:
migrations/
release-1.0/
V1.0.0__initial_schema.cql
release-1.1/
V1.1.0__add_orders.cql
U1.1.0__add_orders.cql
release-2.0/
V2.0.0__new_payments_schema.cql
U2.0.0__new_payments_schema.cql
In both layouts the global execution order is identical:
V1.0.0 → V1.1.0 → V1.2.0 → V1.3.0 → V2.0.0
Duplicate version numbers across subdirectories are caught by cassachange validate before any connection is made.
Every cassachange deploy follows a deterministic 9-step sequence:
| Step | Action | Notes |
|---|---|---|
| 01 | Validate keyspaces | All target keyspaces + history keyspace must exist. Exits on any missing. |
| 02 | Acquire deploy lock | INSERT IF NOT EXISTS (LWT/Paxos). Atomic at cluster level. |
| 03 | Discover scripts | Recursive walk of root_folder. Classify by prefix. Sort V__ globally by semver. |
| 04 | Read history | Single query against change_history to build applied set + checksums. |
| 05 | Run V__ scripts | Apply pending versions in ascending semver order. Skip already-applied. |
| 06 | Run R__ scripts | Rerun repeatable scripts whose MD5 checksum has changed. Skip unchanged. |
| 07 | Run A__ scripts | Execute all always-scripts unconditionally. |
| 08 | Record history | Write SUCCESS or FAILED row per script with checksum, tag, run_id, elapsed ms. |
| 09 | Release lock | DELETE IF run_id = ... (LWT). Only this process can release its own lock. |
cassachange never creates keyspaces. Keyspace provisioning is an infrastructure concern — use Terraform, cqlsh, or your admin UI.
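Steps 03 to 05 above can be sketched in miniature: sort discovered V__ scripts globally by parsed version, then keep only those not in the applied set read from change_history. The helper names are illustrative, not cassachange internals.

```python
# Illustrative sketch of steps 03–05: global semver ordering plus skip-applied.
import re

def parse_version(filename):
    # "V1.2.0__x.cql" or "V1_2_0__x.cql" → (1, 2, 0)
    return tuple(int(p) for p in re.split(r"[._]", re.match(r"^V([0-9][0-9._]*)__", filename).group(1)))

def pending_versioned(discovered, applied_versions):
    ordered = sorted(discovered, key=parse_version)   # step 03: global order
    # step 05: skip versions already recorded in change_history (read in step 04)
    return [s for s in ordered if parse_version(s) not in applied_versions]

discovered = ["V1.1.0__add_orders.cql", "V1.0.0__create_users.cql", "V1.2.0__add_profiles.cql"]
applied = {(1, 0, 0), (1, 1, 0)}   # built from the single history query in step 04
print(pending_versioned(discovered, applied))
# → ['V1.2.0__add_profiles.cql']
```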
All commands accept the same connection flags. These can also be set via environment variables or cassachange.yml.
--config, -c Path to cassachange.yml (default: ./cassachange.yml)
--profile Named profile from cassachange.yml
--hosts Comma-separated Cassandra contact points
--port Port (default: 9042)
--username, -u Cassandra username
--password, -p Cassandra password
--astra-token AstraDB application token (AstraCS:...)
--secure-connect-bundle Path to AstraDB SCB .zip file
--keyspace, -k Target keyspace (overrides cassachange.yml)
--keyspaces Comma-separated list of target keyspaces
--history-keyspace Keyspace for cassachange internal tables
--history-table Table name (default: change_history)
--root-folder Migration scripts folder
--timeout Per-CQL-statement timeout in seconds
--verbose, -v Debug logging
Apply all pending migrations. Acquires distributed lock, runs pending V__ scripts, changed R__ scripts, all A__ scripts, then releases lock.
# Basic deploy
cassachange deploy
# With profile and release tag
cassachange deploy --profile prod --tag release-2.1.0
# Single keyspace override
cassachange deploy --profile prod --keyspace myapp_prod
# Multiple keyspaces override
cassachange deploy --profile prod --keyspaces myapp_prod,orders_prod,analytics_prod
# Dry run — no lock, no DB writes, preview only
cassachange deploy --profile prod --dry-run
# Dry run with JSON output artifact
cassachange deploy --profile prod --tag release-2.1.0 --dry-run-output plan.json
# With explicit per-statement timeout
cassachange deploy --profile prod --timeout 120

Roll back versioned migrations using paired U__ undo scripts. Writes ROLLED_BACK sentinel rows to change_history — rolled-back versions can be re-applied on the next deploy.
# Roll back the single latest applied version
cassachange rollback --profile prod
# Roll back everything above a specific version (exclusive)
cassachange rollback --profile prod --target-version 1.1.0
# Roll back every version that was deployed under a specific tag
cassachange rollback --profile prod --tag release-2.1.0
# Dry run rollback — shows what would be undone
cassachange rollback --profile prod --tag release-2.1.0 --dry-run

Rollback executes U__ scripts in reverse semver order. If V2.0.0 and V1.2.0 were deployed under release-2.1.0, rollback runs U2.0.0 first, then U1.2.0.
Lint all scripts without connecting to Cassandra. Zero-cost — run on every PR.
cassachange validate
# Custom folder
cassachange validate --root-folder ./db/migrations

Catches: bad filenames, duplicate version numbers, orphaned U__ scripts (no matching V__), empty scripts, CQL syntax errors (see CQL Linter).
ERRORS:
ERR CQL syntax error in V1.2.0__add_login.cql (line ~3):
Unknown ALTER TABLE sub-command 'MDDADD' (did you mean 'ADD'?).
Valid: ['ADD', 'ALTER', 'DROP', 'RENAME', 'WITH']
→ ALTER TABLE users MDDADD last_login timestamp
Validated 8 script(s) | 0 warning(s) | 1 error(s)
Validation FAILED.
Display migration history from change_history.
cassachange status --profile prod
# Filter to a specific keyspace
cassachange status --profile prod --keyspace myapp_prod
# Filter to a specific release tag
cassachange status --profile prod --tag release-2.1.0

Output columns: VERSION KEYSPACE TAG SCRIPT STATUS INSTALLED_BY INSTALLED_ON EXEC_MS
Status values: SUCCESS, FAILED, ROLLED_BACK, REPAIRED
Recover from a failed deploy without touching your data. Operates only on change_history and deploy_lock.
# Inspect current state — no changes made
cassachange repair --profile prod --list
# Mark all FAILED scripts in a keyspace for retry
cassachange repair --profile prod --keyspace myapp_prod
# Mark a specific script for retry
cassachange repair --profile prod --script V1.2.0__add_index.cql
# Force-release a stuck deploy lock
# Only use this after confirming no deploy is actually running
cassachange repair --profile prod --release-lock

After repair, run cassachange deploy to retry the marked scripts. The original FAILED row is never deleted — a REPAIRED sentinel row is inserted alongside it preserving the full audit chain.
Introspect a live keyspace and generate a starter migration file. Captures all tables, UDTs, indexes, UDFs, and UDAs using IF NOT EXISTS — safe to re-run on a keyspace that already has those objects.
# Generate with default version (0.0.0)
cassachange baseline --profile prod --keyspace myapp
# Custom version and output directory
cassachange baseline \
--profile prod \
--keyspace myapp \
--baseline-version 1.0.0 \
--output ./migrations/baseline
# Generates: V1.0.0__baseline_myapp.cql

Profiles let one cassachange.yml serve all environments. Each profile deep-merges over the base config — only the keys you specify in the profile override the base.
# cassachange.yml
# Base config — applies to all profiles unless overridden
history_table: change_history
root_folder: ./migrations
timeout: null
profiles:
  dev:
    hosts: [127.0.0.1]
    port: 9042
    username: cassandra
    password: cassandra
    keyspace: myapp_dev
    history_keyspace: myapp_migrations_dev
  staging:
    hosts: [staging-node-1.internal, staging-node-2.internal]
    username: app_staging
    keyspace: myapp_staging
    history_keyspace: myapp_migrations_staging
    timeout: 60
    notifications:
      on_events: [deploy_failed, script_failed]
      channels:
        - type: slack
          webhook_url_env: SLACK_WEBHOOK_URL
  prod:
    hosts: [cass1.prod, cass2.prod, cass3.prod]
    username: app_prod
    keyspaces:
      - myapp_prod
      - orders_prod
      - analytics_prod
    history_keyspace: myapp_migrations_prod
    timeout: 120
    notifications:
      on_events: [deploy_success, deploy_failed, script_failed]
      channels:
        - type: slack
          webhook_url_env: SLACK_WEBHOOK_URL
        - type: teams
          webhook_url_env: TEAMS_WEBHOOK_URL

Selecting a profile:
# CLI flag
cassachange deploy --profile prod
# Environment variable (preferred for CI)
export CASSACHANGE_PROFILE=prod
cassachange deploy
# CLI flag takes precedence over env var
cassachange deploy --profile staging

Config priority (highest → lowest):
CLI flags → Environment variables → Profile (profiles.{name}.*) → YAML base → Defaults
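That chain can be sketched as a layered deep-merge, applied lowest priority first so later layers win. A sketch of the documented behaviour with hypothetical helper names, not cassachange's actual code.

```python
# Illustrative sketch: deep-merge config layers in priority order.
# deep_merge and resolve are hypothetical names, not cassachange internals.
def deep_merge(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)   # recurse into nested maps
        else:
            merged[key] = value                            # scalars/lists: override wins
    return merged

def resolve(defaults, yaml_base, profile, env_vars, cli_flags):
    config = defaults
    for layer in (yaml_base, profile, env_vars, cli_flags):  # lowest → highest
        config = deep_merge(config, layer)
    return config

base = {"timeout": None, "keyspace": "myapp", "notifications": {"on_events": ["deploy_failed"]}}
prod = {"keyspace": "myapp_prod", "timeout": 120}
print(resolve({"port": 9042}, base, prod, {}, {"timeout": 60}))
# → {'port': 9042, 'timeout': 60, 'keyspace': 'myapp_prod', 'notifications': {'on_events': ['deploy_failed']}}
```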
Connection mode is auto-detected from config. No mode flag, no manual switching.
# cassachange.yml
hosts:
- 10.0.0.1
- 10.0.0.2
port: 9042
username: cassandra
password: secret
keyspace: myapp
history_keyspace: myapp_migrations
# Optional SSL
# ssl: true
# ssl_cafile: /path/to/ca.crt
# ssl_certfile: /path/to/client.crt
# ssl_keyfile: /path/to/client.key

# cassachange.yml — non-secret config only
keyspace: myapp
history_keyspace: myapp_migrations
root_folder: ./migrations

# Credentials via env vars — never commit to cassachange.yml
export ASTRA_SECURE_CONNECT_BUNDLE=/path/to/secure-connect-mydb.zip
export ASTRA_TOKEN=AstraCS:xxxxxxxxxxxxxxxx...
cassachange deploy

AstraDB mode activates when both secure_connect_bundle and astra_token are set (from any source). Protocol v4 is pinned automatically — no deprecation warnings.
cassandra-driver connects natively. Config is identical to Standard Cassandra — no extra driver, no plugin.
hosts:
- scylla-node-1.internal
- scylla-node-2.internal
port: 9042
username: app_user
password: secret
keyspace: myapp
history_keyspace: myapp_migrations

LWT note: ScyllaDB 5.2+ provides production-grade LWT. Pre-5.2 clusters have inconsistent Paxos support — deploy lock is best-effort. Supplement with CI process controls on older clusters.
Real Apache Cassandra nodes managed by Microsoft. Uses mTLS certificate auth. Credentials and certificates can be passed via environment variables or mounted as files.
profiles:
  prod:
    hosts: [your-cluster.cassandra.cosmos.azure.com]
    port: 9042
    username: your-username
    ssl: true
    keyspace: myapp_prod
    history_keyspace: myapp_migrations_prod

Serverless CQL-compatible service — not Apache Cassandra. The deploy lock is best-effort (no true Paxos). Avoid DROP TABLE, ALTER TABLE DROP COLUMN, TRUNCATE, UDTs, UDFs, and materialized views in migration scripts.
profiles:
  prod:
    hosts: [cassandra.us-east-1.amazonaws.com]
    port: 9142
    ssl: true
    ssl_cafile: /path/to/sf-class2-root.crt
    keyspace: myapp
    history_keyspace: myapp_migrations
    timeout: 30

Supported DDL on Keyspaces:
| Operation | Supported |
|---|---|
| CREATE TABLE IF NOT EXISTS | ✓ |
| ALTER TABLE ADD column | ✓ |
| CREATE INDEX | ✓ (on supported column types) |
| DROP TABLE | ✗ |
| ALTER TABLE DROP COLUMN | ✗ |
| TRUNCATE | ✗ |
| Materialized views / UDTs / UDFs / UDAs | ✗ |
| Variable | Config key | Notes |
|---|---|---|
| CASSANDRA_HOSTS | hosts | Comma-separated |
| CASSANDRA_PORT | port | Default: 9042 |
| CASSANDRA_KEYSPACE | keyspace | Single keyspace |
| CASSANDRA_USERNAME | username | |
| CASSANDRA_PASSWORD | password | |
| ASTRA_TOKEN | astra_token | AstraCS:... |
| ASTRA_SECURE_CONNECT_BUNDLE | secure_connect_bundle | Path to SCB .zip |
| CASSACHANGE_PROFILE | (profile selector) | e.g. prod |
| CASSACHANGE_HISTORY_KEYSPACE | history_keyspace | Required — no default |
| CASSACHANGE_HISTORY_TABLE | history_table | Default: change_history |
| CASSACHANGE_ROOT_FOLDER | root_folder | Default: ./migrations |
| CASSACHANGE_TIMEOUT | timeout | Seconds, integer |
| CASSACHANGE_ENV | environment | Label in notification payloads |
cassachange uses Cassandra Lightweight Transactions (Paxos) to guarantee that only one deploy runs at a time — no external coordination service needed.
acquire → INSERT INTO deploy_lock (lock_key, locked_by, locked_at, run_id)
VALUES ('global', 'host:a4f3b1', now(), 'uuid')
IF NOT EXISTS ← atomic at cluster level
release → DELETE FROM deploy_lock
WHERE lock_key = 'global'
IF run_id = 'uuid' ← only this run releases its own lock
TTL → lock row has TTL 1800s ← crashed deploy never permanently blocks
If the lock is already held when a deploy starts, cassachange exits immediately:
ERROR Deploy lock already held.
locked_by=ci-runner:b9c2d4 locked_at=2024-03-15 14:30:01 run_id=b9c2d4...
Wait for the current deploy to finish, or use:
cassachange repair --release-lock
If a process crashes and leaves the lock behind:
# Inspect lock state first
cassachange repair --profile prod --list
# Release only after confirming no deploy is actually running
cassachange repair --profile prod --release-lock

Tags stamp every script that runs in a deploy with a label stored in change_history. Use them to filter history and to roll back an entire release atomically.
# Tag a deploy with a semantic version
cassachange deploy --profile prod --tag release-2.1.0
# In CI — use the git tag name automatically
cassachange deploy --profile prod --tag ${{ github.ref_name }}
# See exactly what release-2.1.0 changed
cassachange status --profile prod --tag release-2.1.0
# Roll back the entire release
cassachange rollback --profile prod --tag release-2.1.0

History after two tagged deploys:
VERSION SCRIPT STATUS TAG INSTALLED_ON
1.0.0 V1.0.0__create_users.cql SUCCESS release-1.0.0 2024-01-10 09:12:00
1.1.0 V1.1.0__add_orders.cql SUCCESS release-1.0.0 2024-01-10 09:12:01
1.2.0 V1.2.0__add_payments.cql SUCCESS release-2.1.0 2024-03-15 14:33:22
2.0.0 V2.0.0__new_schema.cql SUCCESS release-2.1.0 2024-03-15 14:33:24
cassachange rollback --tag release-2.1.0 undoes V2.0.0 then V1.2.0 in reverse order. The release-1.0.0 scripts are untouched.
CI convention — auto-tag from git:
# Push a git tag and the pipeline picks it up automatically
git tag v2.1.0
git push origin v2.1.0
# In workflow:
cassachange deploy --profile prod --tag ${{ github.ref_name }}
# → cassachange deploy --profile prod --tag v2.1.0

Preview exactly what would run without writing anything to the database. No lock is acquired, no history rows are written.
# Print plan to stdout
cassachange deploy --profile prod --dry-run
# Write structured JSON plan (implies --dry-run)
cassachange deploy --profile prod --tag release-2.1.0 --dry-run-output plan.json

plan.json structure:
{
  "profile": "prod",
  "tag": "release-2.1.0",
  "dry_run": true,
  "total_actions": 3,
  "actions": [
    {
      "action": "run",
      "script": "V1.2.0__add_payments.cql",
      "version": "1.2.0",
      "type": "versioned",
      "checksum": "a1b2c3d4e5f6..."
    },
    {
      "action": "skip",
      "script": "V1.1.0__add_orders.cql",
      "reason": "already applied"
    },
    {
      "action": "run",
      "script": "R__users_by_username.cql",
      "type": "repeatable",
      "reason": "checksum changed"
    }
  ]
}

In CI, upload plan.json as a GitHub Actions artifact before the real deploy. Reviewers can inspect exactly what will change before approving.
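A possible shape for that CI step, assuming the standard actions/upload-artifact action (the step names here are illustrative, not taken from the shipped workflow):

```yaml
- name: Dry-run plan
  run: cassachange deploy --profile prod --tag ${{ github.ref_name }} --dry-run-output plan.json

- name: Upload plan artifact
  uses: actions/upload-artifact@v4
  with:
    name: migration-plan
    path: plan.json
```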
Fire-and-forget HTTP notifications to Slack, Microsoft Teams, or any generic webhook. A notification failure logs a WARNING and never blocks a deploy.
notifications:
  on_events:
    - deploy_success      # deploy finished with 0 errors
    - deploy_failed       # deploy finished with ≥ 1 error
    - rollback_success
    - rollback_failed
    - script_failed       # individual script error mid-deploy
  channels:
    # Slack — Block Kit payload
    - type: slack
      webhook_url_env: SLACK_WEBHOOK_URL   # env var name, not the URL
    # Microsoft Teams — Adaptive Card payload
    - type: teams
      webhook_url_env: TEAMS_WEBHOOK_URL
    # Generic HTTP webhook — POST, JSON body
    - type: webhook
      url: https://ops.example.com/migration-events

All payloads include: keyspace, environment, status, tag, run_id, scripts_run, scripts_skipped, scripts_failed, elapsed_ms.
cassachange validate runs a built-in CQL linter on every script. No Cassandra connection needed. Run on every PR.
| Error class | Example |
|---|---|
| Misspelled top-level verb | SELCT * FROM t → did you mean SELECT? |
| Bad ALTER TABLE sub-command | ALTER TABLE t MDDADD col text → did you mean ADD? Valid: ADD, ALTER, DROP, RENAME, WITH |
| Bad CREATE / DROP object type | CREATE TABEL t (...) → did you mean TABLE? |
| Unbalanced parentheses | INSERT INTO t (a VALUES (1) |
| Missing semicolon on last statement | |
| Empty file | no executable statements |
| Duplicate version numbers | V1.1.0 appears in two files |
| Orphaned undo script | U1.2.0__...cql with no matching V1.2.0__...cql |
Linter uses Levenshtein distance for suggestions — no external dependencies.
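A minimal sketch of that mechanism (illustrative, not the linter's actual code): classic dynamic-programming edit distance, then pick the closest valid keyword.

```python
# Illustrative sketch: Levenshtein distance with a rolling row, plus a
# suggestion helper that picks the nearest valid keyword.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(token, valid):
    return min(valid, key=lambda v: levenshtein(token, v))

print(suggest("MDDADD", ["ADD", "ALTER", "DROP", "RENAME", "WITH"]))  # → ADD
```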
Bring an existing unmanaged keyspace under version control without writing a migration by hand.
# Generate with default version 0.0.0
cassachange baseline --profile prod --keyspace myapp
# Custom version and output path
cassachange baseline \
--profile prod \
--keyspace myapp \
--baseline-version 1.0.0 \
--output ./migrations
# Generates: ./migrations/V1.0.0__baseline_myapp.cql

The generated file captures: CREATE TABLE IF NOT EXISTS, CREATE TYPE IF NOT EXISTS, CREATE INDEX IF NOT EXISTS, CREATE FUNCTION IF NOT EXISTS, CREATE AGGREGATE IF NOT EXISTS. All statements are idempotent — safe to run against a keyspace that already has those objects.
Full onboarding workflow:
# 1. Generate baseline from production
cassachange baseline --profile prod --keyspace myapp --baseline-version 1.0.0
# 2. Review the generated file
cat migrations/V1.0.0__baseline_myapp.cql
# 3. Deploy the baseline — stamps it as applied in history
cassachange deploy --profile prod
# 4. Verify status
cassachange status --profile prod
# → VERSION 1.0.0 STATUS SUCCESS
# 5. Start writing V1.1.0__, V1.2.0__ scripts normally

After a failed deploy some scripts are marked FAILED in change_history and the deploy lock may still be held. Repair fixes both without touching your actual data tables.
# Step 1 — see what failed and current lock state
cassachange repair --profile prod --list
# Output:
# FAILED scripts in myapp_prod:
# V1.2.0__add_payments.cql FAILED 2024-03-15 14:33:22 run_id=a4f3b1
#
# Deploy lock: HELD
# locked_by=ci-runner:a4f3b1 locked_at=2024-03-15 14:33:21
# Step 2 — release lock (only if certain no deploy is running)
cassachange repair --profile prod --release-lock
# Step 3 — mark failed scripts for retry
cassachange repair --profile prod --keyspace myapp_prod
# Or mark a specific script
cassachange repair --profile prod --script V1.2.0__add_payments.cql
# Step 4 — re-run deploy
cassachange deploy --profile prod

The original FAILED row is never deleted. A REPAIRED sentinel row is inserted alongside it — the full history chain is preserved.
Deploy the same migration set across multiple keyspaces in one command. One distributed lock is acquired for the entire run. Each keyspace gets its own change_history rows.
# cassachange.yml
profiles:
  prod:
    keyspaces:
      - myapp_prod
      - orders_prod
      - analytics_prod
    history_keyspace: myapp_migrations_prod

# Migrates all three keyspaces sequentially
cassachange deploy --profile prod
# Output:
# ✓ myapp_prod: run 2 | skip 1 | errors 0
# ✓ orders_prod: run 1 | skip 2 | errors 0
# ✓ analytics_prod: run 0 | skip 3 | errors 0

Override via CLI for a surgical single-keyspace run:
cassachange deploy --profile prod --keyspace orders_prod

A production-ready workflow ships in the package at .github/workflows/migrate.yml. Four jobs. Rollback is manual-only by design — it cannot be triggered by a push event.
cassachange never creates keyspaces. Keyspace creation requires elevated Cassandra permissions (CREATE on ALL KEYSPACES) that the migration user should not hold.
Recommended: Terraform
# terraform/cassandra.tf
resource "astra_keyspace" "app" {
  database_id = var.astra_database_id
  name        = "myapp_prod"
}

resource "astra_keyspace" "migrations" {
  database_id = var.astra_database_id
  name        = "myapp_migrations_prod"
}

Alternative: cqlsh / admin UI
CREATE KEYSPACE IF NOT EXISTS myapp_prod
WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': 3};
CREATE KEYSPACE IF NOT EXISTS myapp_migrations_prod
WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': 3};

If any keyspace listed in cassachange.yml does not exist at deploy time, cassachange exits with a clear error before acquiring the lock or running any script:
ERROR Keyspace 'myapp_prod' does not exist.
Create it via your admin UI or cqlsh before running cassachange:
CREATE KEYSPACE IF NOT EXISTS myapp_prod
WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': 3};
The boundary:
| Concern | Tool |
|---|---|
| Keyspace creation and replication config | Terraform (or admin cqlsh) |
| Table / index / type / UDF / view evolution | cassachange V__ scripts |
| AstraDB collection management | App bootstrap script (create_collection) |
cassachange creates these tables in history_keyspace on first deploy.
CREATE TABLE IF NOT EXISTS {history_keyspace}.change_history (
    installed_on   timestamp,
    script         text,
    script_type    text,   -- versioned | repeatable | always | undo
    version        text,
    description    text,
    checksum       text,   -- MD5 of script content
    execution_time int,    -- milliseconds
    status         text,   -- SUCCESS | FAILED | ROLLED_BACK | REPAIRED
    installed_by   text,   -- hostname running cassachange
    keyspace_name  text,   -- target keyspace
    tag            text,   -- release tag if supplied
    run_id         text,   -- UUID shared across a deploy run
    PRIMARY KEY (script, installed_on)
) WITH CLUSTERING ORDER BY (installed_on DESC);

CREATE TABLE IF NOT EXISTS {history_keyspace}.deploy_lock (
    lock_key  text PRIMARY KEY,
    locked_by text,        -- "hostname:run_id_prefix"
    locked_at timestamp,
    run_id    text
);

Lock rows have a TTL of 1800 seconds — a crashed deploy can never permanently block future ones.
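Because the history table is a plain Cassandra table, it can also be audited directly with cqlsh. For example (keyspace and script names illustrative), the clustering order returns the newest run first:

```cql
-- Recent runs of one script, newest first
SELECT installed_on, status, tag, execution_time
FROM myapp_migrations.change_history
WHERE script = 'V1.2.0__add_payments.cql';
```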
General-purpose SQL migration tools are excellent for relational databases. Their Cassandra support is typically a community plugin bolted on after the fact. cassachange is purpose-built for Cassandra from the ground up.
| Feature | cassachange | SQL-first tool | Generic migrator |
|---|---|---|---|
| Native CQL execution | ✓ cassandra-driver | ⚠ community plugin | ⚠ 3rd-party ext |
| AstraDB SCB + token auth | ✓ built-in | ✗ | ✗ |
| ScyllaDB native support | ✓ | ✗ | ✗ |
| Azure Managed Cassandra | ✓ full + AKV cert fit | ✗ | ✗ |
| Amazon Keyspaces | ✓ CQL subset supported | ✗ | ✗ |
| Protocol v4 auto-pin | ✓ | ✗ | ✗ |
| Rollback (free) | ✓ U__ scripts | ✗ free / ✓ paid | ⚠ DDL only |
| Rollback on Cassandra DDL | ✓ explicit CQL | ✗ no CQL gen | ✗ no CQL gen |
| Rollback by tag | ✓ | ✗ | ✗ |
| Distributed locking | ✓ Cassandra LWT | ✗ | ✗ |
| Always scripts (A__) | ✓ | ✗ | ✗ |
| Multi-keyspace deploy | ✓ | ✗ | ✗ |
| Offline script validation | ✓ | ✗ | ✗ |
| Dry run to JSON file | ✓ | ⚠ paid only | ⚠ paid only |
| Baseline from live keyspace | ✓ | ✗ (CQL) | ✗ |
| Repair command | ✓ | ✗ | ✗ |
| Config profiles (YAML) | ✓ | ⚠ env files | ✗ |
| Slack / Teams notifications | ✓ | ✗ | ✗ |
| Never creates keyspaces | ✓ Terraform-safe | ✗ tries CREATE SCHEMA | ✗ tries CREATE SCHEMA |
| Runtime requirement | Python 3.8+ | JVM (Java 8+) | JVM / Node / Ruby |
| GitHub Actions included | ✓ | ⚠ manual | ⚠ manual |
Use cassachange if your database is Apache Cassandra, DataStax AstraDB, ScyllaDB, Azure Managed Cassandra, or Amazon Keyspaces.
Use a SQL-first tool or generic migrator if your primary database is relational and Cassandra is a secondary concern. Don't fight your tools.
cassachange is released under the Apache 2.0 License.
