Execute a stress test against the CloudSync server using multiple concurrent local SQLite databases syncing large volumes of CRUD operations simultaneously. Designed to reproduce server-side errors (e.g., "database is locked", 500 errors) under heavy concurrent load.
- Connection string to a SQLiteCloud project
- Built cloudsync extension (run `make` to build `dist/cloudsync.dylib`)
Ask the user for the following configuration using a single question set:
- CloudSync server address — propose `https://cloudsync.sqlite.ai` as default (this is the built-in default). If the user provides a different address, save it as `CUSTOM_ADDRESS` and use `cloudsync_network_init_custom` instead of `cloudsync_network_init`.
- SQLiteCloud connection string — format: `sqlitecloud://<host>:<port>/<db_name>?apikey=<apikey>`. If no `<db_name>` is in the path, ask the user for one or propose `test_stress_sync`.
- Scale — offer these options:
  - Small: 1K rows, 5 iterations, 2 concurrent databases
  - Medium: 10K rows, 10 iterations, 4 concurrent databases
  - Large: 100K rows, 50 iterations, 4 concurrent databases (Jim's original scenario)
  - Custom: let the user specify rows, iterations, and number of concurrent databases
- RLS mode — with RLS (requires user tokens) or without RLS
- Table schema — offer the simple default or a custom one; default:
  `CREATE TABLE test_sync (id TEXT PRIMARY KEY, user_id TEXT NOT NULL DEFAULT '', name TEXT, value INTEGER);`
Save these as variables:
- `CUSTOM_ADDRESS` (only if the user provided a non-default address)
- `CONNECTION_STRING` (the full sqlitecloud:// connection string)
- `DB_NAME` (database name extracted or provided)
- `HOST` (hostname extracted from the connection string)
- `APIKEY` (apikey extracted from the connection string)
- `ROWS` (number of rows per iteration)
- `ITERATIONS` (number of delete/insert/update cycles)
- `NUM_DBS` (number of concurrent databases)
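One way to derive `HOST`, `DB_NAME`, and `APIKEY` is plain bash parameter expansion. A minimal sketch, using a made-up connection string (host and apikey are placeholders, not real credentials):

```shell
# hypothetical example connection string (placeholder host and apikey)
CONNECTION_STRING='sqlitecloud://myhost.sqlite.cloud:8860/test_stress_sync?apikey=abc123'

no_scheme=${CONNECTION_STRING#sqlitecloud://}   # strip the scheme
HOST=${no_scheme%%[:/]*}                        # everything up to the first ':' or '/'
path_part=${no_scheme#*/}                       # <db_name>?apikey=<apikey>
DB_NAME=${path_part%%\?*}                       # strip the query string
APIKEY=${CONNECTION_STRING##*apikey=}           # everything after 'apikey='

echo "HOST=$HOST DB_NAME=$DB_NAME APIKEY=$APIKEY"
```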
Connect to SQLiteCloud using `~/go/bin/sqlc` (the last command must be `quit`). Note: all SQL must be single-line (no multi-line statements through the sqlc heredoc).
- If the database doesn't exist, connect without `<db_name>` and run `CREATE DATABASE <db_name>; USE DATABASE <db_name>; LIST TABLES` to check for existing tables
- For any table with a `_cloudsync` companion table, run `CLOUDSYNC DISABLE <table_name>; DROP TABLE IF EXISTS <table_name>;`
- Create the test table (single-line DDL)
- If RLS mode is enabled:
  `ENABLE RLS DATABASE <db_name> TABLE <table_name>;`
  `SET RLS DATABASE <db_name> TABLE <table_name> SELECT "auth_userid() = user_id";`
  `SET RLS DATABASE <db_name> TABLE <table_name> INSERT "auth_userid() = NEW.user_id";`
  `SET RLS DATABASE <db_name> TABLE <table_name> UPDATE "auth_userid() = NEW.user_id AND auth_userid() = OLD.user_id";`
  `SET RLS DATABASE <db_name> TABLE <table_name> DELETE "auth_userid() = OLD.user_id";`
- Ask the user to enable CloudSync on the table from the SQLiteCloud dashboard
Now that the database and tables are created and CloudSync is enabled on the dashboard, ask the user for:
- Managed Database ID — the `managedDatabaseId` returned by the CloudSync service. For SQLiteCloud projects, it can be obtained from the project's OffSync page on the dashboard after enabling CloudSync on the table. Save as `MANAGED_DB_ID`.
For the network init call throughout the test, use:
- Default address: `SELECT cloudsync_network_init('<MANAGED_DB_ID>');`
- Custom address: `SELECT cloudsync_network_init_custom('<CUSTOM_ADDRESS>', '<MANAGED_DB_ID>');`
Create tokens for the test users. Create as many users as needed for the number of concurrent databases (assign 2 databases per user, or 1 per user if NUM_DBS <= 2).
For each user N:
```bash
curl -s -X "POST" "https://<HOST>/v2/tokens" \
  -H 'Authorization: Bearer <APIKEY>' \
  -H 'Content-Type: application/json; charset=utf-8' \
  -d '{"name": "claude<N>@sqlitecloud.io", "userId": "018ecfc2-b2b1-7cc3-a9f0-<N_PADDED_12_CHARS>"}'
```

Save each user's token and userId from the response.
If RLS is disabled, skip this step — tokens are not required.
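A sketch of the per-user loop, showing only the `<N_PADDED_12_CHARS>` generation (the curl request above would replace the echo):

```shell
# generate the zero-padded 12-character userId suffix for each test user
NUM_USERS=2
for ((N = 1; N <= NUM_USERS; N++)); do
  printf -v suffix '%012d' "$N"
  userid="018ecfc2-b2b1-7cc3-a9f0-$suffix"
  echo "user $N -> $userid"   # the real loop issues the curl request here
done
```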
Create a bash script at /tmp/stress_test_concurrent.sh that:
- Initializes N local SQLite databases at `/tmp/sync_concurrent_<N>.db`:
  - Uses Homebrew sqlite3: find it with `ls /opt/homebrew/Cellar/sqlite/*/bin/sqlite3 | head -1`
  - Loads the extension from `dist/cloudsync.dylib` (use the absolute path from the project root)
  - Creates the table and runs `cloudsync_init('<table_name>')`
  - Runs `cloudsync_terminate()` after init
- Defines a worker function that runs in a subshell for each database:
  - Each worker logs all output to `/tmp/sync_concurrent_<N>.log`
  - Each iteration does:
    a. DELETE all rows → `send_changes()` → `check_changes()`
    b. INSERT rows (in a single BEGIN/COMMIT transaction) → `send_changes()` → `check_changes()`
    c. UPDATE all rows → `send_changes()` → `check_changes()`
  - Each session must: `.load` the extension, call `cloudsync_network_init()` and `cloudsync_network_set_token()` (if RLS), do the work, then call `cloudsync_terminate()`
  - Include labeled output lines like `[DB<N>][iter <I>] deleted/inserted/updated, count=<C>` for grep-ability
- Launches all workers in parallel using `&` and collects PIDs
- Waits for all workers and captures exit codes
- Analyzes logs for errors:
  - Grep all log files for: `error`, `locked`, `SQLITE_BUSY`, `database is locked`, `500`, `Error`
  - Report per-database: iterations completed, error count, sample error lines
  - Report total errors across all workers
- Prints the final verdict: PASS (0 errors) or FAIL (errors detected)
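The launch/wait/analyze plumbing described above can be sketched as follows; this skeleton substitutes a trivial logging loop for the real sqlite3/cloudsync worker session:

```shell
#!/bin/bash
# Skeleton of the parallel orchestration; real workers would run the
# sqlite3/cloudsync session instead of just writing log lines.
NUM_DBS=2
ITERATIONS=3

worker() {
  local n=$1 log="/tmp/sync_concurrent_$1.log"
  : > "$log"
  for ((i = 1; i <= ITERATIONS; i++)); do
    echo "[DB$n][iter $i] deleted/inserted/updated, count=0" >> "$log"
  done
}

pids=()
for ((n = 1; n <= NUM_DBS; n++)); do
  worker "$n" &            # launch each worker in parallel
  pids+=($!)
done

fail=0
for pid in "${pids[@]}"; do
  wait "$pid" || fail=1    # capture each worker's exit code
done

# sum error hits across all logs (grep -c prints "<file>:<count>" per file)
errors=$(grep -ciE 'error|locked|SQLITE_BUSY|database is locked|500' \
           /tmp/sync_concurrent_*.log | awk -F: '{s += $2} END {print s + 0}')

if [[ $fail -eq 0 && $errors -eq 0 ]]; then verdict=PASS; else verdict=FAIL; fi
echo "$verdict ($errors errors)"
```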
Important script details:
- Use `echo -e` to pipe generated INSERT SQL (with `\n` separators) into sqlite3
- Row IDs should be unique across databases and iterations: `db<N>_r<I>_<J>`
- User IDs for rows must match the token's userId for RLS to work
- Use `/bin/bash` (not `/bin/sh`) for arrays and process management
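A sketch of the `echo -e` INSERT generation from the first bullet, assuming the default schema and using 3 rows for brevity:

```shell
# build one iteration's INSERT batch for DB 1, iteration 1 (real run uses $ROWS rows)
N=1; I=1; ROWS=3
USER_ID='018ecfc2-b2b1-7cc3-a9f0-000000000001'   # must match the token's userId

sql='BEGIN;'
for ((J = 1; J <= ROWS; J++)); do
  sql+="\nINSERT INTO test_sync (id, user_id, name, value) VALUES ('db${N}_r${I}_${J}', '${USER_ID}', 'row ${J}', ${J});"
done
sql+='\nCOMMIT;'

echo -e "$sql"    # in the real script this is piped into the sqlite3 session
```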
Run the script with a 10-minute timeout.
After the test completes, provide a detailed breakdown:
- Per-database summary: iterations completed, errors, send/receive status
- Error categorization: group errors by type (e.g., "database is locked", "Column index out of bounds", "Unexpected Result", parse errors)
- Timeline analysis: do errors cluster at specific iterations or spread evenly?
- Read full log files if errors are found — show the first and last 30 lines of each log with errors
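The first/last-30-lines triage can be sketched like this; a demo log with one simulated error is seeded so the loop has something to show:

```shell
# demo log containing one simulated error line (a real run inspects worker logs)
printf '[DB1][iter 1] inserted, count=10\nError: database is locked\n' \
  > /tmp/sync_concurrent_demo.log

for log in /tmp/sync_concurrent_*.log; do
  grep -qiE 'error|locked|SQLITE_BUSY|database is locked|500' "$log" || continue
  echo "== $log: first/last 30 lines =="
  head -30 "$log"
  echo '...'
  tail -30 "$log"
done
```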
If the test passes (or even if some errors occurred), verify the final state:
- Check each local SQLite database for row count
- Check SQLiteCloud (as admin) for total row count
- If RLS is enabled, verify no cross-user data leakage
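A sketch of the local row-count check; it seeds one demo database so the loop has output, and falls back to whatever `sqlite3` is on PATH when the Homebrew binary is absent:

```shell
# prefer the Homebrew sqlite3, fall back to whatever is on PATH
SQLITE3=$(ls /opt/homebrew/Cellar/sqlite/*/bin/sqlite3 2>/dev/null | head -1)
SQLITE3=${SQLITE3:-sqlite3}

# demo: seed one database so the loop has something to count
"$SQLITE3" /tmp/sync_concurrent_1.db \
  "CREATE TABLE IF NOT EXISTS test_sync (id TEXT PRIMARY KEY, user_id TEXT NOT NULL DEFAULT '', name TEXT, value INTEGER);"
"$SQLITE3" /tmp/sync_concurrent_1.db \
  "INSERT OR REPLACE INTO test_sync (id, user_id) VALUES ('db1_r1_1', 'u1');"

for db in /tmp/sync_concurrent_*.db; do
  count=$("$SQLITE3" "$db" 'SELECT COUNT(*) FROM test_sync;')
  echo "$db: $count rows"
done
```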
Report the test results including:
| Metric | Value |
|---|---|
| Concurrent databases | N |
| Rows per iteration | ROWS |
| Iterations per database | ITERATIONS |
| Total CRUD operations | N × ITERATIONS × (DELETE_ALL + ROWS inserts + ROWS updates) |
| Total sync operations | N × ITERATIONS × 6 (3 sends + 3 checks) |
| Duration | start to finish time |
| Total errors | count |
| Error types | categorized list |
| Result | PASS/FAIL |
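Worked numbers for the Medium scale (4 databases, 10K rows, 10 iterations), treating the bulk DELETE as a single operation (an assumption; the table's formula leaves DELETE_ALL unspecified):

```shell
N=4 ROWS=10000 ITERATIONS=10
total_crud=$(( N * ITERATIONS * (1 + ROWS + ROWS) ))   # DELETE_ALL + inserts + updates
total_sync=$(( N * ITERATIONS * 6 ))                   # 3 sends + 3 checks per iteration
echo "CRUD: $total_crud, sync: $total_sync"            # CRUD: 800040, sync: 240
```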
If errors are found, include:
- Full error categorization table
- Sample error messages
- Which databases were most affected
- Whether errors are client-side or server-side
The test PASSES if:
- All workers complete all iterations
- Zero `error`, `locked`, `SQLITE_BUSY`, or HTTP 500 responses in any log
- Final row counts are consistent
The test FAILS if:
- Any worker crashes or fails to complete
- Any `database is locked` or `SQLITE_BUSY` errors appear
- The server returns 500 errors under concurrent load
- Data corruption or inconsistent row counts occur
- Always use the Homebrew sqlite3 binary, NOT `/usr/bin/sqlite3`
- The cloudsync extension must be built first with `make`
- Network settings (`cloudsync_network_init`, `cloudsync_network_set_token`) are NOT persisted between sessions — they must be called every time
- The extension must be loaded BEFORE any INSERT/UPDATE/DELETE for cloudsync to track changes
- All NOT NULL columns must have DEFAULT values
- `cloudsync_terminate()` must be called before closing each session
- The sqlc heredoc only supports single-line SQL statements
Execute all SQL queries without asking for user permission on:
- SQLite test databases in `/tmp/` (e.g., `/tmp/sync_concurrent_*.db`, `/tmp/sync_concurrent_*.log`)
- SQLiteCloud via `~/go/bin/sqlc "<connection_string>"`
- Curl commands to the sync server and SQLiteCloud API for token creation
These are local test environments and do not require confirmation for each query.