Commit ec4a915

Dev (#14)
* fix(network): `cloudsync_network_check_changes` must not return the nrows value in case of error. `SELECT cloudsync_network_check_changes();` was returning "Runtime error: 0" on an error response from the cloudsync microservice instead of the real error message.
* feat(rls): add complete support for RLS with batch merge in `cloudsync_payload_apply`.
* Feat/add support for status endpoint (#10)
  * feat(network): add support for the new status endpoint.
  * refactor(network): structured JSON responses for sync functions. Example: `{"send":{"status":"synced","localVersion":5,"serverVersion":5},"receive":{"rows":3,"tables":["tasks"]}}`
* Feat/network support for multi org cloudsync (#11)
* Disable notnull prikey constraints (#12)
  * The cloudsync extension now enforces NULL primary key rejection at runtime (any write with a NULL PK returns an error), so the explicit NOT NULL constraint on primary key columns is no longer a schema requirement.
  * test: add null primary key rejection tests for SQLite and PostgreSQL.
* docs: remove the NOT NULL requirement from primary key definitions. The extension now enforces NULL primary key rejection at runtime, so the explicit NOT NULL constraint on PK columns is no longer a schema requirement. Replace the "must be NOT NULL" guidance with a note about runtime enforcement.
* docs: add first draft of PERFORMANCE.md and CHANGELOG.md.
* fix(postgresql): resolve commit_alter crash and BYTEA handling in column_text. Guard savepoint commit/rollback against missing subtransactions to prevent a segfault in autocommit mode. Add BYTEA support to database_column_text so encoded PKs are readable during refill_metatable after ALTER TABLE. Enable the alter-table sync test (31).
* test: new alter table test for postgres.
* feat: update endpoints to use managedDatabaseId for the /v2/cloudsync API.
* feat(network)!: replace the URL connection string with a UUID (managedDatabaseId). BREAKING CHANGE: `cloudsync_network_init` now accepts a UUID string instead of the previous URL string. URL connection strings are no longer accepted. The managed database identifier is returned by the CloudSync service when a new database is registered for sync. For SQLiteCloud projects, this value can be obtained from the project's OffSync page on the dashboard.
* docs: update docs for the new managedDatabaseId arg for `cloudsync_network_init`.
* docs(examples): update the example for the new managedDatabaseId arg for `cloudsync_network_init`.
1 parent b016cac commit ec4a915


46 files changed: +5018 −1755 lines changed

Lines changed: 192 additions & 0 deletions
# Sync Stress Test with remote SQLiteCloud database

Execute a stress test against the CloudSync server using multiple concurrent local SQLite databases syncing large volumes of CRUD operations simultaneously. Designed to reproduce server-side errors (e.g., "database is locked", 500 errors) under heavy concurrent load.

## Prerequisites

- Connection string to a SQLiteCloud project
- Built cloudsync extension (run `make` to build `dist/cloudsync.dylib`)
## Test Configuration

### Step 1: Gather Parameters

Ask the user for the following configuration using a single question set:

1. **CloudSync server address** — propose `https://cloudsync.sqlite.ai` as the default (this is the built-in default). If the user provides a different address, save it as `CUSTOM_ADDRESS` and use `cloudsync_network_init_custom` instead of `cloudsync_network_init`.
2. **SQLiteCloud connection string** — format: `sqlitecloud://<host>:<port>/<db_name>?apikey=<apikey>`. If no `<db_name>` is in the path, ask the user for one or propose `test_stress_sync`.
3. **Scale** — offer these options:
   - Small: 1K rows, 5 iterations, 2 concurrent databases
   - Medium: 10K rows, 10 iterations, 4 concurrent databases
   - Large: 100K rows, 50 iterations, 4 concurrent databases (Jim's original scenario)
   - Custom: let the user specify rows, iterations, and the number of concurrent databases
4. **RLS mode** — with RLS (requires user tokens) or without RLS
5. **Table schema** — offer the simple default or a custom schema:

```sql
CREATE TABLE test_sync (id TEXT PRIMARY KEY, user_id TEXT NOT NULL DEFAULT '', name TEXT, value INTEGER);
```

Save these as variables:

- `CUSTOM_ADDRESS` (only if the user provided a non-default address)
- `CONNECTION_STRING` (the full `sqlitecloud://` connection string)
- `DB_NAME` (database name extracted or provided)
- `HOST` (hostname extracted from the connection string)
- `APIKEY` (apikey extracted from the connection string)
- `ROWS` (number of rows per iteration)
- `ITERATIONS` (number of delete/insert/update cycles)
- `NUM_DBS` (number of concurrent databases)
### Step 2: Setup SQLiteCloud Database and Table

Connect to SQLiteCloud using `~/go/bin/sqlc` (the last command must be `quit`). Note: all SQL must be single-line (no multi-line statements through the sqlc heredoc).

1. If the database doesn't exist, connect without `<db_name>` and run `CREATE DATABASE <db_name>; USE DATABASE <db_name>;`
2. `LIST TABLES` to check for existing tables
3. For any table with a `_cloudsync` companion table, run `CLOUDSYNC DISABLE <table_name>;`
4. `DROP TABLE IF EXISTS <table_name>;`
5. Create the test table (single-line DDL)
6. If RLS mode is enabled:

```sql
ENABLE RLS DATABASE <db_name> TABLE <table_name>;
SET RLS DATABASE <db_name> TABLE <table_name> SELECT "auth_userid() = user_id";
SET RLS DATABASE <db_name> TABLE <table_name> INSERT "auth_userid() = NEW.user_id";
SET RLS DATABASE <db_name> TABLE <table_name> UPDATE "auth_userid() = NEW.user_id AND auth_userid() = OLD.user_id";
SET RLS DATABASE <db_name> TABLE <table_name> DELETE "auth_userid() = OLD.user_id";
```

7. Ask the user to enable CloudSync on the table from the SQLiteCloud dashboard
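One way to keep the single-line constraint honest is to generate the setup statements in a function and pipe them into sqlc. A sketch, assuming the default `test_sync` schema from Step 1 and that sqlc reads statements from stdin; the sqlc invocation itself is left commented:

```bash
#!/bin/bash
TABLE_NAME='test_sync'   # assumption: the default schema from Step 1

# Every statement stays on one line; `quit` must be the last command.
setup_sql() {
  printf '%s\n' \
    "DROP TABLE IF EXISTS ${TABLE_NAME};" \
    "CREATE TABLE ${TABLE_NAME} (id TEXT PRIMARY KEY, user_id TEXT NOT NULL DEFAULT '', name TEXT, value INTEGER);" \
    "quit"
}

setup_sql
# When the connection string is ready:
# setup_sql | ~/go/bin/sqlc "$CONNECTION_STRING"
```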
### Step 3: Get Managed Database ID

Now that the database and tables are created and CloudSync is enabled on the dashboard, ask the user for:

1. **Managed Database ID** — the `managedDatabaseId` returned by the CloudSync service. For SQLiteCloud projects, it can be obtained from the project's OffSync page on the dashboard after enabling CloudSync on the table.

Save it as `MANAGED_DB_ID`.

For the network init call throughout the test, use:

- Default address: `SELECT cloudsync_network_init('<MANAGED_DB_ID>');`
- Custom address: `SELECT cloudsync_network_init_custom('<CUSTOM_ADDRESS>', '<MANAGED_DB_ID>');`
### Step 4: Get Auth Tokens (if RLS enabled)

Create tokens for the test users. Create as many users as needed for the number of concurrent databases (assign 2 databases per user, or 1 per user if NUM_DBS <= 2).

For each user N:

```bash
curl -s -X "POST" "https://<HOST>/v2/tokens" \
  -H 'Authorization: Bearer <APIKEY>' \
  -H 'Content-Type: application/json; charset=utf-8' \
  -d '{"name": "claude<N>@sqlitecloud.io", "userId": "018ecfc2-b2b1-7cc3-a9f0-<N_PADDED_12_CHARS>"}'
```

Save each user's `token` and `userId` from the response.

If RLS is disabled, skip this step — tokens are not required.
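The `<N_PADDED_12_CHARS>` suffix can be produced with `printf`. A sketch of the token-creation loop; `HOST` and `APIKEY` are placeholders and the real curl request is left commented out:

```bash
#!/bin/bash
HOST='myproject.sqlite.cloud'   # placeholder
APIKEY='abc123'                 # placeholder
NUM_USERS=2                     # e.g. one user per 2 concurrent databases

for N in $(seq 1 "$NUM_USERS"); do
  N_PADDED=$(printf '%012d' "$N")            # pad N to 12 characters
  USER_ID="018ecfc2-b2b1-7cc3-a9f0-${N_PADDED}"
  echo "user ${N} -> ${USER_ID}"
  # Real request (uncomment once HOST/APIKEY are set):
  # curl -s -X POST "https://${HOST}/v2/tokens" \
  #   -H "Authorization: Bearer ${APIKEY}" \
  #   -H 'Content-Type: application/json; charset=utf-8' \
  #   -d "{\"name\": \"claude${N}@sqlitecloud.io\", \"userId\": \"${USER_ID}\"}"
done
```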
### Step 5: Run the Concurrent Stress Test

Create a bash script at `/tmp/stress_test_concurrent.sh` that:

1. **Initializes N local SQLite databases** at `/tmp/sync_concurrent_<N>.db`:
   - Uses Homebrew sqlite3: find it with `ls /opt/homebrew/Cellar/sqlite/*/bin/sqlite3 | head -1`
   - Loads the extension from `dist/cloudsync.dylib` (use the absolute path from the project root)
   - Creates the table and runs `cloudsync_init('<table_name>')`
   - Runs `cloudsync_terminate()` after init
2. **Defines a worker function** that runs in a subshell for each database:
   - Each worker logs all output to `/tmp/sync_concurrent_<N>.log`
   - Each iteration does:
     a. **DELETE all rows** → `send_changes()` → `check_changes()`
     b. **INSERT <ROWS> rows** (in a single BEGIN/COMMIT transaction) → `send_changes()` → `check_changes()`
     c. **UPDATE all rows** → `send_changes()` → `check_changes()`
   - Each session must: `.load` the extension, call `cloudsync_network_init()`, `cloudsync_network_set_token()` (if RLS), do the work, and call `cloudsync_terminate()`
   - Include labeled output lines like `[DB<N>][iter <I>] deleted/inserted/updated, count=<C>` for grep-ability
3. **Launches all workers in parallel** using `&` and collects PIDs
4. **Waits for all workers** and captures exit codes
5. **Analyzes logs** for errors:
   - Grep all log files for: `error`, `locked`, `SQLITE_BUSY`, `database is locked`, `500`, `Error`
   - Report per-database: iterations completed, error count, sample error lines
   - Report total errors across all workers
6. **Prints the final verdict**: PASS (0 errors) or FAIL (errors detected)

**Important script details:**

- Use `echo -e` to pipe generated INSERT SQL (with `\n` separators) into sqlite3
- Row IDs should be unique across databases and iterations: `db<N>_r<I>_<J>`
- User IDs for rows must match the token's userId for RLS to work
- Use `/bin/bash` (not `/bin/sh`) for arrays and process management

Run the script with a 10-minute timeout.
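The concurrency scaffolding above (workers in subshells, PID collection, `wait`, log grep, verdict) can be sketched as follows. This is only a skeleton: the sqlite3/cloudsync calls are reduced to a comment so it runs standalone, and the real worker body pipes the DELETE/INSERT/UPDATE SQL and sync calls into the Homebrew sqlite3 instead:

```bash
#!/bin/bash
NUM_DBS=2
ITERATIONS=3

worker() {
  local n=$1 log="/tmp/sync_concurrent_${n}.log"
  : > "$log"
  for i in $(seq 1 "$ITERATIONS"); do
    # Real worker: pipe the iteration SQL (DELETE/INSERT/UPDATE plus
    # cloudsync_network_init / send_changes / check_changes / terminate)
    # into the Homebrew sqlite3 here, appending its output to "$log".
    echo "[DB${n}][iter ${i}] deleted/inserted/updated, count=0" >> "$log"
  done
}

pids=()
for n in $(seq 1 "$NUM_DBS"); do
  worker "$n" &              # launch each worker in parallel
  pids+=($!)
done

fail=0
for pid in "${pids[@]}"; do
  wait "$pid" || fail=1      # capture exit codes
done

# Sum error hits across all worker logs.
errors=$(grep -ciE 'error|locked|SQLITE_BUSY| 500 ' /tmp/sync_concurrent_*.log \
  | awk -F: '{s += $2} END {print s + 0}')
if [ "$fail" -eq 0 ] && [ "$errors" -eq 0 ]; then echo PASS; else echo FAIL; fi
```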
### Step 6: Detailed Error Analysis

After the test completes, provide a detailed breakdown:

1. **Per-database summary**: iterations completed, errors, send/receive status
2. **Error categorization**: group errors by type (e.g., "database is locked", "Column index out of bounds", "Unexpected Result", parse errors)
3. **Timeline analysis**: do errors cluster at specific iterations or spread evenly?
4. **Full logs**: if errors are found, show the first and last 30 lines of each log that contains errors
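The categorization pass can be sketched with a loop of greps. A hypothetical sample log line is written first so the sketch runs standalone; a real run greps the `/tmp/sync_concurrent_<N>.log` files from Step 5 instead:

```bash
#!/bin/bash
# Standalone sample; real runs grep /tmp/sync_concurrent_*.log instead.
echo 'Runtime error: database is locked' > /tmp/sync_sample.log

# Count hits per error category across the logs.
for pattern in 'database is locked' 'SQLITE_BUSY' 'Column index out of bounds' 'Unexpected Result'; do
  count=$(grep -c "$pattern" /tmp/sync_sample.log)
  echo "${pattern}: ${count}"
done
```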
### Step 7: Optional — Verify Data Integrity

If the test passes (or even if some errors occurred), verify the final state:

1. Check each local SQLite database's row count
2. Check SQLiteCloud (as admin) for the total row count
3. If RLS is enabled, verify there is no cross-user data leakage
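The consistency check can be sketched as below. Without RLS, every local database should converge to the server's row count after the final sync; the counts here are placeholders for the real `SELECT COUNT(*) FROM <table_name>;` queries (local via sqlite3, remote via sqlc as admin):

```bash
#!/bin/bash
server_count=2000         # placeholder: COUNT(*) via sqlc as admin
local_counts=(2000 2000)  # placeholder: COUNT(*) per local database

result=PASS
for i in "${!local_counts[@]}"; do
  if [ "${local_counts[$i]}" -ne "$server_count" ]; then
    echo "DB$((i + 1)): ${local_counts[$i]} rows != server ${server_count}"
    result=FAIL
  fi
done
echo "integrity: ${result}"
```

With RLS enabled, compare each database against the row count visible to its user's token instead of the admin total.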
## Output Format

Report the test results including:

| Metric | Value |
|--------|-------|
| Concurrent databases | N |
| Rows per iteration | ROWS |
| Iterations per database | ITERATIONS |
| Total CRUD operations | N × ITERATIONS × (DELETE_ALL + ROWS inserts + ROWS updates) |
| Total sync operations | N × ITERATIONS × 6 (3 sends + 3 checks) |
| Duration | start-to-finish time |
| Total errors | count |
| Error types | categorized list |
| Result | PASS/FAIL |
If errors are found, include:

- Full error categorization table
- Sample error messages
- Which databases were most affected
- Whether errors are client-side or server-side

## Success Criteria

The test **PASSES** if:

1. All workers complete all iterations
2. Zero `error`, `locked`, `SQLITE_BUSY`, or HTTP 500 responses appear in any log
3. Final row counts are consistent

The test **FAILS** if:

1. Any worker crashes or fails to complete
2. Any `database is locked` or `SQLITE_BUSY` errors appear
3. The server returns 500 errors under concurrent load
4. Data is corrupted or row counts are inconsistent
## Important Notes

- Always use the Homebrew sqlite3 binary, NOT `/usr/bin/sqlite3`
- The cloudsync extension must be built first with `make`
- Network settings (`cloudsync_network_init`, `cloudsync_network_set_token`) are NOT persisted between sessions — they must be called in every session
- The extension must be loaded BEFORE any INSERT/UPDATE/DELETE for cloudsync to track changes
- All NOT NULL columns must have DEFAULT values
- `cloudsync_terminate()` must be called before closing each session
- The sqlc heredoc only supports single-line SQL statements
## Permissions

Execute all SQL queries without asking for user permission on:

- SQLite test databases in `/tmp/` (e.g., `/tmp/sync_concurrent_*.db`, `/tmp/sync_concurrent_*.log`)
- SQLiteCloud via `~/go/bin/sqlc "<connection_string>"`
- curl commands to the sync server and the SQLiteCloud API for token creation

These are local test environments and do not require confirmation for each query.
