- Custom: let the user specify rows, iterations, and number of concurrent databases

4. **Operations per iteration** — how many UPDATE and DELETE operations to perform each iteration:
   - `NUM_UPDATES`: number of UPDATE operations per iteration (default: 1). Each UPDATE runs `UPDATE <table> SET value = value + 1;`, affecting all rows.
   - `NUM_DELETES`: number of DELETE operations per iteration (default: 1). Each DELETE runs `DELETE FROM <table> WHERE rowid IN (SELECT rowid FROM <table> ORDER BY RANDOM() LIMIT 10);`, removing 10 random rows. Set to 0 to skip deletes entirely.
   - Propose defaults of 1 update and 1 delete. The user can set 0 deletes for update-only tests.
5. **RLS mode** — with RLS (requires user tokens) or without RLS
6. **Table schema** — offer a simple default or custom:

   ```sql
   CREATE TABLE test_sync (id TEXT PRIMARY KEY, user_id TEXT NOT NULL DEFAULT '', name TEXT, value INTEGER);
   ```
@@ -34,6 +38,8 @@ Save these as variables:
- `ROWS` (number of rows per iteration)
- `ITERATIONS` (number of delete/insert/update cycles)
- `NUM_DBS` (number of concurrent databases)
- `NUM_UPDATES` (number of UPDATE operations per iteration, default 1)
- `NUM_DELETES` (number of DELETE operations per iteration, default 1; 0 to skip)
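As a starting point, these variables can be captured as overridable shell defaults at the top of the script. The defaults of 1 for updates and deletes come from the text above; the numeric defaults chosen here for `ROWS`, `ITERATIONS`, and `NUM_DBS` are illustrative placeholders only:

```shell
#!/bin/bash
# Overridable test parameters. ROWS/ITERATIONS/NUM_DBS defaults are
# illustrative placeholders; NUM_UPDATES/NUM_DELETES default to 1 as stated.
ROWS="${ROWS:-100}"              # rows per iteration (placeholder)
ITERATIONS="${ITERATIONS:-5}"    # delete/insert/update cycles (placeholder)
NUM_DBS="${NUM_DBS:-4}"          # concurrent databases (placeholder)
NUM_UPDATES="${NUM_UPDATES:-1}"  # UPDATE operations per iteration
NUM_DELETES="${NUM_DELETES:-1}"  # DELETE operations per iteration (0 skips deletes)
echo "ROWS=$ROWS ITERATIONS=$ITERATIONS NUM_DBS=$NUM_DBS NUM_UPDATES=$NUM_UPDATES NUM_DELETES=$NUM_DELETES"
```

The `${VAR:-default}` form lets the user override any value via the environment without editing the script.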
### Step 2: Setup SQLiteCloud Database and Table
@@ -106,15 +112,15 @@ Create a bash script at `/tmp/stress_test_concurrent.sh` that:
2. **Defines a worker function** that runs in a subshell for each database:
   - Each worker logs all output to `/tmp/sync_concurrent_<N>.log`
   - Each iteration does:
     a. **UPDATE** — run `UPDATE <table> SET value = value + 1;` repeated `NUM_UPDATES` times (skip if 0)
     b. **DELETE** — run `DELETE FROM <table> WHERE rowid IN (SELECT rowid FROM <table> ORDER BY RANDOM() LIMIT 10);` repeated `NUM_DELETES` times (skip if 0)
     c. **Sync using the 3-step send/check/check pattern:**
        1. `SELECT cloudsync_network_send_changes();` — send local changes to the server
        2. `SELECT cloudsync_network_check_changes();` — ask the server to prepare a payload of remote changes
        3. Sleep 1 second (outside sqlite3, between two separate sqlite3 invocations)
        4. `SELECT cloudsync_network_check_changes();` — download the prepared payload, if any
   - Each sqlite3 session must: `.load` the extension, call `cloudsync_network_init()`/`cloudsync_network_init_custom()`, `cloudsync_network_set_apikey()`/`cloudsync_network_set_token()` (depending on RLS mode), do the work, call `cloudsync_terminate()`
   - **Timing**: Log the wall-clock execution time (in milliseconds) for each `cloudsync_network_send_changes()` and `cloudsync_network_check_changes()` call. Define a `now_ms()` helper function at the top of the script and use it before and after each sqlite3 invocation that calls a network function, computing the delta. On **macOS**, `date` does not support `%3N` (a GNU extension for sub-second precision) — use `python3 -c 'import time; print(int(time.time()*1000))'` instead. On **Linux**, `date +%s%3N` works fine. The script should detect the platform and define `now_ms()` accordingly. Log lines like: `[DB<N>][iter <I>] send_changes: 123ms`, `[DB<N>][iter <I>] check_changes_1: 45ms`, `[DB<N>][iter <I>] check_changes_2: 67ms`
   - Include labeled output lines like `[DB<N>][iter <I>] updated count=<C>, deleted count=<D>` for grep-ability

3. **Launches all workers in parallel** using `&` and collects PIDs
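The timing requirement can be sketched as follows. The platform detection and log-line format come from the text above; the `time_call` wrapper and the `sleep` standing in for a sqlite3 invocation are illustrative:

```shell
#!/bin/bash
# Millisecond clock: GNU date supports %N, macOS (BSD) date does not,
# so fall back to python3 there, as described above.
if [ "$(uname)" = "Darwin" ]; then
  now_ms() { python3 -c 'import time; print(int(time.time()*1000))'; }
else
  now_ms() { date +%s%3N; }
fi

# Wrap any command and emit a log line in the required format.
time_call() {  # usage: time_call <label> <db_index> <iter> <command...>
  local label="$1" db="$2" iter="$3"; shift 3
  local start end
  start="$(now_ms)"
  "$@" >/dev/null 2>&1
  end="$(now_ms)"
  echo "[DB${db}][iter ${iter}] ${label}: $((end - start))ms"
}

# Stand-in for: sqlite3 "$DB_FILE" "... SELECT cloudsync_network_send_changes(); ..."
time_call send_changes 1 1 sleep 0.2
```

In the real worker, each of the three network calls gets its own `time_call` invocation so the send and the two check phases are timed separately.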
@@ -151,21 +157,29 @@ After the test completes, provide a detailed breakdown:
After all workers have terminated, perform a **final sync on every local database** to ensure all databases converge to the same state. Then verify data integrity.

**IMPORTANT — RLS mode changes what "convergence" means:** When RLS is enabled, each user can only see their own rows. Databases belonging to different users will have different row counts and different data — this is correct behavior. All convergence and integrity checks must therefore be scoped **per user group** (i.e., only compare databases that share the same userId/token).

1. **Final sync loop** (max 10 retries): Repeat the following until convergence is achieved within each user group, or the retry limit is reached:
   a. For each local database (sequentially):
      - Load the extension, call `cloudsync_network_init`/`cloudsync_network_init_custom`, authenticate with `cloudsync_network_set_apikey`/`cloudsync_network_set_token`
      - Run `SELECT cloudsync_network_sync(100, 10);` to sync remaining changes
      - Call `cloudsync_terminate()`
   b. After syncing all databases, query `SELECT COUNT(*) FROM <table>` on each database
   c. **If RLS is disabled:** check that all databases have the same row count. If so, convergence is achieved — break.
   d. **If RLS is enabled:** group databases by userId. Within each user group, check that all databases have the same row count. Convergence is achieved when every user group is internally consistent — break. Different user groups are expected to have different row counts.
   e. Otherwise, log the round number and the distinct row counts (per group if RLS), then repeat from (a)
   f. If the retry limit is reached without convergence, report it as a failure
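The group comparison in steps (c) and (d) reduces to "exactly one distinct row count per group". A minimal sketch, where `users` and `counts` are hypothetical arrays of previously collected state (database i's userId and row count):

```shell
#!/bin/bash
# Per-group convergence check: a group converges when it has exactly one
# distinct row count. users/counts are hypothetical collected state;
# without RLS, put every database in a single group.
users=(alice alice bob bob)
counts=(120 120 87 87)

converged=1
for u in $(printf '%s\n' "${users[@]}" | sort -u); do
  distinct=$(for i in "${!users[@]}"; do
    [ "${users[$i]}" = "$u" ] && echo "${counts[$i]}"
  done | sort -u | wc -l)
  if [ "$distinct" -ne 1 ]; then
    echo "group $u: $distinct distinct row counts, not converged"
    converged=0
  fi
done
[ "$converged" -eq 1 ] && echo "all user groups converged"
```

With RLS disabled, filling `users` with a single dummy value for every database reduces this to the one global row-count comparison.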

2. **Row count verification**:
   - **If RLS is disabled:** report the final row counts. All databases should have the same number of rows.
   - **If RLS is enabled:** report row counts grouped by user. All databases within the same user group should have identical row counts; different user groups may differ. Also verify that each database only contains rows matching its userId.
   - In both cases, also check SQLiteCloud (as admin) for the total row count.

3. **Row content verification**:
   - **If RLS is disabled:** pick one random row ID from the first database (`SELECT id FROM <table> ORDER BY RANDOM() LIMIT 1;`). Query that row (`SELECT id, user_id, name, value FROM <table> WHERE id = '<random_id>';`) on every local database. All must return identical values.
   - **If RLS is enabled:** for each user group, pick one random row ID from the first database in that group. Query that row on all databases in the same user group. All databases in the group must return identical values. Do NOT expect databases from other user groups to have this row — they should return empty (RLS blocks cross-user access).

4. **RLS cross-user leak check** (RLS mode only): For a sample of databases (e.g., one per user group), verify that `SELECT COUNT(*) FROM <table> WHERE user_id != '<expected_user_id>'` returns 0. Report any cross-user data leakage as a test failure.
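The leak-check query shape can be exercised against a throwaway local database, assuming the `sqlite3` CLI is installed; the table name, rows, and userId here are illustrative (no cloudsync extension is needed for the sketch):

```shell
#!/bin/bash
# Cross-user leak check demonstrated on a throwaway local database using
# the default schema. In the real test, run the query against each sampled
# worker database with that worker's userId.
db="$(mktemp /tmp/leakcheck.XXXXXX)"
sqlite3 "$db" <<'SQL'
CREATE TABLE test_sync (id TEXT PRIMARY KEY, user_id TEXT NOT NULL DEFAULT '', name TEXT, value INTEGER);
INSERT INTO test_sync VALUES ('r1', 'alice', 'n1', 1);
INSERT INTO test_sync VALUES ('r2', 'alice', 'n2', 2);
SQL

leaked="$(sqlite3 "$db" "SELECT COUNT(*) FROM test_sync WHERE user_id != 'alice';")"
if [ "$leaked" -eq 0 ]; then
  echo "no cross-user leakage"
else
  echo "LEAK: $leaked rows belong to other users"
fi
rm -f "$db"
```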
## Output Format
@@ -194,15 +208,21 @@ If errors are found, include:
The test **PASSES** if:

1. All workers complete all iterations
2. Zero `error`, `locked`, `SQLITE_BUSY`, or HTTP 500 responses in any log
3. After the final sync, databases converge:
   - **Without RLS:** all local databases have the same row count
   - **With RLS:** all databases within each user group have the same row count (different user groups may differ)
4. Row content is consistent:
   - **Without RLS:** a randomly selected row has identical content across all local databases
   - **With RLS:** a randomly selected row has identical content across all databases in the same user group; databases from other user groups correctly return empty for that row
5. **With RLS:** no cross-user data leakage (each database contains only rows matching its userId)

The test **FAILS** if:

1. Any worker crashes or fails to complete
2. Any `database is locked` or `SQLITE_BUSY` errors appear
3. Server returns 500 errors under concurrent load
4. Row counts differ within the comparison scope (all DBs without RLS, same-user DBs with RLS) after the final sync loop exhausts all retries
5. Row content differs within the comparison scope (data corruption)
6. **With RLS:** any database contains rows belonging to a different userId (cross-user data leakage)
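The log-based criteria above amount to a grep over the worker logs. A self-contained sketch, using a sample log in place of the real `/tmp/sync_concurrent_*.log` files; the pattern is deliberately coarse and an assumption (e.g., it would also flag a stray "500" elsewhere on a line):

```shell
#!/bin/bash
# Scan worker logs for the failure signatures from the criteria above.
# A sample log keeps the sketch self-contained; real runs would iterate
# over /tmp/sync_concurrent_*.log instead.
log="/tmp/sync_concurrent_demo.log"
printf '%s\n' \
  '[DB1][iter 1] send_changes: 123ms' \
  '[DB1][iter 1] updated count=100, deleted count=10' > "$log"

# Coarse pattern: errors, lock contention, and HTTP 500 responses.
failures="$(grep -cEi 'error|database is locked|SQLITE_BUSY|HTTP.*500' "$log" || true)"
if [ "$failures" -eq 0 ]; then
  echo "log scan: PASS"
else
  echo "log scan: FAIL ($failures suspicious lines)"
fi
rm -f "$log"
```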