Quereus is a lightweight, TypeScript-native SQL engine inspired by SQLite, with a focus on in-memory data processing and extensibility via the virtual table (VTab) interface. It supports a rich subset of SQL for querying, manipulating, and joining data from virtual tables, with async operations and modern JavaScript/TypeScript idioms. Quereus is designed for use in Node.js, browsers, and other JS environments, and does not provide persistent file storage by default.
🚨 IMPORTANT: Key Departure from SQL Standard
Quereus sympathises with The Third Manifesto principles and defaults columns to NOT NULL unless explicitly specified otherwise. This is contrary to standard SQL where columns are nullable by default. This behavior can be controlled via the default_column_nullability pragma:
- Default behavior: `pragma default_column_nullability = 'not_null'`
- SQL standard behavior: `pragma default_column_nullability = 'nullable'`
This design choice helps avoid the "billion-dollar mistake" of NULL by default while still allowing NULLs when explicitly needed.
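For example (a sketch; table and column names are illustrative, and the explicit `null` column marker follows the engine's nullability convention):

```sql
-- Under the default ('not_null'), columns reject NULL unless marked otherwise
create table people (name text);          -- name is implicitly NOT NULL
insert into people (name) values (null);  -- fails: NOT NULL constraint

-- Opt in to nullability per column...
create table notes (body text null);      -- body allows NULL
-- ...or session-wide, restoring SQL-standard behavior
pragma default_column_nullability = 'nullable';
```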
Key features:
- Virtual Table Centric: All data access is via virtual tables, which can be backed by memory, JSON, or custom sources.
- In-Memory Focus: No built-in file storage; all tables are transient unless a VTab module provides persistence.
- Rich SQL Subset: Supports select, insert, update, delete, CTEs, joins, aggregates, subqueries, and more.
- Extensible: Register custom functions, collations, and virtual table modules.
- Asynchronous: Database operations are async/await compatible, allowing non-blocking I/O.
- Third Manifesto Aligned: Embraces principles like default NOT NULL columns and key-based addressing.
Quereus keeps traditional DDL fully intact. Declarative schema is an optional alternative for describing the desired end‑state in a single, order‑independent block. Modules continue to use DDL‑based interfaces; declarative workflows operate entirely in the engine and produce DDL.
Concepts:
- Schema: Named logical grouping of objects; may span multiple modules.
- Catalog: The set of objects owned by a module; may span multiple schemas.
- Diff: JSON representation of changes needed to align actual state with declared schema.
- Apply: Automatic execution of migration DDL statements.
Key Statements:
- `declare schema` – Describes desired end‑state and stores declaration with optional seed data.
- `diff schema` – Compares declared schema with current state and returns JSON diff.
- `apply schema` – Executes the generated migration DDL, optionally applying seed data.
- `explain schema` – Returns the schema content hash for versioning.
declare schema schema_name
[version 'major.minor.patch']
[using (default_vtab_module = 'memory', default_vtab_args = '{}')]
{
-- Tables: use {...} or (...) for column definitions
table users {
id integer primary key,
email text not null unique,
name text not null,
created_at text not null default (datetime('now'))
}
-- Or with explicit USING clause
table sessions (
id text primary key,
user_id integer not null,
expires_at integer
) using memory;
table roles {
id integer primary key,
name text not null unique
}
table user_roles (
user_id integer not null,
role_id integer not null,
constraint pk_user_roles primary key (user_id, role_id),
constraint fk_user foreign key (user_id) references users(id),
constraint fk_role foreign key (role_id) references roles(id)
);
-- Indexes
index users_email on users(email);
-- Views
view v_user_roles as
select u.id as user_id, u.email, r.name as role
from users u join user_roles ur on u.id = ur.user_id
join roles r on ur.role_id = r.id;
-- Seed data: ( (row1_values), (row2_values), ... )
seed roles (
(1, 'admin'),
(2, 'viewer')
)
-- Or with explicit column names
seed users (id, email, name) values
(1, 'admin@example.com', 'Admin'),
(2, 'viewer@example.com', 'Viewer');
-- Assertions: enforced at commit time
assertion positive_balance check (not exists (select 1 from users where balance < 0))
-- Future: domains, collations, and imports
-- domain email_address as text check (like(value, '%@%'));
-- collation nocase = nocase();
-- import schema auth from 'https://example.com/auth-schema.sql' cache 'auth@1' version '^2';
}
-- Get migration DDL as result rows (one DDL statement per row)
diff schema main;
-- Returns rows like:
-- {"ddl": "create table users (...)"}
-- {"ddl": "drop table old_table"}
-- Returns no rows if schema is already aligned
-- Execute DDL yourself with custom migration logic
-- TypeScript example:
-- for await (const {ddl} of db.eval('diff schema main')) {
-- console.log('Executing:', ddl);
-- await db.exec(ddl);
-- // Insert custom backfill/transform logic here
-- }
-- Or use apply to execute automatically (no result rows)
apply schema main;
-- Apply with seed data (clears and repopulates)
apply schema main with seed;
-- Get schema hash for versioning
explain schema main;
-- Returns: {"info": "hash:a1b2c3d4e5f6"}
-- Future: versioned apply with options
apply schema main to version '1.0.0' options (
dry_run = false,
validate_only = false,
allow_destructive = false,
rename_policy = 'require-hint'
);
Order Independence:
- Tables, indexes, and views can be declared in any order within the `{...}` block.
- Forward references are allowed (e.g., foreign keys to tables declared later).
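A minimal sketch of order independence (hypothetical table names): the index and the foreign key may reference tables that appear later in the block:

```sql
declare schema demo {
  -- Index declared before its table; names resolve after the whole block is read
  index orders_customer on orders(customer_id);

  table orders {
    id integer primary key,
    customer_id integer not null,
    -- Forward reference to a table declared below
    constraint fk_customer foreign key (customer_id) references customers(id)
  }

  table customers {
    id integer primary key,
    name text not null
  }
}
```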
Flexible Syntax:
- Column definitions accept brace syntax `{...}` or traditional parentheses `(...)`.
- Identifiers are only quoted when they are reserved keywords or contain special characters.
Schema Diffing:
- Compares the declared schema against the current database catalog.
- Generates a JSON diff showing tables/views/indexes to create, drop, or alter.
- Produces canonical DDL statements for all required changes.
Migration Application:
- `apply schema` executes the migration DDL automatically.
- Migrations are applied in safe order: drops first, then creates, then alters.
- Seed data application: `with seed` clears existing data and inserts declared seed rows.
Versioning and Hashing:
- Schema declarations can include semantic versions.
- `explain schema` computes a SHA-256 hash of the canonical schema representation.
- Enables tracking schema changes and ensuring consistency across environments.
Safety:
- Seed data application is destructive (clears table before inserting).
- Future enhancements will add `allow_destructive` gating for schema changes.
- Rename hints and stable IDs (planned) will prevent accidental drops during renames.
Notes:
- Keywords `schema`, `version`, and `seed` are contextual and don't conflict with column names or function calls like `schema()`.
- DDL remains the primary interface; declarative schema is a convenience layer that generates DDL.
- Modules are unaware of declarative schemas; they receive standard DDL commands.
The select statement retrieves data from one or more tables or views.
Syntax:
[ with [recursive] with_clause[,...] ]
select [distinct | all] select_expr [, select_expr ...]
[ from table_reference [, table_reference...] ]
[ where condition ]
[ group by expr [, expr...] ]
[ having condition ]
[ order by expr [asc | desc] [, expr [asc | desc]...] ]
[ limit count [offset skip] | limit skip, count ]
[ union [all] select_statement ]
| [ intersect select_statement ]
| [ except select_statement ]
| [ diff select_statement ]
[ with schema schema_name [, schema_name...] ]
Options:
- `with` clause: Common Table Expressions (CTEs) for temporary named result sets
- `distinct`: Removes duplicate rows from the result set
- `all`: Includes all rows (default behavior)
- `select_expr`: Column expressions to be returned; `*` for all columns
- `from`: Tables, views, or subqueries to retrieve data from
- `where`: Filters rows based on a condition
- `group by`: Groups rows that have the same values
- `having`: Filters groups based on a condition
- `order by`: Sorts the result set
- `limit`/`offset`: Restricts the number of rows returned
- `union`/`intersect`/`except`/`diff`: Set operations combining two result sets
- `with schema`: Specifies an ordered search path for resolving unqualified table names (see section 2.1.1)
Set operations:
- `union all`: Concatenation (bag semantics)
- `union`: Union with deduplication (set semantics)
- `intersect`: Common rows (set semantics)
- `except`: Rows in left not in right (set semantics)
- `diff`: Symmetric difference = (A except B) union (B except A) (set semantics)
Note: Chained set operations are right-associative: `A except B union C` evaluates as `A except (B union C)`, not `(A except B) union C`. Use CTEs or subqueries to force left-to-right evaluation when needed.
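To force left-to-right grouping, name the first operation in a CTE (a sketch reusing the hypothetical set_a/set_b tables from the examples below, plus a set_c):

```sql
-- Right-associative default: evaluates as A except (B union C)
select value from set_a
except
select value from set_b
union
select value from set_c;

-- Force (A except B) union C with a CTE
with a_minus_b as (
  select value from set_a
  except
  select value from set_b
)
select value from a_minus_b
union
select value from set_c;
```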
Examples:
-- Basic select with where clause
select id, name, age from users where age > 21;
-- Select with join
select u.name, o.product
from users as u
inner join orders as o on u.id = o.userId
where o.status = 'shipped';
-- Group by with aggregates
select department, count(*) as employeeCount, avg(salary) as avgSalary
from employees
group by department
having count(*) > 5
order by avgSalary desc;
-- With CTE and union
with active_users as (
select * from users where status = 'active'
)
select name, email from active_users where age < 30
union all
select name, email from premium_users where subscriptionStatus = 'paid';
-- Symmetric difference (DIFF)
select value from set_a
diff
select value from set_b
order by value;
-- Table equality check using DIFF
select not exists(
select * from (
select * from a
diff
select * from b
)
) as tables_equal;
-- Query with explicit schema search path
select * from users, orders
with schema sales, main;

Quereus supports flexible schema resolution through search paths. Unqualified table names are resolved by searching schemas in a specified order.
Resolution Hierarchy:
1. Qualified names (`schema.table`) - Always used exactly as specified
2. WITH SCHEMA clause - Per-query explicit search path
3. PRAGMA schema_path - Session-level default search path
4. Default schema - Typically `main`
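A session-level default might be set like this (a sketch; the exact assignment syntax for this pragma is an assumption based on the pragma name above):

```sql
-- Session-wide search path; a per-query WITH SCHEMA clause still overrides it
pragma schema_path = 'sales, main';
select * from orders;  -- resolved by searching sales, then main
```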
WITH SCHEMA Syntax:
SELECT ... FROM table1, table2
WITH SCHEMA schema1, schema2, schema3;

The WITH SCHEMA clause specifies an ordered list of schemas to search when resolving unqualified table names. The first schema containing a matching table is used.
Examples:
-- Explicitly search sales schema, then main
SELECT * FROM orders, customers
WITH SCHEMA sales, main;
-- If 'orders' exists in 'sales', uses sales.orders
-- If 'customers' only exists in 'main', uses main.customers
-- Works with CTEs
-- Note: WITH SCHEMA applies only to the outer query.
-- The CTE (recent_orders) uses the connection/database default schema path.
WITH recent_orders AS (
SELECT * FROM orders WHERE date > date('now', '-7 days')
)
SELECT * FROM recent_orders
WITH SCHEMA sales, archive, main;
-- To apply schema path to the CTE query itself, use a nested WITH SCHEMA:
WITH recent_orders AS (
SELECT * FROM orders WHERE date > date('now', '-7 days')
WITH SCHEMA sales, archive
)
SELECT * FROM recent_orders;
-- DML operations also support WITH SCHEMA
UPDATE inventory SET quantity = quantity - 1
WHERE sku = 'ABC123'
WITH SCHEMA warehouse, main;
INSERT INTO logs (message) VALUES ('Order processed')
WITH SCHEMA audit, main
RETURNING id;
DELETE FROM temp_data WHERE expired = 1
WITH SCHEMA workspace, main;

Error Messages:
When a table is not found, Quereus provides helpful diagnostics:
-- Table not in search path
SELECT * FROM products WITH SCHEMA sales, finance;
-- Error: Table 'products' not found in schema path: sales, finance
-- Did you mean 'main.products'?
-- Or add 'main' to your schema path?
-- Table doesn't exist anywhere
SELECT * FROM nonexistent WITH SCHEMA main, sales;
-- Error: Table 'nonexistent' not found in schema path: main, sales

Best Practices:
- Use qualified names (`schema.table`) when you need precision
- Use `WITH SCHEMA` for cross-schema queries without qualification
- Set `PRAGMA schema_path` for session-wide defaults
- Default to `main` schema for simple, single-schema applications
Order Independence:
The WITH CONTEXT and WITH SCHEMA clauses can appear in any order:
-- Both are valid:
INSERT INTO table VALUES (...) WITH CONTEXT (x = 1) WITH SCHEMA sales;
INSERT INTO table VALUES (...) WITH SCHEMA sales WITH CONTEXT (x = 1);
UPDATE table SET col = val WITH SCHEMA main WITH CONTEXT (x = 1);
DELETE FROM table WHERE id = 1 WITH CONTEXT (x = 1) WITH SCHEMA main;

The insert statement adds new rows to a table.
Syntax:
[ with [recursive] with_clause[,...] ]
insert [or conflict_resolution] into table_name [(column [, column...])]
{ values (expr [, expr...]) [, (expr [, expr...])]... | select_statement }
[ upsert_clause ]
[ with context (variable = expr [, ...]) ]
[ with schema schema_name [, schema_name...] ]
[ returning [qualifier.]expr [, [qualifier.]expr...] ]
conflict_resolution:
rollback | abort | fail | ignore | replace
upsert_clause:
on conflict [ (column [, column ...]) ] do nothing
| on conflict [ (column [, column ...]) ] do update set assignment [, assignment ...] [ where condition ]
assignment:
column = expression

Options:
- `with` clause: Common Table Expressions for use in the insert
- `or conflict_resolution`: Specifies how to handle constraint conflicts (see Conflict Resolution below)
- `table_name`: Target table for the insertion
- `column`: Optional list of columns to insert into
- `values`: A list of value sets to insert
- `select_statement`: A select query whose results are inserted
- `upsert_clause`: Specifies how to handle conflicts with fine-grained control (see UPSERT below)
- `with context`: Provides table-level parameters for defaults and constraints (see section 2.6.2)
- `with schema`: Specifies schema search path for resolving table names (see section 2.1.1)
- `returning`: Returns specified expressions from the inserted rows (supports NEW qualifier)
Conflict Resolution (OR clause):
When inserting a row that would violate a UNIQUE constraint (including PRIMARY KEY), the OR clause specifies how to handle the conflict:
- `OR ROLLBACK`: Abort the current transaction and rollback all changes
- `OR ABORT`: Abort the current statement and rollback its changes (default behavior)
- `OR FAIL`: Abort the current statement but do not rollback prior changes in the transaction
- `OR IGNORE`: Silently skip the row that would cause a conflict
- `OR REPLACE`: Delete the existing row that conflicts and insert the new row (destructive—loses unspecified column values)
Note: The OR clause and ON CONFLICT clause are mutually exclusive. Use OR REPLACE for simple full-row replacement, and ON CONFLICT DO UPDATE for surgical column-level updates.
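The difference between OR ABORT and OR FAIL shows up in multi-row inserts (a sketch, assuming a `tags` table with a unique `name` column):

```sql
-- OR ABORT (default): the duplicate aborts the statement and backs out
-- the rows this statement already inserted
insert into tags (name) values ('a'), ('a'), ('b');
-- no rows from this statement remain

-- OR FAIL: stops at the duplicate but keeps rows inserted before it
insert or fail into tags (name) values ('c'), ('c'), ('d');
-- 'c' remains; 'd' is never attempted
```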
The ON CONFLICT clause provides fine-grained control over conflict handling, allowing you to update specific columns rather than replacing the entire row.
Syntax:
insert into table_name (columns) values (...)
on conflict [ (conflict_columns) ] do nothing | do update set assignments [ where condition ]

Conflict Target:
- `ON CONFLICT (col1, col2, ...)` — Specifies which unique constraint to match. The columns must correspond to a PRIMARY KEY or UNIQUE constraint.
- `ON CONFLICT` (without columns) — Matches any unique constraint violation.
Actions:
- `DO NOTHING` — Silently skip the conflicting row (equivalent to `INSERT OR IGNORE`)
- `DO UPDATE SET col = expr, ...` — Update specific columns on the existing row
Referencing Values:
- `NEW.column` or `excluded.column` — References the value that was proposed for insertion (PostgreSQL compatibility via `excluded`)
- `column` or `table.column` — References the current value in the existing row
- The `WHERE` clause can use both to conditionally apply updates
Key Differences from OR REPLACE:
| Feature | INSERT OR REPLACE | ON CONFLICT DO UPDATE |
|---|---|---|
| Behavior | Deletes existing row, inserts new | Updates existing row in place |
| Unspecified columns | Lost (reset to defaults) | Preserved |
| Conditional update | Not supported | Supported via WHERE |
| Column-level control | No | Yes |
| Triggers | DELETE + INSERT | UPDATE |
Examples:
-- Basic insert with explicit columns
insert into users (name, email, age) values ('John', 'john@example.com', 35);
-- Multiple rows insert
insert into products (name, price, category)
values
('Keyboard', 49.99, 'Electronics'),
('Mouse', 29.99, 'Electronics'),
('Headphones', 99.99, 'Audio');
-- Insert from select
insert into active_users (id, name, email)
select id, name, email from users where last_login > date('now', '-30 days');
-- INSERT with RETURNING clause
insert into users (name, email)
values ('Alice', 'alice@example.com')
returning id, name, datetime('now') as created_at;
-- INSERT with RETURNING used in a larger query
select 'User created: ' || new_user.name as message
from (
insert into users (name, email)
values ('Bob', 'bob@example.com')
returning name
) as new_user;
-- With CTE
with recent_orders as (
select * from orders where order_date > date('now', '-7 days')
)
insert into order_summary (order_id, customer, total)
select id, customer_name, sum(price * quantity)
from recent_orders
group by id, customer_name
returning order_id, total;
-- INSERT OR REPLACE (full row replacement)
insert or replace into users (id, name, email, updated_at)
values (1, 'Alice', 'alice@example.com', datetime('now'));
-- If a user with id=1 exists, it is DELETED and replaced; otherwise, a new row is inserted
-- WARNING: Any columns not in the insert list are reset to defaults!
-- INSERT OR IGNORE (skip conflicts)
insert or ignore into tags (name)
values ('javascript'), ('typescript'), ('javascript');
-- Only inserts 'javascript' and 'typescript' once, skipping the duplicate
-- UPSERT: Insert or update specific columns (preserves other columns)
insert into users (id, name, email)
values (1, 'Alice', 'alice@example.com')
on conflict (id) do update set
name = NEW.name,
email = NEW.email;
-- If id=1 exists, updates only name and email; other columns (like created_at) are preserved
-- UPSERT with increment pattern
insert into vocabulary (word, count)
values ('hello', 1)
on conflict (word) do update set
count = count + 1;
-- Inserts with count=1, or increments existing count
-- UPSERT with conditional update (only update if newer)
insert into documents (id, content, version)
values (100, 'new content', 5)
on conflict (id) do update set
content = NEW.content,
version = NEW.version
where NEW.version > version;
-- Only updates if the new version is greater than existing
-- UPSERT with DO NOTHING (same as INSERT OR IGNORE)
insert into tags (name)
values ('javascript'), ('typescript'), ('javascript')
on conflict (name) do nothing;
-- UPSERT on composite key
insert into user_roles (user_id, role_id, granted_at)
values (1, 2, datetime('now'))
on conflict (user_id, role_id) do update set
granted_at = NEW.granted_at;
-- Multiple ON CONFLICT clauses (evaluated in order)
insert into products (id, sku, name, price)
values (1, 'ABC123', 'Widget', 9.99)
on conflict (id) do update set name = NEW.name, price = NEW.price
on conflict (sku) do update set price = NEW.price;
-- First matching conflict target wins
-- UPSERT with RETURNING
insert into counters (key, value)
values ('page_views', 1)
on conflict (key) do update set value = value + 1
returning key, value,
case when value = NEW.value then 'inserted' else 'updated' end as action;

The update statement modifies existing rows in a table.
Syntax:
[ with [recursive] with_clause[,...] ]
update table_name
set column = expr [, column = expr...]
[ where condition ]
[ with context (variable = expr [, ...]) ]
[ with schema schema_name [, schema_name...] ]
[ returning [qualifier.]expr [, [qualifier.]expr...] ]

Options:
- `with` clause: Common Table Expressions for use in the update
- `table_name`: Table to be updated
- `set`: Column assignments with new values
- `where`: Optional condition to specify which rows to update
- `with context`: Provides table-level parameters for defaults and constraints (see section 2.6.2)
- `with schema`: Specifies schema search path for resolving table names (see section 2.1.1)
- `returning`: Returns specified expressions from the updated rows (supports OLD and NEW qualifiers)
Examples:
-- Simple update
update users set status = 'inactive' where last_login < date('now', '-90 days');
-- Multi-column update with RETURNING
update products
set price = price * 1.1,
updated_at = datetime('now')
where category = 'Electronics'
returning id, name, price, updated_at;
-- Update with OLD and NEW qualifiers
update employees
set salary = salary * 1.05
where performance_rating >= 4
returning id, OLD.salary as old_salary, NEW.salary as new_salary,
(NEW.salary - OLD.salary) as increase;
-- UPDATE with RETURNING clause
update users
set last_login = datetime('now')
where id = 42
returning id, name, last_login;
-- UPDATE with RETURNING used as table source
select 'Updated: ' || updated.name || ' to ' || updated.new_status as message
from (
update users
set status = 'premium', updated_at = datetime('now')
where subscription_type = 'paid'
returning name, status as new_status
) as updated;
-- Update with expression
update orders
set
total = (select sum(price * quantity) from order_items where order_id = orders.id),
status = case
when paid = 1 then 'completed'
else 'pending'
end
where order_date > date('now', '-30 days')
returning id, OLD.status, NEW.status, NEW.total;
-- With CTE
with discounted_items as (
select product_id, price * 0.8 as sale_price
from products
where category = 'Clearance'
)
update products
set price = di.sale_price
from discounted_items as di
where products.id = di.product_id
returning id, OLD.price as original_price, NEW.price as sale_price;

The delete statement removes rows from a table.
Syntax:
[ with [recursive] with_clause[,...] ]
delete from table_name
[ where condition ]
[ with context (variable = expr [, ...]) ]
[ with schema schema_name [, schema_name...] ]
[ returning [qualifier.]expr [, [qualifier.]expr...] ]

Options:
- `with` clause: Common Table Expressions for use in the delete
- `table_name`: Table to delete from
- `where`: Optional condition to specify which rows to delete
- `with context`: Provides table-level parameters for defaults and constraints (see section 2.6.2)
- `with schema`: Specifies schema search path for resolving table names (see section 2.1.1)
- `returning`: Returns specified expressions from the deleted rows (supports OLD qualifier)
Examples:
-- Simple delete
delete from users where status = 'deactivated';
-- DELETE with RETURNING clause
delete from users
where last_login < date('now', '-365 days')
returning id, name, email;
-- DELETE with RETURNING used for audit logging
insert into deleted_users_audit (user_id, name, deleted_at)
select deleted.id, deleted.name, datetime('now')
from (
delete from users
where status = 'spam'
returning id, name
) as deleted;
-- Delete with subquery
delete from products
where id in (
select product_id
from inventory
where stock = 0 and last_updated < date('now', '-180 days')
)
returning id, name, category;
-- With CTE
with old_orders as (
select id from orders where order_date < date('now', '-365 days')
)
delete from order_items
where order_id in (select id from old_orders);

The RETURNING clause allows you to retrieve values from rows that were inserted, updated, or deleted in a DML operation. Quereus supports NEW and OLD qualifiers to distinguish between original and modified values.
Syntax:
returning result_column [, result_column...]
result_column:
{ * | [qualifier.]column_name | expression } [ [ as ] alias ]
qualifier:
{ NEW | OLD }

INSERT Operations:
- `NEW`: References the inserted values ✅
- `OLD`: Not allowed (will cause error) ❌
- Unqualified columns: Default to NEW values
UPDATE Operations:
- `NEW`: References the updated values ✅
- `OLD`: References the original values before update ✅
- Unqualified columns: Default to NEW values
DELETE Operations:
- `OLD`: References the deleted values ✅
- `NEW`: Not allowed (will cause error) ❌
- Unqualified columns: Default to OLD values
Audit Trail with UPDATE:
-- Track all changes for audit purposes
update customer_profiles
set email = 'new.email@example.com', phone = '555-0123'
where customer_id = 42
returning
customer_id,
OLD.email as old_email,
NEW.email as new_email,
OLD.phone as old_phone,
NEW.phone as new_phone,
datetime('now') as changed_at;

Calculating Differences:
-- Calculate price changes
update inventory
set quantity = quantity - 5, last_updated = datetime('now')
where product_id in (101, 102, 103)
returning
product_id,
OLD.quantity as stock_before,
NEW.quantity as stock_after,
OLD.quantity - NEW.quantity as items_sold,
NEW.last_updated;

Conditional RETURNING with CASE:
-- Conditional logic in RETURNING clause
update user_accounts
set login_attempts = login_attempts + 1
where username = 'user123'
returning
user_id,
username,
NEW.login_attempts,
case
when NEW.login_attempts >= 5 then 'LOCKED'
when NEW.login_attempts >= 3 then 'WARNING'
else 'NORMAL'
end as account_status,
case
when OLD.login_attempts < 3 and NEW.login_attempts >= 3 then 'Security alert triggered'
else 'Login attempt recorded'
end as message;

The create table statement defines a new table structure. Note that all tables are implicitly "without rowid".
Syntax:
create [temp | temporary] table [if not exists] table_name (
column_definition [, column_definition...]
[, table_constraint...]
)
[using module_name [(module_args...)]]
[with tags (key = value [, ...])]

Column Definition:
column_name [data_type] [column_constraint...] [with tags (key = value [, ...])]

Column Constraints:
[constraint name]
{ primary key [asc | desc] [conflict_clause] [autoincrement]
| not null [conflict_clause]
| unique [conflict_clause]
| check [on {insert | update | delete}[,...]] (expr)
| default value
| collate collation_name
| references foreign_table [(column[,...])] [ref_actions]
| generated always as (expr) [stored | virtual] }
[with tags (key = value [, ...])]

Table Constraints:
[constraint name]
{ primary key ([column [asc | desc][,...]]) [conflict_clause]
| unique (column[,...]) [conflict_clause]
| check [on {insert | update | delete}[,...]] (expr)
| foreign key (column[,...]) references foreign_table [(column[,...])] [ref_actions] }
[with tags (key = value [, ...])]

Conflict Clause:
on conflict { rollback | abort | fail | ignore | replace }

Options:
- If an empty key column list is provided, the table may have 0 or 1 rows.
- `temp`/`temporary`: Creates a temporary table
- `if not exists`: Creates the table only if it doesn't already exist
- `column_definition`: Defines a column with optional constraints
- `table_constraint`: Defines a table-level constraint
- `using module_name`: Specifies a virtual table module
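The empty-key form noted above allows at most one row, which suits singleton configuration tables (a sketch; table and column names are illustrative):

```sql
-- Empty primary key column list: the table can hold 0 or 1 rows
create table app_config (
  theme text not null,
  max_sessions integer not null,
  primary key ()
) using memory;

insert into app_config (theme, max_sessions) values ('dark', 10);
-- A second insert would conflict with the (empty) key of the existing row
```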
Examples:
-- Basic table with constraints
create table employees (
id integer primary key,
name text not null,
email text unique collate nocase,
department text default 'General',
salary real check (salary >= 0),
hire_date text,
manager_id integer references employees(id)
);
-- Table with composite key and multiple constraints
create table order_items (
order_id integer,
product_id integer,
quantity integer not null check on insert (quantity > 0),
price real not null check (price >= 0),
discount real default 0 check (discount >= 0 and discount <= 1),
primary key (order_id, product_id),
foreign key (order_id) references orders(id),
foreign key (product_id) references products(id)
);
-- Memory-backed virtual table
create table cache (
key text primary key,
value blob,
expires_at integer
) using memory;
-- Table with generated (computed) columns
create table products (
id integer primary key,
base_price integer not null,
tax_rate real not null default 0.1,
total_price real generated always as (base_price * (1 + tax_rate)) stored,
label text generated always as ('Product #' || id) virtual
);

Generated Columns:
Generated columns are computed from an expression over other columns in the same row:
- `STORED`: The value is computed at INSERT/UPDATE time and persisted. Reads return the stored value directly.
- `VIRTUAL`: Semantically computed on read (currently stored identically to STORED; storage optimization is planned).
- If neither `STORED` nor `VIRTUAL` is specified, `VIRTUAL` is the default.
- Generated column expressions must be deterministic and may only reference non-generated columns of the same table.
- Cannot have both `DEFAULT` and `GENERATED ALWAYS AS` on the same column.
- Cannot INSERT into or UPDATE a generated column directly.
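Two consequences of these rules, sketched against the products table from the generated-columns example above:

```sql
-- Writing to a generated column is rejected
insert into products (id, base_price, total_price)
values (1, 100, 999);  -- error: total_price is generated

-- DEFAULT and GENERATED ALWAYS AS are mutually exclusive
-- create table bad (
--   n integer default 0 generated always as (1) stored  -- error
-- );
```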
Quereus supports database-wide integrity assertions evaluated at COMMIT time.
Syntax:
create assertion assertion_name check (condition_expression);
drop assertion assertion_name;

Behavior:
- Assertions are enforced at COMMIT. Any row produced by the stored violation query indicates a violation and the COMMIT fails with a constraint error (transaction rolled back).
- The `check (expr)` is stored as a violation SQL: `select 1 where not (expr)`.
- Efficiency: The optimizer classifies each table reference instance in the violation query as row-specific (unique key fully covered) or global. If any changed base is global, the violation SQL runs once. Otherwise, for row-specific references, the engine executes per changed primary key using prepared parameters (`pk0`, `pk1`, ... for composite keys), early-exiting on the first violation.
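For instance, the stored violation query for a simple assertion looks like this (an illustrative sketch; the inventory table is hypothetical):

```sql
create assertion positive_stock
  check (not exists (select 1 from inventory where quantity < 0));

-- Stored violation SQL (runs at COMMIT; any returned row is a violation):
--   select 1 where not (not exists (select 1 from inventory where quantity < 0))
```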
Diagnostics:
- Use `explain_assertion(name)` to introspect classification and prepared parameterization.
Examples:
-- Global-style assertion (aggregate)
create table t2 (id integer primary key) using memory;
create assertion a_global check ((select count(*) from t2) = (select count(*) from t2));
select exists(
select 1 from explain_assertion('a_global') where classification = 'global'
) as ok;
-- Row-specific assertion: PK equality reduces to row-specific
create table t1 (id integer primary key) using memory;
create assertion a_row check (exists (select 1 from t1 where id = 1));
select prepared_pk_params from explain_assertion('a_row') where classification = 'row' limit 1;

Quereus supports table-level mutation context variables that provide per-operation parameters for default values and constraints. The primary use case is implementing application-specific security, rights management, and audit mechanisms using signatures, digests, and cryptographic verification.
Syntax:
create table table_name (
column_definitions...
) using module_name
with context (
variable_name data_type [null],
...
)DML Syntax:
insert into table_name [(columns...)]
with context variable = expression, ...
values (...) | select_statement
update table_name
with context variable = expression, ...
set column = value ...
delete from table_name
with context variable = expression, ...
where condition

Key Features:
- Context variables are declared in the table definition alongside columns
- Variables default to NOT NULL unless explicitly marked NULL
- Both unqualified (`varName`) and qualified (`context.varName`) references supported
- Context variables can be used in DEFAULT expressions and CHECK constraints
- Context values are evaluated once per statement, not per row
- Context is captured for deferred constraints and evaluated at COMMIT time
Examples:
Multi-Tenant Data Isolation:
-- Enforce tenant isolation at database level
create table tenant_records (
id integer primary key,
tenant_id text,
data text,
constraint tenant_check check (new.tenant_id = context.current_tenant_id)
) using memory
with context (
current_tenant_id text
);
-- Insert restricted to current tenant
insert into tenant_records (id, tenant_id, data)
with context current_tenant_id = 'tenant_abc'
values (1, 'tenant_abc', 'Private data'); -- Passes
-- Attempt to insert for different tenant fails
insert into tenant_records (id, tenant_id, data)
with context current_tenant_id = 'tenant_abc'
values (2, 'tenant_xyz', 'Data'); -- Fails: tenant mismatch

Audit Trail with Actor Tracking:
-- Audit log with actor identity
create table audit_log (
id integer primary key,
action text,
user_id text default actor_id,
timestamp text default datetime('now')
) using memory
with context (
actor_id text
);
-- Log action with actor identity
insert into audit_log (id, action)
with context actor_id = 'user123'
values (1, 'DELETE_RECORD');

Permission Verification:
-- Prevent unauthorized modifications
create table user_profiles (
user_id integer primary key,
email text,
constraint update_auth check (
context.requester_id = old.user_id or context.is_admin = 1
)
) using memory
with context (
requester_id integer,
is_admin integer
);
-- User can update their own profile
update user_profiles
with context requester_id = 42, is_admin = 0
set email = 'newemail@example.com'
where user_id = 42; -- Passes: requester_id matches
Best Practices:
- Use mutation context for application-specific security and access control
- Implement signature verification, digest validation, and rights checking in constraints
- Store actor identity, timestamps, and cryptographic proofs in defaults
- Use qualified context.varName for clarity when variable names might conflict
- Mark optional context variables as NULL
- Combine with user-defined functions for custom verification logic
- Context is required when defaults or constraints reference context variables
Quereus supports arbitrary key-value metadata tags on schema objects via WITH TAGS. Tags are informational only -- the engine does not derive behavior from them. They do not affect schema hashing.
Syntax:
-- Table-level tags
create table Orders (
id integer primary key,
name text not null
) with tags (display_name = 'Customer Orders', audit = true);
-- Column-level tags
create table Products (
id integer primary key with tags (display_name = 'Product ID'),
name text not null with tags (searchable = true)
);
-- Constraint-level tags
create table Employees (
id integer primary key,
email text not null,
constraint uq_email unique (email) with tags (error_message = 'Email must be unique')
);
-- View and index tags
create view ActiveUsers as select * from Users where active = 1
with tags (cacheable = true);
create index idx_name on Products (name) with tags (purpose = 'search optimization');
Tag values can be strings, numbers, booleans (true/false), or null. Tag keys are identifiers. TAGS is a contextual keyword and can still be used as a regular identifier. WITH TAGS can appear alongside WITH CONTEXT in any order.
Tags are available on the schema interfaces (TableSchema.tags, ColumnSchema.tags, etc.) and via the programmatic API (SchemaManager.getTableTags(), SchemaManager.setTableTags()).
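Since tags carry no engine behavior, applications typically read them to drive behavior of their own. A minimal TypeScript sketch of how tag metadata might be modeled and consumed — the TagValue and TableSchemaLike shapes here are illustrative assumptions, not the actual Quereus typings:

```typescript
// Tag values mirror the SQL grammar: string, number, boolean, or null.
type TagValue = string | number | boolean | null;
type Tags = Record<string, TagValue>;

// Hypothetical minimal shape of a schema object carrying tags.
interface TableSchemaLike {
  name: string;
  tags: Tags;
}

// Example: drive a UI label from a display_name tag, falling back to the
// raw table name when the tag is absent or not a string.
function displayName(table: TableSchemaLike): string {
  const dn = table.tags["display_name"];
  return typeof dn === "string" ? dn : table.name;
}

const orders: TableSchemaLike = {
  name: "Orders",
  tags: { display_name: "Customer Orders", audit: true },
};
```

The same pattern applies to column- and constraint-level tags (e.g., surfacing a custom error_message when a unique constraint fails).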
Modifies an existing table's structure or name.
RENAME TABLE
ALTER TABLE old_name RENAME TO new_name;
Renames a table. The old name becomes invalid immediately. Fails if the new name already exists.
RENAME COLUMN
ALTER TABLE table_name RENAME COLUMN old_col TO new_col;
Renames a column. Data is preserved. Fails if the new name conflicts with an existing column or the old name doesn't exist.
ADD COLUMN
ALTER TABLE table_name ADD COLUMN col_name type [constraints];
Adds a new column to the table. Existing rows are backfilled with the column's DEFAULT value (or NULL if no default). Restrictions:
- Cannot add a PRIMARY KEY column.
- Cannot add a NOT NULL column without a DEFAULT if the table has existing rows.
DROP COLUMN
ALTER TABLE table_name DROP COLUMN col_name;
Removes a column from the table and all its data. Restrictions:
- Cannot drop a PRIMARY KEY column.
- Cannot drop the last remaining column.
ALTER PRIMARY KEY
ALTER TABLE table_name ALTER PRIMARY KEY (col_name [ASC|DESC] [, ...]);
Replaces the table's primary key definition. All named columns must have a NOT NULL constraint. The empty-PK case ALTER PRIMARY KEY () is permitted (the table reverts to an implicit rowid-style key). Modules that support re-keying in place handle the change directly; modules that cannot (including the built-in MemoryTable) use an automatic rebuild fallback that copies all rows into a new table with the updated PK and swaps it in place.
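The rebuild fallback can be pictured as copying every row into a fresh container keyed by the new primary key, then atomically swapping it in. A conceptual TypeScript sketch — SimpleTable, pkKey, and rebuildWithNewPk are illustrative names, not Quereus internals:

```typescript
type Row = Record<string, unknown>;

// Toy model of an in-memory table: rows keyed by serialized PK values.
interface SimpleTable {
  pk: string[];              // ordered primary key column names
  rows: Map<string, Row>;
}

function pkKey(row: Row, pk: string[]): string {
  return JSON.stringify(pk.map((c) => row[c]));
}

// Build a replacement table keyed by the new PK, then let the caller swap it in.
function rebuildWithNewPk(table: SimpleTable, newPk: string[]): SimpleTable {
  const rebuilt: SimpleTable = { pk: newPk, rows: new Map() };
  for (const row of table.rows.values()) {
    for (const col of newPk) {
      // Mirrors the NOT NULL requirement on PK columns.
      if (row[col] === null || row[col] === undefined) {
        throw new Error(`PK column ${col} must be NOT NULL`);
      }
    }
    const key = pkKey(row, newPk);
    if (rebuilt.rows.has(key)) throw new Error("duplicate key under new PK");
    rebuilt.rows.set(key, row);
  }
  return rebuilt; // caller replaces the old table with this one atomically
}
```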
The from clause specifies the data sources for a query.
Syntax:
from table_reference [, table_reference...]
table_reference:
table_name [as alias]
| function_name ([arg[,...]]) [as alias]
| (select_statement) as alias
| (mutating_statement) as alias
| table_reference join_type join table_reference [join_specification]
Mutating Statements: Quereus supports relational orthogonality - any statement that results in a relation can be used anywhere that expects a relation value. This includes:
- (INSERT ... RETURNING ...) AS alias
- (UPDATE ... RETURNING ...) AS alias
- (DELETE ... RETURNING ...) AS alias
This allows for powerful compositions where the results of data modifications can be immediately used in queries.
Join Types:
- [inner] join: Matches rows when the join condition is true
- left [outer] join: Includes all rows from the left table, plus matching rows from the right table
- right [outer] join: Includes all rows from the right table, plus matching rows from the left table
- full [outer] join: Includes all rows from both tables
- cross join: Cartesian product of both tables
Join Specifications:
- on condition: Join condition
- using (column[,...]): Join on equally named columns
Examples:
-- Multiple tables
select u.name, p.title
from users as u, posts as p
where u.id = p.user_id;
-- Inner join
select e.name, d.name as department
from employees as e
inner join departments as d on e.dept_id = d.id;
-- Left join
select c.name, o.order_date
from customers as c
left join orders as o on c.id = o.customer_id;
-- Using clause
select p.title, c.content
from posts as p
join comments as c using (post_id);
-- Multiple joins
select o.id, c.name, p.name as product
from orders as o
join customers as c on o.customer_id = c.id
join order_items as oi on o.id = oi.order_id
join products as p on oi.product_id = p.id;
-- Subquery in from
select avg_dept.department, avg_dept.avg_salary
from (
select department, avg(salary) as avg_salary
from employees
group by department
) as avg_dept
where avg_dept.avg_salary > 50000;
-- Mutating subquery: INSERT with RETURNING as table source
select new_user.id, new_user.name, 'created' as status
from (
insert into users (name, email)
values ('Alice', 'alice@example.com')
returning id, name
) as new_user;
-- Mutating subquery: UPDATE with RETURNING in JOIN
select u.name, updated.old_email, updated.new_email
from users u
join (
update user_profiles
set email = lower(email)
where email != lower(email)
returning user_id, email as old_email, lower(email) as new_email
) as updated on u.id = updated.user_id;
-- Mutating subquery: DELETE with RETURNING for audit trail
insert into audit_log (action, deleted_user_id, deleted_name)
select 'user_deleted', deleted.id, deleted.name
from (
delete from users
where last_login < date('now', '-365 days')
returning id, name
) as deleted;
-- Table-valued function
select key, value
from json_each('{"name":"John","age":30}');
The where clause filters rows returned by a query.
Syntax:
where condition
The condition is an expression that evaluates to a boolean result. If true, the row is included in the result set.
Examples:
-- Simple comparison
select * from products where price < 50;
-- Multiple conditions with AND/OR
select * from employees
where (department = 'Sales' or department = 'Marketing')
and hire_date >= date('2020-01-01');
-- Pattern matching with LIKE
select * from customers where email like '%@gmail.com';
-- Range check with BETWEEN
select * from orders where order_date between date('now', '-30 days') and date('now');
-- NULL checking
select * from users where last_login is null;
-- Subquery in WHERE
select * from products
where category_id in (select id from categories where parent_id = 5);
-- EXISTS subquery
select * from customers as c
where exists (
select 1 from orders as o
where o.customer_id = c.id and o.status = 'shipped'
);
The group by clause groups rows that have the same values into summary rows.
Syntax:
group by expression [, expression...]
Behavior:
- Each expression in the group by must be a column name, an expression, or a positive integer representing a position in the select list.
- Aggregate functions (count(), sum(), etc.) can be used with group by to calculate summary statistics for each group.
- Columns in the select list that are not aggregated must appear in the group by clause.
Examples:
-- Simple grouping
select department, count(*) as employee_count
from employees
group by department;
-- Multiple grouping expressions
select department, job_title, avg(salary) as avg_salary
from employees
group by department, job_title;
-- Grouping with expression
select
substr(email, instr(email, '@') + 1) as domain,
count(*) as user_count
from users
group by domain;
-- Grouping with DATE function
select
strftime('%Y-%m', order_date) as month,
sum(total) as monthly_sales
from orders
group by month
order by month;
The having clause filters groups based on a condition.
Syntax:
having condition
The condition is applied after grouping, allowing filtering on aggregate values.
Examples:
-- Filter groups with HAVING
select department, count(*) as employee_count
from employees
group by department
having employee_count > 10;
-- HAVING with aggregate function
select product_id, sum(quantity) as total_sold
from order_items
group by product_id
having total_sold > 100
order by total_sold desc;
-- HAVING with multiple conditions
select category, avg(price) as avg_price
from products
group by category
having avg_price > 50 and count(*) >= 5;
The order by clause sorts the result set.
Syntax:
order by expression [asc | desc] [nulls first | nulls last]
[, expression [asc | desc] [nulls first | nulls last] ...]
Options:
- asc: Ascending order (default)
- desc: Descending order
- nulls first: NULL values sort before non-NULL values
- nulls last: NULL values sort after non-NULL values
- The sort key can be a column name, alias, or expression
Examples:
-- Simple ordering
select * from products order by price;
-- Multiple sort keys
select * from employees
order by department asc, salary desc;
-- Ordering by expression
select name, price, quantity, price * quantity as total
from order_items
order by total desc;
-- Ordering with NULLS FIRST/LAST
select * from users
order by last_login desc nulls last;
The limit and offset clauses restrict the number of rows returned.
Syntax:
limit count [offset skip]
-- or
limit skip, count
Options:
- count: Maximum number of rows to return
- skip: Number of rows to skip before returning rows
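For page-based navigation, the offset is derived from the 1-based page number: page N skips (N - 1) * pageSize rows. A small TypeScript helper sketching that arithmetic (pageWindow is a hypothetical name, not a Quereus API):

```typescript
// Compute the LIMIT/OFFSET pair for a 1-based page number.
function pageWindow(page: number, pageSize: number): { limit: number; offset: number } {
  if (page < 1 || pageSize < 1) throw new Error("page and pageSize must be >= 1");
  return { limit: pageSize, offset: (page - 1) * pageSize };
}
```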
Examples:
-- Simple LIMIT
select * from products order by price limit 10;
-- LIMIT with OFFSET
select * from products order by price limit 10 offset 20;
-- Alternative syntax
select * from products order by price limit 20, 10;
-- Pagination example
select id, title, created_at
from posts
order by created_at desc
limit 20 offset (3 - 1) * 20; -- Page 3, 20 items per page
The WITH clause allows you to define temporary named result sets called Common Table Expressions (CTEs) that can be referenced within the main query. CTEs are particularly useful for creating readable, modular queries and implementing recursive operations.
Syntax:
with [recursive] cte_name [(column1, column2, ...)] as (
select_statement
) [, cte_name2 as (...)]
select ... from cte_name ...
Options:
- recursive: Enables recursive processing for the CTE
- column_name_list: Optional explicit column names for the CTE
- materialized / not materialized: Optimization hints; by default, results are cached if accessed more than once
Basic CTEs create temporary named views that exist only for the duration of the query:
Examples:
-- Simple CTE for code organization
with active_users as (
select id, name, email
from users
where status = 'active' and last_login > date('now', '-30 days')
)
select name, email
from active_users
where email like '%@company.com'
order by name;
-- Multiple CTEs
with
high_value_customers as (
select customer_id, sum(total) as lifetime_value
from orders
group by customer_id
having lifetime_value > 1000
),
recent_orders as (
select customer_id, order_id, total, order_date
from orders
where order_date > date('now', '-90 days')
)
select c.name, hvc.lifetime_value, ro.total, ro.order_date
from high_value_customers hvc
join customers c on hvc.customer_id = c.id
join recent_orders ro on hvc.customer_id = ro.customer_id
order by hvc.lifetime_value desc;
-- CTE with explicit column names
with sales_summary(dept, total_sales, avg_sale) as (
select department, sum(amount), avg(amount)
from sales
group by department
)
select * from sales_summary where avg_sale > 500;
Recursive CTEs enable hierarchical and iterative processing by allowing a CTE to reference itself. They follow the SQL:1999 standard semantics as defined in ISO/IEC 9075-2:2016, Section 7.14.
Structure: A recursive CTE must have the form:
with recursive cte_name as (
base_case_query -- Initial/seed query (non-recursive)
union [all]
recursive_case_query -- Query that references cte_name (recursive part)
)
Quereus automatically defers certain row-level CHECK constraints to COMMIT time to spare users from managing DEFERRABLE/SET CONSTRAINTS.
- Immediate: Constraints that only reference the current row (including OLD/NEW references) are validated during the DML statement.
- Auto-deferred: Constraints that reference other relations (e.g., contain subqueries) are validated at COMMIT using the same delta engine as global assertions. If any violation is found, COMMIT fails and the transaction is rolled back.
Example:
create table inventory (
loc text,
sku text,
qty integer check (qty >= 0),
constraint enough_stock check (
new.qty <= (select sum(s.qty) from inventory s where s.sku = new.sku)
)
);
qty >= 0 is checked immediately; enough_stock is auto-deferred and validated at COMMIT.
Standard Semantics (ISO SQL-2016 §7.14): According to the SQL standard, recursive CTE evaluation follows this algorithm:
- W₀ := result of base_case_query
- R := ∅ (empty result)
- repeat:
- R := R ∪ W₀ (accumulate final result)
- W₁ := result of recursive_case_query applied to W₀
- W₀ := W₁ ∖ R (working table = new rows only, for union)
- W₀ := W₁ (working table = all new rows, for union all)
- until W₀ = ∅
- return R
Key Points:
- The recursive term sees only the working table (rows from the previous iteration)
- It does not see the entire accumulated result
- This enables efficient semi-naïve evaluation with O(N) complexity instead of O(N²)
Implementation in Quereus: Quereus implements the semi-naïve (delta) evaluation algorithm:
- Maintains separate allRows (accumulated result) and delta (previous iteration result) sets
- Each iteration feeds only delta to the recursive term
- For union: deduplicates new rows against allRows
- For union all: appends all new rows directly
- Complexity is O(N) rather than the naive O(N²) approach
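The semi-naïve loop can be sketched in plain TypeScript. This is an illustrative model of the algorithm above, not the engine's implementation: rows are deduplicated via JSON.stringify as a stand-in for Quereus's B-Tree value comparison, and the base/recursive terms are supplied as functions.

```typescript
type Row = number[];

// Semi-naïve (delta) evaluation: the recursive term sees only the working
// table (delta), never the full accumulated result.
function evalRecursiveCte(
  base: Row[],
  step: (delta: Row[]) => Row[],   // recursive term applied to the working table
  unionAll: boolean,
): Row[] {
  const seen = new Set<string>();  // allRows membership, for UNION deduplication
  const all: Row[] = [];
  let delta: Row[] = [];

  const add = (rows: Row[], target: Row[]) => {
    for (const r of rows) {
      const k = JSON.stringify(r);
      if (unionAll || !seen.has(k)) {  // union: keep only genuinely new rows
        seen.add(k);
        all.push(r);
        target.push(r);
      }
    }
  };

  add(base, delta);                  // W0 := base case
  while (delta.length > 0) {         // until W0 = empty
    const next: Row[] = [];
    add(step(delta), next);          // W1 := step(W0); W0 := new rows
    delta = next;
  }
  return all;                        // R
}

// The "counter" example: select 1 union all select n + 1 ... where n < 5
const counter = evalRecursiveCte(
  [[1]],
  (delta) => delta.filter(([n]) => n < 5).map(([n]) => [n + 1]),
  true,
);
```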
Examples:
-- Simple counter (classic recursive CTE example)
with recursive counter(n) as (
select 1 -- Base case: start with 1
union all
select n + 1 -- Recursive case: increment by 1
from counter
where n < 5 -- Termination condition
)
select n from counter order by n;
-- Result: 1, 2, 3, 4, 5
-- Hierarchical organization chart
with recursive org_chart(employee_id, name, manager_id, level, path) as (
-- Base case: top-level managers (no manager)
select id, name, manager_id, 0, name
from employees
where manager_id is null
union all
-- Recursive case: find direct reports
select e.id, e.name, e.manager_id, oc.level + 1, oc.path || ' -> ' || e.name
from employees e
join org_chart oc on e.manager_id = oc.employee_id
where oc.level < 10 -- Prevent infinite recursion
)
select * from org_chart order by level, name;
-- Tree traversal with path tracking
with recursive tree_path(node_id, parent_id, path, depth) as (
-- Base case: root nodes
select id, parent_id, name, 0
from nodes
where parent_id is null
union all
-- Recursive case: children
select n.id, n.parent_id, tp.path || '/' || n.name, tp.depth + 1
from nodes n
join tree_path tp on n.parent_id = tp.node_id
where tp.depth < 20 -- Maximum depth limit
)
select node_id, path, depth from tree_path order by path;
-- Graph traversal (finding all paths)
with recursive paths(start_node, end_node, path, visited) as (
-- Base case: all edges as single-step paths
select from_node, to_node, from_node || '->' || to_node, from_node || ',' || to_node
from edges
union
-- Recursive case: extend paths
select p.start_node, e.to_node, p.path || '->' || e.to_node, p.visited || ',' || e.to_node
from paths p
join edges e on p.end_node = e.from_node
where p.visited not like '%,' || e.to_node || ',%' -- Avoid cycles
and length(p.path) < 100 -- Prevent runaway recursion
)
select distinct start_node, end_node, path
from paths
order by start_node, end_node, length(path);
Materialization Hints: While parsed, materialization hints are not currently enforced but may influence future optimizations:
with recursive
large_cte as materialized (select ...),
small_cte as not materialized (select ...)
select ...
Recursion Limits:
Use the option clause to control maximum recursion depth:
with recursive counter(n) as (
select 1
union all
select n + 1 from counter where n < 1000000
)
option (maxrecursion 10000) -- Limit to 10,000 iterations
select count(*) from counter;
Performance Characteristics:
- Non-recursive CTEs: Executed once, results may be cached
- Recursive CTEs: Semi-naïve evaluation with O(N) complexity
- Memory usage: Working table and result set kept in memory
- Deduplication: For union, uses a B-Tree with proper SQL value comparison
Hierarchical Data:
-- Employee reporting hierarchy
with recursive reporting_chain as (
select employee_id, manager_id, 1 as level
from employees where employee_id = ? -- Specific employee
union all
select e.employee_id, e.manager_id, rc.level + 1
from employees e
join reporting_chain rc on e.employee_id = rc.manager_id
)
select * from reporting_chain;
Series Generation:
-- Generate date series
with recursive date_series(dt) as (
select date('2024-01-01') -- Start date
union all
select date(dt, '+1 day')
from date_series
where dt < '2024-12-31' -- End date
)
select dt, strftime('%w', dt) as day_of_week from date_series;
Tree Operations:
-- Calculate subtree sizes
with recursive subtree_sizes(node_id, size) as (
-- Leaf nodes
select id, 1 from nodes where id not in (select distinct parent_id from nodes where parent_id is not null)
union all
-- Internal nodes
select n.id, 1 + sum(ss.size)
from nodes n
join subtree_sizes ss on n.id = ss.parent_id
group by n.id
)
select node_id, size from subtree_sizes;
Safety Considerations:
- Always include termination conditions to prevent infinite recursion
- Use depth/iteration limits as safeguards
- Consider cycle detection for graph traversal
- Monitor memory usage for large result sets
Numeric Literals:
- Integers: 123, -456
- Floating-point: 123.45, -67.89, 1.23e4
- Boolean: Represented as integers: 0 (false), 1 (true)
String Literals:
- Single-quoted: 'Text value'
- Double-quoted identifiers: "Column name with spaces"
Blob Literals:
- Hex format: x'53514C697465' (SQLite-style)
NULL:
- Represents a missing or unknown value: null
Examples:
select 42 as answer;
select 'Hello, world!' as greeting;
select x'DEADBEEF' as binary_data;
select null as no_value;
Arithmetic Operators:
- Addition: +
- Subtraction: -
- Multiplication: *
- Division: /
- Modulo (remainder): %
Comparison Operators:
- Equal: = or ==
- Not equal: != or <>
- Less than: <
- Greater than: >
- Less than or equal: <=
- Greater than or equal: >=
Logical Operators:
- AND: and
- OR: or
- XOR: xor
- NOT: not
Bitwise Operators:
- AND: &
- OR: |
- NOT: ~
- Left shift: <<
- Right shift: >>
String Operators:
- Concatenation: ||
JSON Path Operators:
- ->: Extract JSON value at path, returns JSON (syntactic sugar for json_extract())
- ->>: Extract JSON value at path, returns scalar TEXT (syntactic sugar for cast(json_extract() as text))
Path shorthand: expr -> 'name' is equivalent to expr -> '$.name'; expr -> 0 is equivalent to expr -> '$[0]'.
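As a toy model of this desugaring, the TypeScript below handles only simple '$.name' and '$[i]' paths; jsonExtract, arrow, and doubleArrow are illustrative stand-ins, not the engine's json_extract implementation.

```typescript
// Minimal JSONPath walker for '$.a.b' and '$[i]' steps only.
function jsonExtract(json: string, path: string): unknown {
  let value: any = JSON.parse(json);
  // Split '$.a.b[0]' into steps like '.a', '.b', '[0]'.
  const steps = path.replace(/^\$/, "").match(/\.[^.\[]+|\[\d+\]/g) ?? [];
  for (const s of steps) {
    value = s.startsWith("[")
      ? value?.[Number(s.slice(1, -1))]
      : value?.[s.slice(1)];
  }
  return value;
}

// expr -> key desugars to json_extract(expr, path), result rendered as JSON.
function arrow(json: string, key: string | number): string {
  const path = typeof key === "number" ? `$[${key}]` : `$.${key}`;
  return JSON.stringify(jsonExtract(json, path));
}

// expr ->> key additionally casts the extracted value to text.
function doubleArrow(json: string, key: string | number): string {
  const path = typeof key === "number" ? `$[${key}]` : `$.${key}`;
  return String(jsonExtract(json, path));
}
```

Note how the path shorthand is applied: a string key becomes '$.name', a number becomes '$[0]'.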
Other Operators:
- is: Tests if values are identical (including NULL)
- is not: Tests if values are not identical
- in: Tests if a value is in a set
- not in: Tests if a value is not in a set
- like: Pattern matching with wildcards
- glob: Pattern matching with Unix wildcards
- between: Tests if a value is within a range
- exists: Tests if a subquery returns any rows
- case: Conditional expression
Examples:
-- Arithmetic
select price, quantity, price * quantity as total from order_items;
-- String concatenation
select first_name || ' ' || last_name as full_name from users;
-- Comparison
select * from products where price > 100;
-- Logical operators
select * from employees
where (department = 'Sales' or department = 'Marketing')
and salary > 50000;
-- JSON path operators
select data -> 'name' from users; -- extract as JSON
select data ->> 'age' from users; -- extract as TEXT
select data -> 'address' -> 'city' from users; -- chained access
select data -> 0 from json_array_col; -- array index shorthand
-- IS NULL / IS NOT NULL
select * from users where profile_picture is null;
-- IN operator
select * from products
where category in ('Electronics', 'Computers', 'Accessories');
-- IN with subquery
select * from employees
where department_id in (
select id from departments where location = 'Headquarters'
);
-- IN with value list (optimized with BTree for fast lookups)
select * from orders
where status in ('pending', 'processing', 'shipped', 'delivered');
-- BETWEEN
select * from orders
where order_date between date('2023-01-01') and date('2023-12-31');
-- NOT BETWEEN
select * from products
where price not between 10.00 and 100.00;
-- LIKE pattern matching
select * from users where email like '%@gmail.com';
-- CASE expression
select
id,
name,
price,
case
when price < 10 then 'Budget'
when price < 50 then 'Regular'
when price < 100 then 'Premium'
else 'Luxury'
end as price_category
from products;
-- EXISTS
select * from customers as c
where exists (
select 1 from orders as o
where o.customer_id = c.id and o.total > 1000
);
-- NOT EXISTS
select * from customers as c
where not exists (
select 1 from orders as o
where o.customer_id = c.id
);
-- NOT IN with value list
select * from products
where category not in ('Discontinued', 'Seasonal', 'Clearance');
-- NOT IN with subquery
select * from employees
where department_id not in (
select id from departments where location = 'Remote'
);
Function Calls:
function_name(argument1, argument2, ...)
Subexpressions:
(expression)
Subqueries:
- Scalar subquery: Returns a single value
- Row subquery: Returns a single row
- Table subquery: Returns a table result
- EXISTS subquery: Returns a boolean
Examples:
-- Scalar functions
select abs(-42), round(3.14159, 2), upper('hello');
-- Subexpressions for grouping
select (price + tax) * quantity as total from order_items;
-- Scalar subquery
select name, (select count(*) from orders where customer_id = c.id) as order_count
from customers as c;
-- Subquery with comparison
select * from products
where price > (select avg(price) from products);
-- Correlated subquery
select * from orders as o
where total > (
select avg(total) from orders
where customer_id = o.customer_id
);
COLLATE Expression:
expr collate collation_name
CAST Expression:
cast(expr as type)
Parameter References:
- Positional: ?, ?1, ?2, ...
- Named: :name, @name, $name
Examples:
-- COLLATE
select * from customers
order by name collate nocase;
-- CAST
select cast(price as integer) as rounded_price
from products;
-- Parameters
-- (usually used in prepared statements)
select * from users where id = ? and status = ?;
select * from products where category = :category and price <= :max_price;
Quereus provides a rich set of built-in functions for data manipulation, calculation, and transformation. These functions follow SQL standards with some Quereus-specific extensions.
Scalar functions operate on single values and return a single value per row.
- lower(X): Returns the lowercase version of string X
- upper(X): Returns the uppercase version of string X
- length(X): Returns the length of string X in characters
- substr(X, Y[, Z]): Returns a substring of X starting at position Y (1-based) and Z characters long
- substring(X, Y[, Z]): Alias for substr()
- trim(X[, Y]): Removes leading and trailing characters Y from X
- ltrim(X[, Y]): Removes leading characters Y from X
- rtrim(X[, Y]): Removes trailing characters Y from X
- replace(X, Y, Z): Replaces all occurrences of Y in X with Z
- instr(X, Y): Returns the 1-based position of the first occurrence of Y in X
- lpad(X, Y[, Z]): Left-pads string X to length Y with string Z (default space)
- rpad(X, Y[, Z]): Right-pads string X to length Y with string Z (default space)
- reverse(X): Returns string X with characters in reverse order
- like(X, Y): Returns 1 if X matches pattern Y, 0 otherwise
- glob(X, Y): Returns 1 if X matches glob pattern Y, 0 otherwise
Examples:
-- String manipulation
select
lower('HELLO') as lowercase,
upper('world') as uppercase,
length('Quereus') as str_length,
substr('abcdef', 2, 3) as substring,
trim(' test ') as trimmed,
replace('hello world', 'world', 'Quereus') as replaced;
-- Result:
-- lowercase | uppercase | str_length | substring | trimmed | replaced
-- 'hello' | 'WORLD' | 7 | 'bcd' | 'test' | 'hello Quereus'
- abs(X): Returns the absolute value of X
- round(X[, Y]): Rounds X to Y decimal places
- ceil(X), ceiling(X): Returns the smallest integer not less than X
- floor(X): Returns the largest integer not greater than X
- pow(X, Y), power(X, Y): Returns X raised to the power of Y
- sqrt(X): Returns the square root of X
- clamp(X, min, max): Constrains X to be between min and max
- greatest(X, Y, ...): Returns the largest value from the arguments
- least(X, Y, ...): Returns the smallest value from the arguments
- random(): Returns a random integer
- randomblob(N): Returns a blob containing N bytes of pseudo-random data
Examples:
-- Numeric calculations
select
abs(-42) as absolute,
round(3.14159, 2) as rounded,
ceil(9.1) as ceiling,
floor(9.9) as floor_val,
pow(2, 8) as power_val,
sqrt(144) as square_root,
random() % 100 as random_num;
-- Result example:
-- absolute | rounded | ceiling | floor_val | power_val | square_root | random_num
-- 42 | 3.14 | 10 | 9 | 256 | 12 | 73
- coalesce(X, Y, ...): Returns the first non-NULL value
- nullif(X, Y): Returns NULL if X equals Y, otherwise returns X
- iif(X, Y, Z): If X is true, returns Y, otherwise returns Z
- choose(index, val0, val1, ...): Returns the value at the given index (0-based)
Examples:
-- Conditional logic
select
coalesce(null, null, 'third', 'fourth') as first_non_null,
nullif(5, 5) as same_values,
nullif(10, 20) as different_values,
iif(age >= 18, 'adult', 'minor') as age_category
from users;
-- Result example:
-- first_non_null | same_values | different_values | age_category
-- 'third' | null | 10 | 'adult'
- typeof(X): Returns the datatype of X as a string ('null', 'integer', 'real', 'text', or 'blob')
Aggregate functions perform a calculation on a set of values and return a single value.
- count(X): Returns the number of non-NULL values of X
- count(*): Returns the number of rows
- sum(X): Returns the sum of all non-NULL values of X
- avg(X): Returns the average of all non-NULL values of X
- min(X): Returns the minimum of all non-NULL values of X
- max(X): Returns the maximum of all non-NULL values of X
- group_concat(X[, Y]): Returns a string concatenating non-NULL values of X, separated by Y (default ',')
- total(X): Returns the sum as a floating-point value (returns 0.0 for an empty set)
- var_pop(X): Returns the population variance
- var_samp(X): Returns the sample variance
- stddev_pop(X): Returns the population standard deviation
- stddev_samp(X): Returns the sample standard deviation
- string_concat(X): Concatenates string values with a comma separator
Examples:
-- Basic aggregates
select
count(*) as total_rows,
count(email) as users_with_email,
sum(cost) as total_cost,
avg(age) as average_age,
min(created_at) as earliest_record,
max(score) as highest_score
from users;
-- Grouping with aggregates
select
department,
count(*) as employee_count,
avg(salary) as avg_salary,
min(hire_date) as earliest_hire,
group_concat(name, ', ') as employee_names
from employees
group by department;
Quereus provides comprehensive functions for working with JSON data.
JSON Query Functions:
- json_extract(json, path, ...): Extracts values from JSON using JSONPath
- json_type(json[, path]): Returns the type of a JSON value ('object', 'array', 'string', 'number', 'boolean', 'null')
- json_valid(json): Checks if a string is valid JSON (returns 1 or 0)
- json_schema(json, schema_def): Validates JSON against a structural schema (returns 1 or 0)
- json_array_length(json[, path]): Returns the length of a JSON array
JSON Construction Functions:
- json_object(key, value, ...): Creates a JSON object from key-value pairs
- json_array(value, ...): Creates a JSON array from values
- json_quote(value): Converts a SQL value to a JSON-quoted string
JSON Modification Functions:
- json_insert(json, path, value, ...): Inserts values into JSON (does not overwrite existing values)
- json_replace(json, path, value, ...): Replaces existing values in JSON
- json_set(json, path, value, ...): Sets values in JSON (inserts or replaces)
- json_remove(json, path, ...): Removes values from JSON
- json_patch(json, patch): Applies a JSON Patch (RFC 6902) to a JSON document
JSON Aggregate Functions:
- json_group_array(X): Aggregate function that creates a JSON array from values
- json_group_object(key, value): Aggregate function that creates a JSON object from key-value pairs
JSON Table-Valued Functions:
- json_each(json[, path]): Returns one row per element in a JSON array or object
- json_tree(json[, path]): Returns one row per node in the JSON tree structure
Examples:
-- JSON extraction
select
json_extract('{"name":"John","age":30}', '$.name') as name,
json_extract('{"name":"John","age":30}', '$.age') as age;
-- JSON creation
select
json_object('name', 'Alice', 'age', 25) as person,
json_array(1, 2, 3, 4, 5) as numbers;
-- JSON schema validation (using TypeScript-like syntax)
select json_schema('[1, 2, 3]', 'number[]'); -- Returns 1 (valid)
select json_schema('{"x": 42}', '{ x: number }'); -- Returns 1 (valid)
select json_schema('[{"x": 1}, {"x": 2}]', '{ x: number }[]'); -- Returns 1 (valid)
-- Enforcing JSON structure with CHECK constraints
create table api_events (
id integer primary key,
event_type text not null,
payload json check (json_schema(payload, '{ timestamp: string, data: any }'))
);
-- Aggregating to JSON
select
department,
json_group_array(name) as employees,
json_group_object(id, salary) as salary_map
from employees
group by department;
Quereus includes functions for manipulating dates and times.
Temporal Type Functions:
- date(timestring[, modifier...]): Returns the date as 'YYYY-MM-DD'
- time(timestring[, modifier...]): Returns the time as 'HH:MM:SS'
- datetime(timestring[, modifier...]): Returns the date and time as 'YYYY-MM-DD HH:MM:SS'
- timespan(duration_string): Returns a TIMESPAN from an ISO 8601 duration or human-readable string
- julianday(timestring[, modifier...]): Returns the Julian day number
- strftime(format, timestring[, modifier...]): Returns a formatted date string
- is_iso_date(X): Returns 1 if X is a valid ISO 8601 date, 0 otherwise
- is_iso_datetime(X): Returns 1 if X is a valid ISO 8601 datetime, 0 otherwise
Common modifiers:
- +N days, +N hours, +N minutes, +N seconds, +N months, +N years
- start of month, start of year, start of day
- weekday N (0=Sunday, 1=Monday, etc.)
- localtime, utc
Examples:
-- Date functions
select
date('now') as today,
time('now', 'localtime') as current_time,
datetime('now', '+1 day') as tomorrow,
julianday('2023-01-01') - julianday('2022-01-01') as days_difference,
strftime('%Y-%m-%d %H:%M', 'now') as formatted_now,
strftime('%W', 'now') as week_of_year;
-- Date calculations
select
date('now', '+7 days') as one_week_later,
date('now', 'start of month', '+1 month', '-1 day') as last_day_of_month,
datetime('now', 'weekday 1') as next_or_current_monday;
-- Timespan creation and arithmetic
select
timespan('1 hour 30 minutes') as duration1,
timespan('PT2H30M') as duration2,
datetime('2024-01-15T09:00:00') + timespan('2 hours') as meeting_end,
date('2024-01-15') + timespan('7 days') as next_week,
timespan('3 hours') - timespan('45 minutes') as remaining;
-- Timespan comparisons
select * from events
where duration > timespan('1 hour')
order by duration desc;
Window functions perform calculations across a set of table rows related to the current row. Quereus provides comprehensive window function support with an extensible architecture.
Window Function Syntax:
window_function([arguments]) over (
[partition by partition_expression [, ...]]
[order by sort_expression [asc | desc] [, ...]]
[window_frame_clause]
)
Available Window Functions:
Ranking Functions:
- row_number(): Returns a sequential row number within the partition
- rank(): Returns the rank with gaps (e.g., 1, 1, 3, 4)
- dense_rank(): Returns the rank without gaps (e.g., 1, 1, 2, 3)
- ntile(n): Distributes rows into n buckets
Aggregate Window Functions:
- count(*), count(expr): Count of rows or non-NULL values
- sum(expr): Sum of values in the window frame
- avg(expr): Average of values in the window frame
- min(expr), max(expr): Minimum/maximum values in the window frame
Navigation Functions (Planned):
- lead(expr[, offset[, default]]): Accesses data from subsequent rows
- lag(expr[, offset[, default]]): Accesses data from previous rows
- first_value(expr): Returns the first value in the window frame
- last_value(expr): Returns the last value in the window frame
Architecture Features:
- Extensible Registration: Window functions are registered like scalar/aggregate functions
- Performance Optimized: Groups functions by window specifications for efficiency
- Streaming Execution: Non-partitioned functions use constant memory
- Partitioned Execution: PARTITION BY properly collects and processes partitions
Examples:
-- Ranking employees by salary within departments
select
name,
department,
salary,
row_number() over (partition by department order by salary desc) as dept_rank,
rank() over (order by salary desc) as overall_rank,
dense_rank() over (partition by department order by salary desc) as dense_dept_rank
from employees;
-- Running totals and departmental statistics
select
name,
department,
salary,
sum(salary) over (partition by department order by hire_date) as running_dept_total,
avg(salary) over (partition by department) as dept_average,
count(*) over (partition by department) as dept_size
from employees;
-- Quartile analysis
select
name,
salary,
ntile(4) over (order by salary) as salary_quartile
from employees;
-- Multiple window functions with same specification (optimized)
select
product_id,
sales_date,
amount,
sum(amount) over w as running_total,
avg(amount) over w as running_average,
count(*) over w as running_count
from sales
window w as (partition by product_id order by sales_date);
Table-valued functions return a result set that can be queried like a table.
Generation Functions:
- generate_series(start, stop[, step]): Generates a series of integer values from start to stop
- split_string(str, delimiter): Splits a string into rows based on a delimiter
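split_string does not appear in the examples further below, so here is a minimal sketch; it assumes the function exposes its output in a value column, the same shape generate_series uses:

```sql
-- Split a comma-separated string into one row per element
-- (the 'value' column name is assumed by analogy with generate_series)
select value from split_string('red,green,blue', ',');
-- Expected to yield three rows: 'red', 'green', 'blue'
```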
Schema Introspection Functions:
- schema(): Returns information about all tables, views, and functions across all schemas (columns: schema, type, name, tbl_name, sql)
- table_info(table_name): Returns column information for a specific table
- function_info([function_name]): Returns information about all registered functions, or a given registered function
Debugging and Analysis Functions:
- query_plan(sql): Returns the query execution plan for a SQL statement
- scheduler_program(sql): Returns the scheduler program for a SQL statement
- stack_trace(sql): Returns the execution stack trace
- execution_trace(sql): Returns detailed execution trace information
- row_trace(sql): Returns row-level trace information
- explain_assertion(assertion_name): Returns information about a specific assertion
Examples:
-- Generate a series of numbers
select value from generate_series(1, 10);
-- Get schema information
select schema, type, name, sql from schema() where type = 'table';
-- Get column information for a table
select cid, name, type, notnull, pk from table_info('users');
-- Get function information
select name, num_args, type, deterministic from function_info();
-- Analyze query plan
select * from query_plan('select * from users where id = 1');
Virtual tables are Quereus's primary mechanism for accessing and manipulating data. They provide a table interface to various data sources through specialized modules.
Syntax:
create [temp | temporary] table [if not exists] table_name [(column_def[, ...])]
using module_name [(module_arguments...)]
Examples:
-- Memory table with schema definition
create table users (
id integer primary key,
name text not null,
email text unique,
created_at text default (datetime('now'))
) using memory;
-- JSON table using the json_tree function
create table product_data
using json_tree('{"products":[{"id":1,"name":"Keyboard"},{"id":2,"name":"Mouse"}]}');
-- Create a memory table from a schema string
create table cache
using memory('create table x(key text primary key, value blob, expires integer)');
Quereus comes with several built-in virtual table modules:
The memory module provides an in-memory, B+Tree-based storage with support for transactions, indices, and constraints.
Key features:
- Efficient in-memory storage
- Primary key and unique constraints
- Secondary index support via create index
- Transaction and savepoint support
Examples:
-- Create a memory table
create table products (
id integer primary key,
name text not null,
price real check (price >= 0),
category text
) using memory;
-- Create a secondary index
create index idx_products_category on products(category);
-- Insert data
insert into products (name, price, category)
values
('Laptop', 999.99, 'Electronics'),
('Desk Chair', 199.99, 'Furniture');
-- Query with index
select * from products where category = 'Electronics';
Quereus provides two modules for working with JSON data:
json_each: Expands a JSON array into rows
-- Create table from JSON array
create table users using json_each('[
{"id":1,"name":"Alice","role":"admin"},
{"id":2,"name":"Bob","role":"user"}
]');
-- Query expanded JSON
select key, value from users where key = 'name';
-- Result: 'name', 'Alice' and 'name', 'Bob'
json_tree: Expands a JSON structure recursively
-- Create and query a json_tree table
with json_data as (
select '{"users":[{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}]}' as json
)
select key, value, fullkey, path
from json_tree(
(select json from json_data)
)
where path like '$.users[%].name';
-- Results in rows with users' names
The _schema module provides access to schema information:
-- Query schema information
select * from _schema;
-- Returns information about tables, indexes, and views
Virtual tables that support indexing (like the memory module) can have indexes created using standard SQL syntax.
Syntax:
create [unique] index [if not exists] index_name
on table_name (indexed_column[, ...])
Examples:
-- Simple index on a single column
create index idx_users_email on users(email);
-- Composite index on multiple columns
create index idx_orders_customer_date on orders(customer_id, order_date);
-- Unique index
create unique index idx_products_sku on products(sku);
The primary key constraint uniquely identifies each record in a table.
Syntax - Column Constraint:
column_name data_type primary key [asc|desc] [conflict_clause] [autoincrement]
Syntax - Table Constraint:
primary key (column[, ...]) [conflict_clause]
Examples:
-- Single-column primary key
create table users (
id integer primary key autoincrement,
username text not null
);
-- Composite primary key (table constraint)
create table order_items (
order_id integer,
product_id integer,
quantity integer not null,
primary key (order_id, product_id)
);
-- Primary key with descending order
create table logs (
timestamp integer primary key desc,
event text not null
);
The not null constraint ensures that a column cannot have a NULL value.
Syntax:
column_name data_type not null [conflict_clause]
Example:
create table contacts (
id integer primary key,
first_name text not null,
last_name text not null,
email text not null,
phone text
);
The unique constraint ensures that all values in a column are different.
Syntax - Column Constraint:
column_name data_type unique [conflict_clause]
Syntax - Table Constraint:
unique (column[, ...]) [conflict_clause]
Examples:
-- Single-column unique constraint
create table users (
id integer primary key,
email text unique,
username text unique
);
-- Multi-column unique constraint
create table bookings (
id integer primary key,
room_id integer,
date text,
unique (room_id, date)
);
The check constraint ensures that values in a column satisfy a specific condition.
Syntax - Column Constraint:
column_name data_type check [on operation_list] (expression)
Syntax - Table Constraint:
check [on operation_list] (expression)
The optional on operation_list specifies when the constraint should be checked (insert, update, delete).
Examples:
-- Column-level check constraint
create table products (
id integer primary key,
name text not null,
price real check (price > 0),
discount real check (discount >= 0 and discount <= 1)
);
-- Table-level check constraint
create table transfers (
id integer primary key,
source_account_id integer not null,
dest_account_id integer not null,
amount real not null check (amount > 0),
check (source_account_id != dest_account_id)
);
-- Operation-specific check constraint
create table audit_log (
id integer primary key,
record_id integer not null,
action text not null,
timestamp text not null,
check on insert (action in ('insert', 'update', 'delete'))
);
-- JSON structure validation with check constraint
create table events (
id integer primary key,
event_type text not null,
data json check (json_schema(data, '[{x:integer,y:number}]'))
);
-- Complex JSON schema validation
create table api_logs (
id integer primary key,
endpoint text not null,
request json check (json_schema(request, '{ method: string, headers: any, body: any }')),
response json check (json_schema(response, '{ status: number, body: any }'))
);
The default constraint provides a default value for a column when no value is specified.
Syntax:
column_name data_type default value
Examples:
-- Constant default value
create table posts (
id integer primary key,
title text not null,
content text,
views integer default 0,
status text default 'draft'
);
-- Function-based default
create table audit_records (
id integer primary key,
action text not null,
timestamp text default (datetime('now'))
);
The foreign key constraint links tables together and ensures referential integrity.
Foreign key enforcement is controlled by the foreign_keys pragma (default: on):
pragma foreign_keys = on; -- enable FK enforcement (default)
pragma foreign_keys = off; -- parse but don't enforce
When no ON DELETE or ON UPDATE clause is specified, the default action is IGNORE (no enforcement). This means FKs are only enforced when you explicitly specify an action such as CASCADE, RESTRICT, SET NULL, or SET DEFAULT.
Syntax - Column Constraint:
column_name data_type references foreign_table [(column)] [ref_actions]
Syntax - Table Constraint:
foreign key (column[, ...]) references foreign_table [(column[, ...])] [ref_actions]
Reference Actions:
[on delete action] [on update action]
Where action can be:
- set null — set child FK columns to NULL when parent row is deleted/updated
- set default — set child FK columns to their default values
- cascade — delete/update child rows when parent row is deleted/updated
- restrict — immediately reject delete/update if child rows exist
- no action / ignore (default) — no enforcement; the FK is informational only
Enforcement Semantics:
When pragma foreign_keys = on (the default):
- IGNORE / NO ACTION (default): No enforcement. The FK is stored in the schema for metadata/introspection but does not generate any constraint checks or cascading actions.
- Child-side (INSERT/UPDATE): For FKs with at least one non-ignore action, validates that referenced parent rows exist. These checks are deferred to commit time (they use cross-table subqueries). Uses MATCH SIMPLE semantics (SQL default): if any FK column is NULL, the constraint is satisfied without checking the parent table.
- Parent-side DELETE/UPDATE with RESTRICT: Immediately rejects the operation if child rows reference the parent row being modified.
- Parent-side DELETE/UPDATE with CASCADE: Automatically deletes or updates matching child rows.
- Parent-side DELETE/UPDATE with SET NULL: Sets child FK columns to NULL.
- Parent-side DELETE/UPDATE with SET DEFAULT: Sets child FK columns to their default values.
Cascade cycle detection prevents infinite recursion when cascading actions chain across multiple tables.
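The parent-side actions can be seen end-to-end in a small self-contained sketch; the table names and values here are illustrative, not part of any built-in schema:

```sql
-- Illustrative schema: orders cascade-delete with their customer
create table customers (id integer primary key, name text) using memory;
create table orders (
  id integer primary key,
  customer_id integer,
  foreign key (customer_id) references customers(id) on delete cascade
) using memory;

insert into customers (id, name) values (1, 'Ada');
insert into orders (id, customer_id) values (10, 1), (11, 1);

-- With foreign_keys = on, deleting the parent removes the referencing children
delete from customers where id = 1;
select count(*) from orders; -- expected: 0
```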
Examples:
-- Column-level foreign key (no action clause = informational only)
create table posts (
id integer primary key,
user_id integer references users(id),
title text not null
);
-- Table-level foreign key with explicit actions (enforced)
create table comments (
id integer primary key,
post_id integer,
user_id integer,
content text not null,
foreign key (post_id) references posts(id) on delete cascade,
foreign key (user_id) references users(id) on delete set null
);
Indexes improve query performance for specific columns.
Syntax:
create [unique] index [if not exists] index_name
on table_name (column [asc|desc][, ...]) [where condition]
Examples:
-- Simple index
create index idx_users_email on users(email);
-- Multi-column index
create index idx_posts_user_date on posts(user_id, created_at desc);
-- Partial index with WHERE clause
create index idx_active_users on users(last_login) where status = 'active';
-- Unique index
create unique index idx_products_sku on products(sku);
Syntax:
drop index [if exists] index_name
Example:
drop index idx_users_email;
Transactions group multiple operations into a single unit that either succeeds completely or fails completely.
Starts a new transaction.
Syntax:
begin [transaction]
Examples:
-- Start a transaction
begin;
-- Start a transaction with explicit keyword
begin transaction;Saves all changes made during the current transaction.
Syntax:
commit [transaction]
Example:
-- Commit the current transaction
commit;
Discards all changes made during the current transaction.
Syntax:
rollback [transaction]
Example:
-- Discard all changes in the current transaction
rollback;
Savepoints allow partial transaction rollbacks.
Create a savepoint:
savepoint savepoint_name
Rollback to a savepoint:
rollback [transaction] to [savepoint] savepoint_name
Release a savepoint:
release [savepoint] savepoint_name
Example:
-- Transaction with savepoints
begin;
insert into users (name, email) values ('Alice', 'alice@example.com');
savepoint after_alice;
insert into users (name, email) values ('Bob', 'bob@example.com');
-- Oops, we made a mistake with Bob
rollback to savepoint after_alice;
-- Only Alice is inserted, Bob's insert was rolled back
insert into users (name, email) values ('Charlie', 'charlie@example.com');
-- Release a savepoint (optional, mostly for cleanup)
release savepoint after_alice;
commit;
- Explicit Transactions: Always use explicit transactions for multi-statement operations.
- Error Handling: Combine transactions with proper error handling to ensure rollback on failure.
- Transaction Size: Keep transactions as short as possible to reduce lock contention.
- Savepoints: Use savepoints for partial rollback instead of entire transaction rollback.
JavaScript Example with Quereus:
// Using explicit transactions in JavaScript
try {
await db.exec("begin");
const orderId = await db.get("insert into orders (customer_id, total) values (?, ?) returning (id)", [42, 129.99]);
await db.exec("insert into order_items (order_id, product_id, quantity) values (?, ?, ?)",
[orderId, 101, 2]);
await db.exec("insert into order_items (order_id, product_id, quantity) values (?, ?, ?)",
[orderId, 205, 1]);
await db.exec("commit");
console.log("Transaction committed successfully");
} catch (error) {
await db.exec("rollback");
console.error("Transaction failed:", error);
}
PRAGMA statements are special commands that control the behavior of the Quereus database engine.
pragma name = value;
pragma name; -- query the current value
Sets or queries the default virtual table module used when create table is called without a specific using clause.
-- Set default module to "memory"
pragma default_vtab_module = 'memory';
-- Query current default module
pragma default_vtab_module;
Sets or queries the default arguments passed to the default virtual table module. The value should be a JSON array of strings.
-- Set default args for the default module
pragma default_vtab_args = '["create table x(id integer primary key, data text)"]';
-- Query current default args
pragma default_vtab_args;
🚨 IMPORTANT: Departure from SQL Standard
Sets or queries the default nullability behavior for columns. This is a significant departure from the SQL standard, aligning with The Third Manifesto principles which advocate against NULL by default.
Values:
- 'not_null' (default): Columns are NOT NULL by default unless explicitly declared otherwise
- 'nullable': Standard SQL behavior; columns are nullable by default unless explicitly declared NOT NULL
-- Set to Third Manifesto behavior (default in Quereus)
pragma default_column_nullability = 'not_null';
-- Set to SQL standard behavior
pragma default_column_nullability = 'nullable';
-- Query current setting
pragma default_column_nullability;
Rationale: The Third Manifesto argues that NULL is fundamentally problematic in relational theory and that non-nullable types should be the default. Quereus follows this principle to avoid the "billion-dollar mistake" of NULL by default, while still allowing NULLs when explicitly needed.
Examples:
-- With default_column_nullability = 'not_null' (Quereus default)
create table users (
id integer primary key, -- Implicitly NOT NULL
name text, -- Implicitly NOT NULL
email text, -- Implicitly NOT NULL
bio text null -- Explicitly allows NULL
);
-- With default_column_nullability = 'nullable' (SQL standard)
create table users (
id integer primary key, -- Implicitly NOT NULL
name text, -- Allows NULL
email text not null, -- Explicitly NOT NULL
bio text -- Allows NULL
);
Note: Primary key columns are always NOT NULL regardless of this setting.
Sets or queries the default schema search path used when resolving unqualified table names. The value is a comma-separated list of schema names.
Values:
- A comma-separated list of schema names (e.g., 'main,extensions,plugins')
- An empty string or 'main' to use only the default schema
-- Set search path for the connection
pragma schema_path = 'main,extensions,plugins';
-- Query current search path
pragma schema_path;
-- Returns: "main,extensions,plugins"
-- Reset to default
pragma schema_path = 'main';
Resolution Order:
When resolving unqualified table names:
- Qualified names (schema.table) are used exactly as specified
- WITH SCHEMA clause on the statement (highest priority)
- PRAGMA schema_path setting (session default)
- Default schema (main)
Examples:
-- Set search path for the session
pragma schema_path = 'workspace,main';
-- All subsequent queries use this path
select * from users; -- Searches workspace.users, then main.users
-- Override per-query with WITH SCHEMA
select * from users with schema main; -- Only searches main.users
-- Configure default VTab settings
pragma default_vtab_module = 'memory';
pragma default_vtab_args = '[]';
-- Set Third Manifesto-aligned nullability (default)
pragma default_column_nullability = 'not_null';
-- Set schema search path
pragma schema_path = 'main,extensions';
-- Create a table using the default module and nullability
create table simple_cache (
key text primary key,
value text -- NOT NULL by default with 'not_null' setting
);
-- Equivalent to: create table simple_cache (...) using memory;
-- Tables will be resolved from main first, then extensions
select * from users; -- Searches main.users, then extensions.users
These PRAGMAs are parsed but may not affect behavior in the same way as SQLite due to Quereus's virtual table-centric architecture.
pragma journal_mode = 'memory';
pragma synchronous = 'off';
The ANALYZE command collects table statistics for cost-based query optimization. Statistics include row counts, per-column distinct value counts, null counts, min/max values, and equi-height histograms for selectivity estimation.
-- Analyze all tables in the default schema
analyze;
-- Analyze a specific table
analyze products;
-- Analyze a table in a specific schema
analyze main.products;
ANALYZE returns one row per table with columns table (text) and rows (integer).
If a virtual table module implements getStatistics(), those statistics are used directly. Otherwise, a full table scan collects per-column statistics with reservoir-sampled histograms. Collected statistics are cached on the table schema and used by the optimizer's CatalogStatsProvider for improved cost estimates.
Quereus provides structured error handling through the QuereusError class hierarchy. Understanding these errors helps in debugging and creating robust applications.
The base error class for all Quereus errors. Contains:
- message: Description of the error
- code: A StatusCode value indicating the error type
- cause: Optional underlying error
- line, column: Position information when available
Specialized error for syntax problems during SQL parsing:
- Contains token information
- Includes precise position information
Indicates API misuse, such as:
- Operating on a closed database
- Invalid parameter binding
- Interface contract violations
Indicates constraint violations, such as:
- Unique constraint violations
- NOT NULL constraint violations
- CHECK constraint failures
Important status codes include:
- ERROR: Generic error
- INTERNAL: Internal logic error
- CONSTRAINT: Constraint violation
- MISUSE: Library misuse
- RANGE: Parameter out of range
- NOTFOUND: Item not found
JavaScript Example:
try {
await db.exec("insert into users (email, username) values (?, ?)",
['user@example.com', 'newuser']);
console.log("Insert successful");
} catch (error) {
if (error.code === StatusCode.CONSTRAINT) {
console.error("Constraint violation:", error.message);
// Handle specific constraint error (e.g., duplicate email)
} else if (error instanceof ParseError) {
console.error("SQL syntax error at line", error.line, "column", error.column);
} else {
console.error("Database error:", error.message);
}
}
-- Missing FROM clause (will cause ParseError)
select id, name where status = 'active';
-- Assuming unique constraint on email (will cause ConstraintError)
insert into users (email) values ('existing@example.com');
-- Reference to non-existent table (will cause QuereusError)
select * from nonexistent_table;
-- Type mismatch in operation (may cause runtime error)
select 'text' + 42 from users;
While Quereus supports similar SQL syntax, it has evolved into a distinct system with significant architectural and design differences from SQLite. Understanding these differences is important when porting applications from SQLite or creating new applications with Quereus.
- SQL syntax is largely compatible
- Core DML (select, insert, update, delete) support
- Transaction and savepoint support
- Similar built-in function set
- Parameter binding with ?, :name, and $name
Quereus:
- Modern logical/physical type separation
- Native temporal types (DATE, TIME, DATETIME) using Temporal API
- Native JSON type with deep equality comparison
- Conversion functions (integer(), date(), json()) preferred over CAST
- Plugin-extensible custom types
- See Type System Documentation
SQLite:
- Type affinity model (INTEGER, REAL, TEXT, BLOB, NUMERIC)
- Dates stored as TEXT, REAL, or INTEGER
- JSON support via JSON1 extension (text-based)
- CAST operator for type conversion
- Limited type extensibility
Quereus:
- All tables are virtual tables
- No built-in file storage
- In-memory focused with memory module as primary storage
- Async/await API design for JavaScript
SQLite:
- Physical disk-based tables with optional virtual tables
- Built around persistent file storage
- Synchronous C API
| Feature | Quereus | SQLite |
|---|---|---|
| Type System | Logical/physical separation, temporal types, JSON | Type affinity model |
| File Storage | No built-in support; VTab modules could implement | Primary feature |
| Virtual Tables | Central to design; all tables are virtual | Additional feature |
| Triggers | Not supported | Supported |
| Views | Basic support | Full support |
| Foreign Keys | Supported (on by default; requires explicit action clauses) | Full support (when enabled) |
| Window Functions | Phase 1 Complete (ranking, aggregates, partitioning) | Full support |
| Recursive CTEs | Basic support | Full support |
| JSON Functions | Extensive support with native JSON type | Available as extension |
| Indexes | Supported by some VTab modules | Full support |
| BLOB I/O | Basic support | Advanced support |
Quereus provides some syntax extensions:
-- Quereus: CREATE TABLE with USING clause for virtual tables
create table users (id integer primary key, name text) using memory;
-- Quereus: Temporal and JSON types
create table events (
id integer primary key,
event_date date,
event_time time,
created_at datetime,
metadata json
);
-- Quereus: Conversion functions instead of CAST
select integer('42'), date('2024-01-15'), json('{"x":1}');
-- Quereus: PRIMARY KEY with ASC/DESC qualifier
create table logs (timestamp integer primary key desc, event text);
-- Quereus: CHECK constraints with operation specificity
create table products (
price real check on insert (price >= 0),
stock integer check on update (stock >= 0)
);
- Quereus: JavaScript-based with optimization for federated operations (handed off to modules)
- SQLite: C-based with focus on disk I/O efficiency
When migrating from SQLite to Quereus:
- Type System: Update date/time columns to use DATE, TIME, DATETIME types. Consider using JSON type for structured data. Replace CAST with conversion functions.
- Storage Strategy: Determine how to handle persistence (custom VTab, export/import, etc.)
- Async Handling: Convert synchronous SQLite code to async/await with Quereus
- Feature Check: Review use of triggers, advanced views, enforced foreign keys
- Transaction Model: Similar, but understand Quereus's virtual table transaction model
- Custom Functions: Port custom SQL functions to JavaScript
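The type-system point above can be summarized in a small before/after sketch, using only the conversion functions shown earlier in this guide (json_extract on the commented-out SQLite side comes from SQLite's JSON1 extension):

```sql
-- SQLite style (before): affinity columns, CAST, text-based JSON
-- create table events (id integer primary key, event_date text, metadata text);
-- select cast('42' as integer), json_extract(metadata, '$.x') from events;

-- Quereus style (after): native temporal/JSON types and conversion functions
create table events (
  id integer primary key,
  event_date date,
  metadata json
) using memory;
select integer('42'), date('2024-01-15'), json('{"x":1}');
```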
Quereus is actively developed with plans to add:
- Advanced window function features (navigation functions, window frames)
- Enhanced recursive CTE capabilities
- More query planning enhancements
- Additional virtual table modules
See [todo.md] for the current development plans.
Below is a formal Extended Backus-Naur Form (EBNF) grammar for Quereus's SQL dialect, based on the parser implementation.
- [ a ]: Optional element a
- { a }: Zero or more repetitions of a
- a | b: Either a or b
- ( a ): Grouping
- "a": Literal terminal symbol
- a b: Sequence: a followed by b
/* Top-level constructs */
sql_script = { sql_statement ";" } ;
sql_statement = [ with_clause ] ( select_stmt
| insert_stmt
| update_stmt
| delete_stmt
| values_stmt
| create_table_stmt
| create_index_stmt
| create_view_stmt
| create_assertion_stmt
| drop_stmt
| alter_table_stmt
| begin_stmt
| commit_stmt
| rollback_stmt
| savepoint_stmt
| release_stmt
| pragma_stmt
| analyze_stmt
| declare_schema_stmt
| diff_schema_stmt
| apply_schema_stmt
| explain_schema_stmt ) ;
/* WITH clause and CTEs */
with_clause = "with" [ "recursive" ] common_table_expr { "," common_table_expr } [ option_clause ] ;
option_clause = "option" "(" "maxrecursion" integer ")" ;
common_table_expr = cte_name [ "(" column_name { "," column_name } ")" ]
"as" [ "materialized" | "not" "materialized" ]
"(" ( select_stmt | insert_stmt | update_stmt | delete_stmt ) ")" ;
cte_name = identifier ;
/* SELECT statement */
select_stmt = simple_select { compound_operator simple_select } [ order_by_clause ] [ limit_clause ] ;
simple_select = "select" [ distinct_clause ] result_column { "," result_column }
[ from_clause ]
[ where_clause ]
[ group_by_clause ]
[ having_clause ]
[ with_schema_clause ] ;
distinct_clause = "distinct" | "all" ;
result_column = "*" | table_name "." "*" | expr [ [ "as" ] column_alias ] ;
from_clause = "from" table_or_subquery { "," table_or_subquery } ;
table_or_subquery = table_name [ [ "as" ] table_alias ]
| "(" select_stmt ")" [ "as" ] table_alias
| "(" ( insert_stmt | update_stmt | delete_stmt ) ")" [ "as" ] table_alias
| function_name "(" [ expr { "," expr } ] ")" [ [ "as" ] table_alias ]
| join_clause ;
join_clause = table_or_subquery { join_operator table_or_subquery join_constraint } ;
join_operator = ","
| [ "left" [ "outer" ] | "inner" | "cross" | "right" [ "outer" ] | "full" [ "outer" ] ] "join" [ "lateral" ] ;
join_constraint = [ "on" expr | "using" "(" column_name { "," column_name } ")" ] ;
where_clause = "where" expr ;
group_by_clause = "group" "by" expr { "," expr } ;
having_clause = "having" expr ;
compound_operator = "union" [ "all" ] | "intersect" | "except" | "diff" ;
order_by_clause = "order" "by" ordering_term { "," ordering_term } ;
ordering_term = expr [ "asc" | "desc" ] [ "nulls" ( "first" | "last" ) ] ;
limit_clause = "limit" expr [ ( "offset" expr ) | ( "," expr ) ] ;
/* INSERT statement */
insert_stmt = "insert" [ "or" conflict_resolution ] "into" table_name
[ "(" column_name { "," column_name } ")" ]
( values_clause | select_stmt )
{ upsert_clause }
[ context_clause ]
[ with_schema_clause ]
[ returning_clause ] ;
conflict_resolution = "rollback" | "abort" | "fail" | "ignore" | "replace" ;
upsert_clause = "on" "conflict" [ "(" column_name { "," column_name } ")" ]
( "do" "nothing" | "do" "update" "set" column_name "=" expr
{ "," column_name "=" expr } [ where_clause ] ) ;
values_clause = "values" "(" expr { "," expr } ")" { "," "(" expr { "," expr } ")" } ;
values_stmt = values_clause [ order_by_clause ] [ limit_clause ] ;
/* UPDATE statement */
update_stmt = "update" table_name
"set" column_name "=" expr { "," column_name "=" expr }
[ where_clause ]
[ context_clause ]
[ with_schema_clause ]
[ returning_clause ] ;
/* DELETE statement */
delete_stmt = "delete" "from" table_name [ where_clause ]
[ context_clause ]
[ with_schema_clause ]
[ returning_clause ] ;
context_clause = "with" "context" context_assignment { "," context_assignment } ;
context_assignment = identifier "=" expr ;
with_schema_clause = "with" "schema" schema_name { "," schema_name } ;
returning_clause = "returning" [ qualifier "." ] "*" { "," [ qualifier "." ] "*" }
| "returning" [ qualifier "." ] expr [ [ "as" ] column_alias ]
{ "," [ qualifier "." ] expr [ [ "as" ] column_alias ] } ;
qualifier = "old" | "new" ;
/* CREATE TABLE statement */
create_table_stmt = "create" [ "temp" | "temporary" ] "table" [ "if" "not" "exists" ]
table_name "(" column_def { "," ( column_def | table_constraint ) } ")"
[ "using" module_name [ "(" module_arg { "," module_arg } ")" ] ]
{ context_def_clause | tags_clause } ;
context_def_clause = "with" "context" "(" context_var_def { "," context_var_def } ")" ;
context_var_def = identifier type_name [ "null" ] ;
tags_clause = "with" "tags" "(" tag_entry { "," tag_entry } ")" ;
tag_entry = identifier "=" tag_value ;
tag_value = string_literal | signed_number | "true" | "false" | "null" ;
column_def = column_name [ type_name ] { column_constraint } [ tags_clause ] ;
type_name = identifier [ "(" signed_number [ "," signed_number ] ")" ] ;
column_constraint = [ "constraint" name ]
( primary_key_clause
| "not" "null" [ conflict_clause ]
| "unique" [ conflict_clause ]
| "check" [ "on" row_op_list ] "(" expr ")"
| "default" ( signed_number | literal_value | "(" expr ")" )
| "collate" collation_name
| foreign_key_clause
| "generated" "always" "as" "(" expr ")" [ "stored" | "virtual" ] )
[ tags_clause ] ;
primary_key_clause = "primary" "key" [ ( "asc" | "desc" ) ] [ conflict_clause ] [ "autoincrement" ] ;
table_constraint = [ "constraint" name ]
( "primary" "key" "(" indexed_column { "," indexed_column } ")" [ conflict_clause ]
| "unique" "(" column_name { "," column_name } ")" [ conflict_clause ]
| "check" [ "on" row_op_list ] "(" expr ")"
| "foreign" "key" "(" column_name { "," column_name } ")" foreign_key_clause )
[ tags_clause ] ;
foreign_key_clause = "references" foreign_table [ "(" column_name { "," column_name } ")" ]
{ [ "on" ( "delete" | "update" ) ( "set" "null" | "set" "default" | "cascade" | "restrict" | "no" "action" ) ]
| [ "match" name ] }
[ [ "not" ] "deferrable" [ "initially" ( "deferred" | "immediate" ) ] ] ;
conflict_clause = "on" "conflict" conflict_resolution ;
row_op_list = row_op { "," row_op } ;
row_op = "insert" | "update" | "delete" ;
/* CREATE INDEX statement */
create_index_stmt = "create" [ "unique" ] "index" [ "if" "not" "exists" ]
index_name "on" table_name "(" indexed_column { "," indexed_column } ")"
[ "where" expr ] [ tags_clause ] ;
indexed_column = column_name [ "collate" collation_name ] [ "asc" | "desc" ] ;
/* CREATE VIEW statement */
create_view_stmt = "create" [ "temp" | "temporary" ] "view" [ "if" "not" "exists" ]
view_name [ "(" column_name { "," column_name } ")" ] "as" select_stmt
[ tags_clause ] ;
/* CREATE ASSERTION statement */
create_assertion_stmt = "create" "assertion" assertion_name "check" "(" expr ")" ;
/* DROP statement */
drop_stmt = "drop" ( "table" | "index" | "view" | "assertion" ) [ "if" "exists" ] name ;
/* ALTER TABLE statement */
alter_table_stmt = "alter" "table" table_name
( rename_table_stmt
| rename_column_stmt
| add_column_stmt
| drop_column_stmt
| add_constraint_stmt
| alter_pk_stmt ) ;
rename_table_stmt = "rename" "to" new_table_name ;
rename_column_stmt = "rename" [ "column" ] old_column_name "to" new_column_name ;
add_column_stmt = "add" [ "column" ] column_def ;
drop_column_stmt = "drop" [ "column" ] column_name ;
add_constraint_stmt = "add" table_constraint ;
alter_pk_stmt = "alter" "primary" "key" "(" [ pk_col { "," pk_col } ] ")" ;
pk_col = column_name [ "asc" | "desc" ] ;
/* Transaction statements */
begin_stmt = "begin" [ "deferred" | "immediate" | "exclusive" ] [ "transaction" ] ;
commit_stmt = "commit" [ "transaction" ] ;
rollback_stmt = "rollback" [ "transaction" ] [ "to" [ "savepoint" ] savepoint_name ] ;
savepoint_stmt = "savepoint" savepoint_name ;
release_stmt = "release" [ "savepoint" ] savepoint_name ;
/* PRAGMA statement */
pragma_stmt = "pragma" pragma_name [ "=" pragma_value ] ;
pragma_value = signed_number | name | string_literal ;
/* ANALYZE statement */
analyze_stmt = "analyze" [ [ schema_name "." ] table_name ] ;
/* Declarative schema statements */
declare_schema_stmt = "declare" "schema" schema_name
[ "version" string_literal ]
[ "using" "(" schema_option { "," schema_option } ")" ]
"{" { schema_item } "}" ;
schema_option = identifier "=" string_literal ;
schema_item = "table" table_name ( "{" | "(" ) column_def { "," ( column_def | table_constraint ) } ( "}" | ")" )
[ "using" module_name [ "(" module_arg { "," module_arg } ")" ] ] ";"
| "index" index_name "on" table_name "(" indexed_column { "," indexed_column } ")" ";"
| "view" view_name [ "(" column_name { "," column_name } ")" ] "as" select_stmt ";"
| seed_item ;
seed_item = "seed" table_name [ "(" column_name { "," column_name } ")" ]
[ "values" ] "(" expr { "," expr } ")" { "," "(" expr { "," expr } ")" } ";" ;
diff_schema_stmt = "diff" "schema" schema_name ;
apply_schema_stmt = "apply" "schema" schema_name
[ "to" "version" string_literal ]
[ "with" "seed" ]
[ "options" "(" schema_option { "," schema_option } ")" ] ;
explain_schema_stmt = "explain" "schema" schema_name
[ "version" string_literal ] ;
/* Basic elements */
expr = literal_value
| identifier
| unary_operator expr
| expr binary_operator expr
| function_call
| "(" expr ")"
| cast_expr
| expr "collate" collation_name
| expr [ "not" ] "like" expr [ "escape" expr ]
| expr [ "not" ] "glob" expr
| expr [ "not" ] "regexp" expr
| expr [ "not" ] "in" ( "(" [ select_stmt | expr { "," expr } ] ")" | table_name )
| expr "is" [ "not" ] expr
| expr [ "not" ] "between" expr "and" expr
| [ "exists" ] "(" select_stmt ")"
| case_expr
| window_function ;
literal_value = numeric_literal | string_literal | blob_literal | "null" | "true" | "false" ;
numeric_literal = [ "+" | "-" ] ( integer_literal | float_literal ) ;
integer_literal = digit+ ;
float_literal = digit+ "." digit* [ "e" [ "+" | "-" ] digit+ ]
| "." digit+ [ "e" [ "+" | "-" ] digit+ ]
| digit+ "e" [ "+" | "-" ] digit+ ;
string_literal = "'" { character } "'" { "'" { character } "'" } ;
blob_literal = "x'" hex_digit+ "'" ;
identifier = [ schema_name "." ] name ;
schema_name = name ;
table_name = [ schema_name "." ] name ;
column_name = [ table_name "." ] name ;
collation_name = name ;
function_name = name ;
function_call = function_name "(" [ "*" | [ "distinct" ] expr { "," expr } ] ")" ;
cast_expr = "cast" "(" expr "as" type_name ")" ;
case_expr = "case" [ expr ] { "when" expr "then" expr } [ "else" expr ] "end" ;
window_function = function_call "over" window_name_or_specification ;
window_name_or_specification = window_name | "(" window_specification ")" ;
window_specification = [ window_name ] [ "partition" "by" expr { "," expr } ] [ "order" "by" ordering_term { "," ordering_term } ] [ frame_spec ] ;
frame_spec = ( "range" | "rows" ) ( frame_bound | "between" frame_bound "and" frame_bound ) [ frame_exclude ] ;
frame_bound = "unbounded" "preceding"
| "current" "row"
| "unbounded" "following"
| expr "preceding"
| expr "following" ;
frame_exclude = "exclude" "no" "others"
| "exclude" "current" "row"
| "exclude" "group"
| "exclude" "ties" ;
/* Basic lexical elements */
name = identifier_start_char { identifier_char } ;
identifier_start_char = alpha | "_" ;
identifier_char = alpha | digit | "_" ;
alpha = "a" | "b" | ... | "z" | "A" | "B" | ... | "Z" ;
digit = "0" | "1" | ... | "9" ;
hex_digit = digit | "a" | "b" | "c" | "d" | "e" | "f" | "A" | "B" | "C" | "D" | "E" | "F" ;
unary_operator = "-" | "+" | "~" | "not" ;
binary_operator = "||" | "*" | "/" | "%" | "+" | "-" | "<<" | ">>" | "&" | "|"
| "<" | "<=" | ">" | ">=" | "=" | "==" | "!=" | "<>"
| "and" | "or" | "xor"
| "->" | "->>" ;This grammar defines the syntax of SQL statements supported by Quereus. While it captures most of the language features, some specialized constructs and edge cases may not be fully represented. For the definitive reference, always consult the Quereus parser implementation.