All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- Oracle dialect lineage — full Oracle dialect support in the analyzer and lineage extractors, surfacing Oracle-specific pseudocolumns (`ROWID`, `ROWNUM`, `LEVEL`, `SYSDATE`, etc.) and DML write targets for `UPDATE`, `DELETE`, and `MERGE`
- SQL `COMMENT` descriptions on nodes — inline and trailing SQL comments on table/column definitions are surfaced as node descriptions for display in downstream UIs
- `name_spans` and `body_span` on Node (#30) — each node now carries the source spans of every occurrence of its name plus the span of its defining body, enabling graph↔text navigation in editors
- Flattened `AnalyzeResult` (#19, #33) — `nodes` and `edges` are now top-level fields on `AnalyzeResult` instead of nested under `graph`, simplifying consumer code. Breaking change for consumers that accessed `result.graph.nodes` / `result.graph.edges`.
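The consumer-side migration for the flattened `AnalyzeResult` can be sketched as follows. This is a minimal illustration, not the library's actual type definitions: the `Node`/`Edge` shapes are simplified, and the tolerant accessor is a hypothetical helper for consumers that may still encounter pre-flatten results (e.g. rehydrated from a stale cache).

```typescript
// Simplified sketch of the nesting change; real Node/Edge carry more fields.
interface Node { id: string }
interface Edge { from: string; to: string }

// Pre-flatten shape: nodes/edges nested under `graph`.
interface LegacyAnalyzeResult { graph: { nodes: Node[]; edges: Edge[] } }
// Post-flatten shape: nodes/edges are top-level.
interface AnalyzeResult { nodes: Node[]; edges: Edge[] }

// Hypothetical tolerant accessor: works against either shape during migration.
function getNodes(
  result: Partial<AnalyzeResult> & { graph?: { nodes: Node[] } },
): Node[] {
  return result.nodes ?? result.graph?.nodes ?? [];
}
```

A shim like this lets a consumer upgrade incrementally instead of switching every `result.graph.nodes` access in one commit.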
- Dali-compatible output layer — new `--format dali` export target for Dali lineage interop, with ownership-only DML target detection, source-expression-preserving column refs, and defensive backward-traversal caps on large graphs
- `--format dali` export support; requires SQL input and returns contextual export errors instead of silently emitting empty packages or panicking on JSON serialization failures
- SQL IntelliSense in the editor (#26) — context-aware completions (tables, columns, keywords, functions) in `SqlView` backed by the flowscope engine, with Tab-accept behavior that takes precedence over CodeMirror's `indentWithTab` binding
- VS Code `CompletionItemProvider` (#26) — the VS Code extension now delivers the same flowscope-backed SQL completions inside VS Code
- Text→graph navigation (#24) — clicking in the SQL editor reveals the corresponding lineage node in the graph
- Graph→text navigation (#31) — clicking a graph node cycles through its occurrences in the SQL editor
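The occurrence-cycling behavior in graph→text navigation amounts to a modular index over a node's name spans. The sketch below is a hypothetical illustration of that logic, not the app's actual code; the `Span` shape and function name are assumptions.

```typescript
// Hypothetical sketch: cycle through a node's occurrence spans on
// repeated clicks. `spans` stands in for a node's name_spans list.
interface Span { start: number; end: number }

function nextOccurrence(spans: Span[], current: number | null): number {
  if (spans.length === 0) throw new Error("node has no occurrences");
  // First click selects the first span; subsequent clicks wrap around.
  return current === null ? 0 : (current + 1) % spans.length;
}
```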
- Stale-graph banner + Run-button rename (#21, #22) — the graph surfaces a banner when its data has drifted from the editor, and the Run button is renamed for clarity
- dbt multi-model chains now render as connected lineage (#32) — a dbt model's bare `SELECT` is materialized as the canonical Table node for the model name instead of a per-statement Output node, so when a downstream file references it via `{{ ref(...) }}` the producer and consumer collapse into a single graph node and multi-hop `A → B → C` pipelines show end-to-end
- Ephemeral dbt models preserved through CTE hiding — dbt model sinks are marked with a `dbtModelSink` flag so `hide_ctes` retains ephemeral models in the global graph, and definition occurrences are tracked separately so statement-scoped views surface the producer span before any consumer refs
- Ambiguous dbt model materialization surfaced — duplicate model names across files now emit `SCHEMA_CONFLICT` with an unambiguous warning about definition vs. producer semantics
- dbt CTE and output-node collisions fixed (#15) — explicit output nodes are included in script-level dependencies and edge IDs are now collision-safe via statement-scope metadata
- Column-lineage pruning refined — tighter pruning rules eliminate spurious column edges without dropping legitimate lineage
- Stale analysis cache invalidated — the web app's analysis cache sentinel is bumped to `v5` so pre-flatten `AnalyzeResult` entries no longer rehydrate into a slot where `result.nodes` is undefined
- Reveal targeting scoped to the active editor — reveal lookups no longer cross editors in multi-file controlled setups, and reveal clears no longer trigger spurious navigation
- Graph builders share output-node and edge helpers, reducing duplication between the script-level and statement-level paths
- Span helpers are hardened against UTF-8 panics and relation span lookup is unified
- `serve` is no longer a default feature. `cargo install flowscope-cli` now installs the core CLI (analysis, linting, fixing, exports) without the bundled local web server. Install with `cargo install flowscope-cli --features serve` for serve mode. This matches the behavior already documented in the README.
- Alias-shadowing coverage added for nested completion scopes
- Instance-aware relation tracking for self-join support — each table occurrence is tracked as a distinct instance, enabling correct lineage through self-joins
- Instance-aware filters, wildcards, and global lineage propagation
- Orphan edge cleanup and alias limits for hardened instance tracking
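The core idea behind instance-aware tracking can be illustrated with a small sketch. The class name and ID format below are hypothetical, chosen only to convey the technique: each occurrence of a table gets a distinct instance identity, so a self-join such as `orders o1 JOIN orders o2` resolves to two separate lineage endpoints instead of one collapsed node.

```typescript
// Hypothetical sketch of instance-aware relation tracking. Each
// occurrence of the same table name receives a distinct instance id,
// which keeps the two sides of a self-join distinct in the graph.
class InstanceTracker {
  private counts = new Map<string, number>();

  register(table: string): string {
    const n = (this.counts.get(table) ?? 0) + 1;
    this.counts.set(table, n);
    return `${table}#${n}`; // e.g. "orders#1", "orders#2"
  }
}
```

For `SELECT ... FROM orders o1 JOIN orders o2 ON ...`, registering `orders` twice yields `orders#1` and `orders#2`, so edges from each alias attach to the correct occurrence.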
- Join attributes on edges — join metadata (type, conditions) is now attached to edges rather than nodes, better reflecting the relational model
- Analysis state lifted to Workspace for cross-statement coordination
- Filter predicates attached to wrong table nodes
- Source-less projections missing base table lineage edges
- Join dependency edges not created for aggregate and edge-case queries
- Analyzer lineage regressions in instance-aware subquery column scoping
- Linting disabled by default in web app — linting was hardcoded as always-on in the analysis worker, causing significant performance degradation on large projects. The "Show Lint Issues" toggle now controls both display and computation, defaulting to off.
- Rebuilt WASM artifacts and formatted generated code
- Oracle dialect support across CLI (`-d oracle`), core engine, and web app UI
- Autofix metadata types, `applyEdits` utility, and lint provenance fields
- Bumped sqlparser from 0.59 to 0.61 — migrates all AST pattern matches to the new tuple-variant API, where the `Statement`, `TableConstraint`, `ColumnOption`, and `MergeAction` variants now wrap dedicated structs
- Adapted to `ConnectBy` becoming `Vec<ConnectByKind>`, `TableAlias` gaining an `explicit` field, and `TableFactor::Derived` gaining a `sample` field
- Added linter documentation to README files across the monorepo (core, CLI, export crates)
- Aligned npm package versions with Cargo workspace version (was 0.3.1, now 0.4.1)
- Rebuilt WASM artifacts from current Rust source
- Full SQL linting engine with 60+ rules targeting SQLFluff parity
- Aliasing rules (AL001–AL009): alias expression style, column alias quoting, unused aliases, duplicate column names, alias length limits, force-enable aliasing
- Ambiguity rules (AM001–AM009): DISTINCT with GROUP BY, UNION type handling, ORDER BY column ordinals, ambiguous column counts, bare JOINs, set-column mismatches, implicit cross joins, limit without ORDER BY
- Capitalisation rules (CP001–CP005): keyword, identifier, function name, literal, and type-annotation casing with consistent/upper/lower modes and `ignore_words_regex` support
- Convention rules (CV001–CV012): operator style, COALESCE usage, trailing commas, COUNT row-style, IS NULL comparison, statement terminators, unnecessary brackets, LEFT JOIN preference, block quotes, quoted literals, CAST shorthand, implicit join detection
- Jinja rules (JJ001): template tag detection with trim-marker parity
- Layout rules (LT001–LT015): trailing whitespace, indentation, operator placement, comma position, long lines, function spacing, bracket placement, CTE newlines, SELECT modifiers, set operators, end-of-file newlines, start-of-file blanks, keyword line position, blank line limits
- Reference rules (RF001–RF006): keyword references, qualified wildcards, single-table qualifications, keyword aliases, identifier quoting policies
- Structure rules (ST001–ST012): ELSE/THEN on new lines, simple CASE, unused CTEs, nested CASE, subquery-to-CTE, column ordering, USING joins, DISTINCT parentheses, join condition order, constant predicates, unused joins, consecutive semicolons
- T-SQL rules (TQ001–TQ003): SP_ prefix, SET NOCOUNT, batch separators
- Rule configuration system supporting the SQLFluff `.sqlfluff` config format with per-rule options (e.g., `capitalisation_policy`, `line_position`, `max_line_length`, `allow_scalar`)
- Dialect-aware detection adapting rule behavior for PostgreSQL, BigQuery, Snowflake, Redshift, SparkSQL, MSSQL, and others
- Autofix metadata on issues with `Safe`/`Unsafe` applicability markers and patch edits
- `--lint` flag for SQL linting with JSON, compact, and human-readable output formats
- `--fix` flag for automatic SQL fixing with safe-only default and `--legacy-ast-fixes` for additional rewrites
- `--fix-only` mode for applying fixes without re-linting
- `--exclude-rules` for disabling specific lint rules
- `--rule-configs-json` for passing rule configuration as JSON
- `--show-fixes` to display applied fix details
- `--jobs` for parallel multi-file lint/fix processing
- `--no-respect-gitignore` for including gitignored files
- Fix engine with iterative convergence loop, overlap recovery, and incremental rule evaluation budgets
- Performance-tuned large-SQL handling with capped fix passes and targeted LT02 cleanup
- GitHub Actions CI workflow for lint, test, and parity validation
- Migrated all lint rules from regex to AST/tokenizer-driven detection for accuracy and performance
- Shared document token stream across rules to avoid redundant tokenization
- Token-span-based issue reporting for precise editor navigation
- Integrated linter into the web app: lint is now enabled by default in the analysis worker
- Updated WASM bindings to include the lint API surface (`LintIssue`, `LintRequest`, `LintConfig` types)
- Lint-aware issues panel with updated filtering
- Updated mascot to purple-headed duck and added favicon, removed beta badge
- Enable `serve` feature by default so `cargo install flowscope-cli` includes the web UI server
- Fix rust-embed asset path for embedded web UI (was incorrectly pointing to workspace root)
- Jinja/dbt templating support: MiniJinja-based preprocessing for dbt projects
- Built-in dbt macros: `ref()`, `source()`, `config()`, `var()`, `is_incremental()`
- RelationEmulator for dbt Relation object attribute access (`.schema`, `.identifier`)
- `this` global variable and `execute` flag for templates
- Custom macro passthrough stubs for graceful handling
- COPY statement lineage: Track source/target tables in COPY/COPY INTO (PostgreSQL, Snowflake)
- ALTER TABLE RENAME lineage: Track table renames as dataflow edges
- UNLOAD statement lineage: Track source tables from Redshift UNLOAD statements
- Lateral column alias support: Resolve aliases in same SELECT list (BigQuery, Snowflake, DuckDB, etc.)
- Backward column inference: Infer columns for SELECT * without schema from downstream usage
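Backward column inference can be sketched as attributing the columns a downstream consumer actually references back to the schemaless `SELECT *` source. The function below is a simplified illustration of that idea, not the engine's implementation; its name and signature are assumptions.

```typescript
// Simplified sketch of backward column inference: when a source table
// has no declared schema and is read via SELECT *, infer its columns
// from the column names referenced by downstream queries.
function inferColumns(
  knownSchema: string[] | null,
  downstreamRefs: string[],
): string[] {
  // A declared schema always wins over inference.
  if (knownSchema !== null) return knownSchema;
  // Deduplicate references while preserving first-seen order.
  return Array.from(new Set(downstreamRefs));
}
```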
- Type inference for SQL expressions with comprehensive type checking
- New `TYPE_MISMATCH` warning code for detecting incompatible type comparisons and operations
- Schema-aware column type lookup — column references now resolve types from provided schema metadata
- CTE column type propagation to outer queries
- Dialect-aware type compatibility rules (e.g., Boolean/Integer comparison allowed in MySQL but not PostgreSQL)
- NULL comparison anti-pattern detection (`= NULL` warns to use `IS NULL` instead)
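The intent of the `= NULL` check can be conveyed with a toy sketch. The actual analyzer inspects the parsed AST; this regex version is only an illustration of the rule, and the function name is hypothetical.

```typescript
// Toy illustration of the `= NULL` anti-pattern check. In SQL,
// `expr = NULL` (and `<> NULL`) always evaluates to NULL rather than
// true/false, so `IS NULL` / `IS NOT NULL` should be used instead.
// The real analyzer works on the AST, not on raw text.
function hasNullComparison(sql: string): boolean {
  return /(=|!=|<>)\s*NULL\b/i.test(sql);
}
```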
- Smart function completions with signature metadata (params, return types, categories)
- Context-aware scoring: boost aggregates in GROUP BY, window functions in OVER clauses
- Lateral column alias extraction with proper scope isolation
- Type-aware column scoring in comparison contexts
- Dialect-aware keyword parsing using sqlparser tokenizer
- CASE expression type inference from THEN/ELSE branches
- `--template jinja|dbt` flag for templated SQL preprocessing
- `--template-var KEY=VALUE` for template variable injection
- `--metadata-url` for live database schema introspection (PostgreSQL, MySQL, SQLite)
- `--metadata-schema` for schema filtering during introspection
- Serve mode: Run FlowScope as a local HTTP server with embedded web UI
- `--serve` flag to start HTTP server with embedded React app
- `--port <PORT>` to specify server port (default: 3000)
- `--watch <DIR>` for directories to watch for SQL files (repeatable)
- `--open` to auto-open browser on startup
- REST API endpoints: `/api/analyze`, `/api/completion`, `/api/files`, `/api/export/:format`
- File watcher with 100ms debounce for automatic reload on changes
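A consumer of the serve-mode API might call it along these lines. Only the endpoint paths come from the list above; the request/response body shapes and helper names below are assumptions for illustration.

```typescript
// Hypothetical client sketch for the serve-mode REST API.
// Fills the :format segment of /api/export/:format.
function exportUrl(base: string, format: string): string {
  return `${base}/api/export/${encodeURIComponent(format)}`;
}

// Assumed payload shape: { sql } posted as JSON to /api/analyze.
async function analyze(base: string, sql: string): Promise<unknown> {
  const res = await fetch(`${base}/api/analyze`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sql }),
  });
  if (!res.ok) throw new Error(`analyze failed: ${res.status}`);
  return res.json();
}
```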
- Assets embedded at compile time via rust-embed for single-binary deployment
- Web workers for graph/matrix/layout computations (improved UI responsiveness)
- LayoutProgressIndicator component for visual layout feedback
- Debug flags (GRAPH_DEBUG, LAYOUT_DEBUG) for performance diagnostics
- Template mode selector for dbt/Jinja SQL preprocessing in the toolbar
- Issue-to-editor navigation: click issues to jump to source location
- Issues tab filtering: filter by severity, error code, and file
- Stats popover: complexity dots trigger dropdown with table/column/join counts
- Clear analysis cache option in project menu
- Bundled "dbt Jaffle Shop" demo project showcasing ref/source/config/var
- Backend adapter pattern for REST/WASM detection (supports serve mode)
- Read-only mode for files loaded from CLI serve backend
- Schema display from database introspection in serve mode
- Unified type system: `CanonicalType` replaces the internal `SqlType` enum with broader coverage (Time, Binary, Json, Array)
- `OutputColumn.data_type` now populated with inferred types for SELECT expressions
- Type checking now accepts a dialect parameter for dialect-specific rules
- dbt model cross-statement linking with proper case normalization for Snowflake
- DDL-seeded schema preservation (schemas no longer overwritten by later queries)
- UTF-8 safety in `should_show_for_cursor` when the cursor offset is mid-character
- Schema-unaware type checking: TYPE_MISMATCH warnings only detect mismatches between literals, CASTs, and known function return types. Column type mismatches (e.g., `WHERE users.id = users.email` where `id` is INTEGER and `email` is TEXT) are not detected because expression type inference does not yet resolve column types from schema metadata.
- No expression spans: TYPE_MISMATCH warnings include the statement index but not precise source spans for editor integration, because sqlparser's expression AST nodes don't include span information by default.
- First public release of the FlowScope Rust + WASM + TypeScript stack
- Multi-dialect SQL parsing with table/column lineage, schema validation, and editor-friendly spans
- Export tooling for Mermaid, JSON, HTML, CSV bundles, and XLSX across Rust and WASM
- React components + CLI for lineage visualization and data export workflows
- CTE and derived table definitions now include spans for editor navigation
- Export downloads in React normalize byte buffers for reliable Blob creation
- Breaking: changed the error code from `REQUEST_PARSE_ERROR` to `INVALID_REQUEST` for JSON parse/validation errors in `analyze_sql_json()`. Consumers matching on the old error code should update their error handling.
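Consumers can bridge the rename by matching both codes during the transition. This is a minimal sketch: the helper name is hypothetical, and nothing about the error object beyond a `code` field is assumed.

```typescript
// Migration sketch for the REQUEST_PARSE_ERROR -> INVALID_REQUEST
// rename. Matching both codes keeps error handling working across
// versions; drop the legacy code once older builds are out of use.
function isRequestValidationError(err: { code?: string }): boolean {
  return err.code === "INVALID_REQUEST" || err.code === "REQUEST_PARSE_ERROR";
}
```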
- Fixed potential panic in `extract_qualifier` when the cursor offset lands on an invalid UTF-8 boundary
- Fixed early-return bug in completion logic that incorrectly suppressed completions when schema metadata lacked column info but query context (CTEs/subqueries) had valid columns
- Fixed potential integer overflow in completion item scoring by using saturating arithmetic
- Added spans for CTE and derived table definition nodes to support editor navigation
- Bundle reserved keyword list inside the crate for publish-time builds
- Fixed ZIP/XLSX downloads by normalizing export byte buffers for Blob creation
- Added named constants for completion scoring values to improve code maintainability
- Added `Debug` derive to `QualifierResolution` for easier debugging
- Added comprehensive unit tests for string helper functions (`extract_last_identifier`, `extract_qualifier`) and qualifier resolution logic
- SQL parsing with sqlparser-rs for multiple dialects
- Table-level lineage extraction from SELECT, JOIN, CTE, INSERT, CTAS, UNION
- Cross-statement lineage tracking via GlobalLineage
- Schema metadata support for table validation
- UNKNOWN_TABLE warning when tables not in provided schema
- Structured issue reporting with severity levels (error, warning, info)
- Graceful degradation - partial results on parse failures
- WebAssembly bindings for browser usage
- JSON-in/JSON-out API via `analyze_sql_json()`
- Legacy `analyze_sql()` function for backwards compatibility
- Version info via `get_version()`
- WASM loader with `initWasm()` function
- Type-safe `analyzeSql()` function
- Complete TypeScript type definitions
- JSDoc documentation on all public types
- Generic SQL
- PostgreSQL
- Snowflake
- BigQuery
- SELECT (with aliases, subqueries)
- JOIN (INNER, LEFT, RIGHT, FULL, CROSS)
- WITH / CTE (multiple, nested references)
- INSERT INTO ... SELECT
- CREATE TABLE AS SELECT
- UNION / UNION ALL / INTERSECT / EXCEPT
- User guides (quickstart, schema metadata, error handling)
- Dialect coverage matrix
- API documentation (rustdoc, TypeDoc)
- Test fixtures for all dialects
- 45+ Rust unit tests
- 11 TypeScript type tests
- Test fixtures per dialect (generic, postgres, snowflake, bigquery)
- CI/CD with GitHub Actions
- Column-level lineage not yet implemented (planned for v0.2.0)
- Recursive CTEs generate warning, lineage may be incomplete
- UPDATE, DELETE, MERGE statements not yet supported