[VL][Delta] Add native deletion vector scan foundation#11900
Draft
malinjawi wants to merge 24 commits into apache:main from
Conversation
added 2 commits
April 9, 2026 14:47
Implements complete read-only Deletion Vector support for Delta Lake:
- C++ DV reader and row index finder (14 files)
- JNI bindings for native DV operations (2 files)
- Runtime integration for DV-aware scans (3 files)
- Scala preprocessing and scan preparation (3 files)
- Comprehensive test coverage (5 files)

This is the foundation for native DV MoR operations. Read-only implementation ensures no risk of data corruption while enabling performance testing of the native read path.

Key components:
- DeltaDeletionVectorReader: Reads DV files from storage
- DeltaRowIndexFinder: Identifies deleted rows during scans
- WholeStageResultIterator: Integrates DV filtering into result iteration
- PreprocessTableWithDVs: Prepares Delta tables with DVs for native execution

Total: 27 files (read-only DV infrastructure only)
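The read path described above can be sketched as follows. This is a minimal illustration, not Gluten's actual API: the class and function names are hypothetical stand-ins, and the deletion vector is modeled as a plain set of deleted file row indices, whereas the real implementation reads a serialized roaring bitmap from storage.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_set>
#include <utility>
#include <vector>

// Hypothetical sketch of DV-aware filtering on the read path.
// A deletion vector is modeled here as a set of deleted file row
// indices; the real reader deserializes a roaring bitmap instead.
class DeletionVector {
 public:
  explicit DeletionVector(std::unordered_set<int64_t> deleted)
      : deleted_(std::move(deleted)) {}

  bool isDeleted(int64_t rowIndex) const {
    return deleted_.count(rowIndex) > 0;
  }

 private:
  std::unordered_set<int64_t> deleted_;
};

// Given the file row indices of a scanned batch, return only the
// indices that survive DV filtering -- roughly what a
// DeltaRowIndexFinder-style component would feed into the result
// iterator.
std::vector<int64_t> filterDeletedRows(
    const std::vector<int64_t>& batchRowIndices,
    const DeletionVector& dv) {
  std::vector<int64_t> kept;
  kept.reserve(batchRowIndices.size());
  for (int64_t idx : batchRowIndices) {
    if (!dv.isDeleted(idx)) {
      kept.push_back(idx);
    }
  }
  return kept;
}
```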
Run Gluten Clickhouse CI on x86
7 similar comments
Force-pushed from 7025b9b to 4fbb08e
This patch adds a Docker image with a Maven cache for Spark package/test, covering 3.3/3.4/3.5/4.0/4.1. A second patch will enable this cache in the CI jobs.
Clean up redundant logging. fixes: apache#11863
Bump the actions to match Apache policies; also fixed the cache image build. Signed-off-by: Yuan <yuanzhou@apache.org>
* [GLUTEN-6887][VL] Daily Update Velox Version (dft-2026_04_01)

Upstream Velox's New Commits:
24e6ab97b by Chengcheng Jin, fix(cudf): Fix complex data type name in format conversion and add tests (Part 1) (#16818)
d92b90029 by Natasha Sehgal, refactor: Propagate CastRule cost through canCoerce (#16821)
361a42252 by Rui Mo, fix(fuzzer): Reduce Spark aggregate fuzzer test pressure (#16964)
2c2fe2ab7 by root, fix: Ignore string column statistics for parquet-mr versions before 1.8.2 (#16744)
7faf27a86 by Chengcheng Jin, feat(cudf): Add the log to show detailed fallback message (#16900)
e603315e5 by Chang chen, feat(parquet): Add type widening support for INT and Decimal types with configurable narrowing (#16611)
1e1674dd8 by Rajeev Singh, docs: Add blog post for Adaptive per-function CPU tracking (#16945)
0c6b89d61 by Masha Basmanova, fix(build): Guard fuzzer examples subdirectory with VELOX_BUILD_TESTING (#16992)
8d6355d8d by Pratik Pugalia, build: Improve build impact comment layout (#16971)
44d561990 by Masha Basmanova, refactor: Add ConnectorRegistry class with tryGet and unregisterAll (#16977)
793f13f16 by Rajeev Singh, feat(expr-eval): Adaptive per-function CPU sampling for Velox expression evaluation (#16646)
1a4dc7a5a by Pratik Pugalia, fix: Off-by-one boundary bug in make_timestamp validation (#16944)
7f2c75c26 by Pratik Pugalia, Fix incorrect substr length in Tokenizer::matchUnquotedSubscript (#16972)
22b90045e by Masha Basmanova, docs: Add truncate markers to blog posts for cleaner listing page (#16975)

Signed-off-by: glutenperfbot <glutenperfbot@glutenproject-internal.com>

* Fix SPARK-18108: exclude partition columns from HiveTableHandle dataColumns

When Gluten creates HiveTableHandle, it was passing all columns (including partition columns) as dataColumns. This caused Velox's convertType() to validate partition column types against the Parquet file's physical types, failing when they differ (e.g., LongType in file vs IntegerType from partition inference).

Fix: build dataColumns excluding partition columns (ColumnType::kPartitionKey). Partition column values come from the partition path, not from the file.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Point Velox to PR3 branch with parquet type widening support

* Update VeloxTestSettings for Velox PR2

With the OAP INT narrowing commit replaced by upstream Velox PR #15173:
- Remove 2 excludes now passing: LongType->IntegerType, LongType->DateType
- Add 2 excludes for new failures: IntegerType->ShortType (OAP removed)

Excludes: 63 (net unchanged: -2 +2). Test results: 21 pass / 63 ignored.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Disable native writer for ParquetTypeWideningSuite

This suite tests the READ path only. Disable the native writer so Spark's writer produces correct V2 encodings (DELTA_BINARY_PACKED/DELTA_BYTE_ARRAY).
- Remove 10 excludes for decimal widening tests now passing

Remaining 38 excludes:
- 34: Velox native reader rejects incompatible decimal conversions regardless of reader config (no parquet-mr fallback)
- 4: Velox does not support DELTA_BYTE_ARRAY encoding

Test results: 46 pass / 38 ignored.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Override 33 type widening tests with expectError=true

The Velox native reader always behaves like Spark's vectorized reader, so tests that rely on parquet-mr behavior (vectorized=false) fail. Instead of just excluding these 33 tests, add testGluten overrides with expectError=true to verify Velox correctly rejects incompatible conversions.
- 16 unsupported INT->Decimal conversions
- 6 decimal precision narrowing cases
- 11 decimal precision+scale narrowing/mixed cases

VeloxTestSettings: 38 excludes (parent tests) + 33 testGluten overrides. Test results: 79 pass / 38 ignored (33 excluded parent + 5 truly excluded).

* fix velox rebase
Signed-off-by: Yuan <yuanzhou@apache.org>

* ignore ut
Signed-off-by: Yuan <yuanzhou@apache.org>

* ignore more ut
Signed-off-by: Yuan <yuanzhou@apache.org>

* fix ignore api
Signed-off-by: Yuan <yuanzhou@apache.org>

* ignore failed ut
Signed-off-by: Yuan <yuanzhou@apache.org>

* fix on clickhouse tpcds queries
The testing data on the ClickHouse side is not updated, so revert to using the old query.
Signed-off-by: Yuan <yuanzhou@apache.org>

* fix q30
* ignore ut
* fix
Signed-off-by: Yuan <yuanzhou@apache.org>

---------

Signed-off-by: glutenperfbot <glutenperfbot@glutenproject-internal.com>
Signed-off-by: Yuan <yuanzhou@apache.org>
Co-authored-by: glutenperfbot <glutenperfbot@glutenproject-internal.com>
Co-authored-by: Chang chen <changchen@microsoft.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Chang chen <changchen@apache.org>
Co-authored-by: Yuan <yuanzhou@apache.org>
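The dataColumns fix described above can be illustrated with a minimal sketch. The types here are simplified stand-ins, not Velox's actual handle classes (the real type is a HiveColumnHandle with more fields); only the filtering logic is the point.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-in for a connector column handle. The real Velox
// type carries names, types, and a ColumnType discriminator.
struct ColumnHandle {
  enum class ColumnType { kRegular, kPartitionKey };
  std::string name;
  ColumnType type;
};

// Build dataColumns for the table handle, excluding partition key
// columns: their values come from the partition path, not from the
// Parquet file, so validating them against the file's physical types
// (the pre-fix behavior) fails when the two differ.
std::vector<ColumnHandle> buildDataColumns(
    const std::vector<ColumnHandle>& allColumns) {
  std::vector<ColumnHandle> dataColumns;
  for (const auto& col : allColumns) {
    if (col.type != ColumnHandle::ColumnType::kPartitionKey) {
      dataColumns.push_back(col);
    }
  }
  return dataColumns;
}
```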
…ession (apache#11679) Check the CrossRelNode's expression and fall back if the expression is not supported. Fixes apache#11678.
…back (apache#11720) This PR adds a config to control fallback validation for TimestampNTZType in the Velox backend and adds a test for localtimestamp(). Currently, the validator treats TimestampNTZType as unsupported and forces the query to fall back to Spark. This makes it hard to develop and test features related to TimestampNTZ, including functions like localtimestamp(). With this change, the validation rule can be temporarily disabled during development and testing. Related issue: apache#1433 Co-authored-by: Mariam-Almesfer <mariam.almesfer@ibm.com>
…when effective row count < 2 (apache#11850) Co-authored-by: xumingyong <xumingyong@bigo.sg>
…eordered' suite (apache#11884) After apache#9473 , there is an issue when executing suite 'Eliminate two aggregate joins with attribute reordered'.
Signed-off-by: glutenperfbot <glutenperfbot@glutenproject-internal.com> Co-authored-by: glutenperfbot <glutenperfbot@glutenproject-internal.com>
Since Spark 3.2 support was dropped a few months ago, the related tests can be removed now. related: apache#11379
Force-pushed from 4fbb08e to 9b95547
Issue: #11901
What changes are proposed in this pull request?
This PR adds the native Delta deletion vector scan foundation for the Velox backend in Gluten.
The architecture is based on Delta Lake's Deletion Vectors High-Level Design.
The scope is intentionally limited to the read path. It introduces the JVM and native plumbing required to read Delta tables with deletion vectors natively, while keeping DML and DV write-path work out of scope for a follow-up PR.
The main changes are:
New native code under cpp/velox/compute/delta, including the RoaringBitmapArray helper needed by the native DV read path, together with tests. This PR does not include DML or DV write-path changes; those are deferred to a follow-up PR.
How was this patch tested?
New C++ unit tests under cpp/velox/compute/delta/tests: DeltaConnectorTest, DeltaDeletionVectorReaderTest, DeltaSplitTest, DeltaUuidUtilsTest. These are built as part of the velox target and compile through the new Delta translation units.
Was this patch authored or co-authored using generative AI tooling?
Co-authored: IBM Bob