▶️ SECURITY

| Feature | Altinity PR | First Altinity Release | upstream PR | First upstream Release |
| --- | --- | --- | --- | --- |
| Token-based authentication and authorization | #1078 | 25.8.12 | | |

---
▶️ PERFORMANCE

| Feature | Altinity PR | First Altinity Release | upstream PR | First upstream Release |
| --- | --- | --- | --- | --- |
| Distributed execution: better split tasks by row group IDs | #1237 | 25.8.12 | ClickHouse#87508 | 25.11.1 |
| Enable Parquet reader v3 by default | #1232 | 25.8.12 | ClickHouse#88827 | 25.11.1 |
| Set max message size on Parquet v3 reader | #1198 | 25.8.12 | ClickHouse#91737 | 25.12.1 |
| ListObjectsV2 cache | #743 | 25.2.2 | | |
| Lazy load metadata for DataLake | #742 | 25.2.2 | | |
| Parquet file metadata caching: clear cache | #713 | 25.2.2 | | |
| Parquet file metadata caching | #586 | 24.12.2 | | |
| Parquet file metadata caching: use cache for ParquetMetadata format | #636 | 24.12.2 | | |
| Parquet file metadata caching: turn cache on by default | #669, #674 | 24.12.2 | | |

---
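The Parquet metadata cache rows above correspond to runtime knobs. A minimal sketch of how such a cache is typically exercised; the setting and SYSTEM command names below are assumptions based on the upstream feature, not confirmed by this table, so check them against `system.settings` in your build:

```sql
-- Assumed names: the setting and the SYSTEM DROP command are illustrative.
SELECT count()
FROM s3('https://bucket.s3.amazonaws.com/data/*.parquet')  -- placeholder URL
SETTINGS input_format_parquet_use_metadata_cache = 1;      -- on by default per #669/#674

-- Clear the metadata cache (#713):
SYSTEM DROP PARQUET METADATA CACHE;
```
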
▶️ SWARMS

| Feature | Altinity PR | First Altinity Release | upstream PR | First upstream Release |
| --- | --- | --- | --- | --- |
| Profile events for task distribution in ObjectStorageCluster requests | #1172 | 25.8.12 | | |
| SYSTEM STOP SWARM MODE command for graceful shutdown of a swarm node | #1014 | 25.6.5 | | |
| JOIN with *Cluster table functions and swarm queries | #972 | 25.6.5 | | |
| Restart cluster tasks on lost connections | #780 | 25.6.5 | | |
| Add iceberg_metadata_file_path to the query when sending it to swarm nodes | #898 | 25.3.3 | | |
| Setting lock_object_storage_task_distribution_ms to improve cache locality with a swarm cluster | #866 | 25.3.3 | | |
| Setting object_storage_max_nodes | #677 | 25.2.2 | | |
| Convert functions with the object_storage_cluster setting to cluster functions | #712 | 25.2.2 | | |
| Fix remote call of the s3Cluster function | #583 | 24.12.2 | | |
| Limit parsing threads for the distributed case | #648 | 24.12.2 | | |
| Distributed requests to tables with object storage engines | #615 | 24.12.2 | | |

---
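The swarm features above combine in practice: s3Cluster fans a scan out across the swarm, the two settings shape task distribution, and SYSTEM STOP SWARM MODE drains a node. A minimal sketch, assuming a swarm cluster named `swarm` and a placeholder bucket URL (both illustrative, as are the setting values):

```sql
-- 'swarm' and the URL are placeholders; setting values are illustrative.
SELECT count()
FROM s3Cluster('swarm', 'https://bucket.s3.amazonaws.com/data/*.parquet')
SETTINGS
    object_storage_max_nodes = 8,                    -- cap participating nodes (#677)
    lock_object_storage_task_distribution_ms = 500;  -- pin tasks for cache locality (#866)

-- On a swarm node, drain gracefully before shutdown (#1014):
SYSTEM STOP SWARM MODE;
```
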
▶️ CATALOGS

| Feature | Altinity PR | First Altinity Release | upstream PR | First upstream Release |
| --- | --- | --- | --- | --- |
| Google Cloud Storage support for data lakes | #1318 | 25.8.14 | ClickHouse#93866 | 26.1.1 |
| Add metrics for Iceberg, S3, and Azure | #1123 | 25.8.9 | ClickHouse#93332 | |
| Add icebergLocalCluster table function to allow cluster reads from a shared disk | #1120 | 25.8.9 | ClickHouse#93323 | 26.1.1 |
| Setting iceberg_timezone_for_timestamptz for the Iceberg TimestampTZ type | #1103 | 25.8.9 | | |
| Allow reading Iceberg data from any location | #1092 | 25.8.9 | | |
| Read optimization using Iceberg metadata | #1019 | 25.8.9 | | |
| Lazy load metadata for DataLake | #742 | 25.8.9 | | |
| Expose IcebergS3 table metainformation in system.tables | #959 | 25.6.5 | | |
| Support different warehouses behind the Iceberg REST catalog | #860 | 25.3.3 | | |
| General engine definition for Iceberg tables | #675 | 25.2.2 | | |
| RBAC for S3 | #688 | 25.2.2 | | |

---
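Most catalog entries above surface through a catalog database engine plus per-query settings. A hedged sketch, assuming the upstream DataLakeCatalog engine with placeholder endpoint, credentials, and warehouse; engine arguments vary by release, so treat this as illustrative rather than canonical syntax:

```sql
-- Endpoint, credentials, warehouse, and table names are placeholders.
CREATE DATABASE iceberg_db
ENGINE = DataLakeCatalog('http://rest-catalog:8181/v1', 'key', 'secret')
SETTINGS catalog_type = 'rest', warehouse = 'demo';

-- Render Iceberg TimestampTZ values in a chosen zone (#1103):
SELECT * FROM iceberg_db.`db.events`
SETTINGS iceberg_timezone_for_timestamptz = 'UTC';
```
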
▶️ TIERED STORAGE

| Feature | Altinity PR | First Altinity Release | upstream PR | First upstream Release |
| --- | --- | --- | --- | --- |
| Add query ID to system.part_log, system.exports, and system.replicated_partition_exports | #1330 | 25.8.14 | | |
| Allow MergeTree materialized/alias columns to be exported through part export | #1324 | 25.8.14 | | |
| Accept a table function as the destination for part export | #1320 | 25.8.14 | | |
| Add settings to control the behavior of pending mutations/patch parts in the part export feature | #1294 | 25.8.14 | | |
| Add support for ALIAS columns in segments for Engine=Hybrid | #1272 | 25.8.14 | | |
| Add the ability to split large Parquet files on part export | #1229 | 25.8.12 | | |
| Add experimental support for automatically reconciling column-type mismatches across segments in the Hybrid table engine | #1156 | 25.8.9 | | |
| Preserve the entire format settings object in the export part manifest | #1144 | 25.8.9 | | |
| Export partition support for the ReplicatedMergeTree engine | #1124 | 25.8.9 | | |
| Allow any partition strategy to accept part export | #1083 | 25.8.9 | | |
| Engine=Hybrid implementation | #1071 | 25.8.9 | | |
| Add observability for EXPORT PART | #1017 | 25.6.5 | | |
| Simple MergeTree part export to object storage | #1009 | 25.6.5 | | |
| s3Cluster Hive partitioning for the old analyzer | #703 | 25.2.2 | ClickHouse#93284 | |
| s3Cluster Hive partitioning | #584 | 24.12.2 | ClickHouse#73910 | |

---
↔️ Feature parity in the latest Antalya release ↔️
---
| Feature | Altinity PR | First Altinity Release | upstream PR | First upstream Release |
| --- | --- | --- | --- | --- |
| Allow key-value arguments in the s3/s3Cluster engine | #1028 | 25.6.5 | ClickHouse#85134 | 25.8.1 |
| AWS S3 authentication with an explicitly provided IAM role | #986 | 25.6.5 | ClickHouse#84011 | 25.8.1 |
| Support for Hive partition-style reads and writes | #934 | 25.6.5 | ClickHouse#76802 | 25.8.1 |
| Support compressed metadata in Iceberg | #1005 | 25.6.5 | ClickHouse#81451 | 25.7.1 |
| Support TimestampTZ in the Glue catalog | #992 | 25.6.5 | ClickHouse#83132 | 25.7.1 |
| Support writing Parquet enum as byte array | #989 | 25.6.5 | ClickHouse#81090 | 25.7.1 |
| Iceberg table pruning in cluster requests | #770 | 25.2.2 | ClickHouse#82131 | 25.7.1 |
| Support for Iceberg partition pruning with the bucket transform | #786 | 25.3.3 | ClickHouse#79262 | 25.5.1 |
| Improve performance of Hive path parsing | #734 | 25.2.2 | ClickHouse#79067 | 25.5.1 |
| Iceberg metadata files cache | #733 | 25.2.2 | ClickHouse#77156 | 25.5.1 |
| Rendezvous hashing filesystem cache | #709 | 25.2.2 | ClickHouse#77326 | 25.5.1 |
| Better S3 URL parsing for Hive partitioning | #700 | 25.2.2 | ClickHouse#78185 | 25.5.1 |
| Support MinMax index for Iceberg | #733 | 25.2.2 | ClickHouse#78242 | 25.4.1 |
| Support partition pruning in the DeltaLake engine | #733 | 25.2.2 | ClickHouse#78486 | 25.4.1 |
| Iceberg time travel by snapshots | #733 | 25.2.2 | ClickHouse#77439 | 25.4.1 |
| Cluster auto-discovery | #629 | 24.12.2 | ClickHouse#76001 | 25.3.1 |
| Alternative syntax for object storage cluster functions | #592 | 24.12.2 | ClickHouse#70659 | 25.3.1 |
| Unity catalog integration | same as upstream | 25.3.3 | ClickHouse#76988 | 25.3.1 |
| Glue catalog integration | same as upstream | 25.3.3 | ClickHouse#77257 | 25.3.1 |
| Parquet: merge bloom filter and min/max evaluation | #590 | 24.12.2 | ClickHouse#71383 | 25.2.1 |
| Parquet: Int logical type support in the native reader | #589 | 24.12.2 | ClickHouse#72105 | 25.1.1 |
| Iceberg REST catalog integration | same as upstream | 24.12.2 | ClickHouse#71542 | 24.12.1 |
| Parquet: boolean support in the native reader | same as upstream | 24.12.2 | ClickHouse#71055 | 24.11.1 |
| Auxiliary autodiscovery | #531 | 24.12.2 | ClickHouse#71911 | 24.11.1 |
| Parquet: bloom filter support | same as upstream | 24.12.2 | ClickHouse#62966 | 24.10.1 |
| Parquet: page header v2 support in the native reader | same as upstream | 24.12.2 | ClickHouse#70807 | 24.10.1 |

© 2024-2026 Altinity Inc. All rights reserved. Altinity®, Altinity.Cloud®, and Altinity Stable Builds® are registered trademarks of Altinity, Inc. ClickHouse® is a registered trademark of ClickHouse, Inc.