Commit bad6e3e

Fix flaky 00156_max_execution_speed_sample_merge by using small data and more predictable timings, #3
1 parent 0cb8c0d commit bad6e3e

1 file changed: 3 additions, 1 deletion


tests/queries/0_stateless/00156_max_execution_speed_sample_merge.sql

```diff
@@ -5,9 +5,11 @@
 -- NOTE: This test uses simple synthetic data to validate the fact throttling was applied.
 -- If throttling works as expected - each execution will take >= 1 second, as we allow not more than {max_execution_speed} records/seconds
 -- If it doesn't - each select will finish immediately, and the test will fail
+-- NOTE: Setting max_block_size=1 to ensure sleepEachRow(..) applies per each row guaranteed and the resulting timing is predictable [2-3] seconds

-SET max_execution_speed = 5;
+SET max_execution_speed = 10;
 SET timeout_before_checking_execution_speed = 0;
+SET max_block_size = 1;

 CREATE TEMPORARY TABLE times (t DateTime);
```
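The timing the test relies on follows from simple rate-limiting arithmetic: with `max_execution_speed` capped at N rows/second, reading M rows must take at least M/N seconds. Below is a minimal sketch of that idea in Python (the function name `throttled_scan` is hypothetical, and this is not ClickHouse's actual throttler implementation): after each row, sleep just enough that the average rate never exceeds the cap.

```python
import time

def throttled_scan(num_rows, max_execution_speed):
    """Hypothetical sketch of rows-per-second throttling, not ClickHouse's
    actual implementation: after each row, sleep so that the average
    processing rate never exceeds max_execution_speed."""
    start = time.monotonic()
    for processed in range(1, num_rows + 1):
        # ... process one row here ...
        # Minimum wall-clock time the scan should have taken so far.
        min_elapsed = processed / max_execution_speed
        sleep_for = min_elapsed - (time.monotonic() - start)
        if sleep_for > 0:
            time.sleep(sleep_for)
    return time.monotonic() - start
```

Under this model, scanning 10 rows with `max_execution_speed = 10` takes at least 1 second, which is the ">= 1 second" property the test asserts; without throttling the same scan finishes almost instantly, and the test would fail.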
