Further query perf updates for oximeter field lookups. #10110
Conversation
bnaecker
left a comment
This is clever! I'd like to see some performance numbers if possible. Could we look at the returned query summary from some representative OxQL queries, and see what the performance impact is? If possible, looking at memory consumption would be nice too, but we don't have an easy way to access that per-query. You would have to take the query ID and go back to the system.query_log table, I think.
oximeter/db/src/client/oxql.rs
Outdated
```rust
    .into_iter()
    .collect();
```
Does this actually need to be a Vec<_>? It looks like we iterate over it below, which could happen just as well with the set itself.
No, this is just noise from Claude that I didn't catch earlier. Updated to use a set instead.
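To illustrate the point above: a set can be iterated directly, so collecting into a `Vec<_>` first adds nothing. A minimal sketch, with hypothetical table names standing in for what the real query builder derives from the timeseries schema:

```rust
use std::collections::BTreeSet;

// Hypothetical sketch; the real code in oxql.rs derives the relevant
// field tables from the timeseries schema. Names here are illustrative.
fn build_from_clause() -> String {
    let field_tables: BTreeSet<&str> =
        ["fields_string", "fields_uuid", "fields_i16"].into_iter().collect();

    // Iterate the set directly instead of collecting into a Vec<_> first.
    // BTreeSet iteration order is deterministic (sorted by key), which
    // keeps the generated SQL stable across runs.
    let mut query = String::from("SELECT * FROM ");
    for (i, table) in field_tables.iter().enumerate() {
        if i > 0 {
            query.push_str(", ");
        }
        query.push_str(table);
    }
    query
}

fn main() {
    println!("{}", build_from_clause());
    // Prints: SELECT * FROM fields_i16, fields_string, fields_uuid
}
```

A `BTreeSet` (rather than `HashSet`) is the natural choice here, since deterministic ordering makes the generated query reproducible and testable.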
```rust
} else {
    query.push_str(" WHERE ");
}
query.push_str(" HAVING ");
```
Is it helpful to have a test checking this new syntax element? I always find it hard to build a SQL query programmatically without seeing the final output.
Updated the existing test to use a predicate so that our expectorated test file includes the HAVING clause.
@@ -0,0 +1,44 @@
```sql
SELECT
  assumeNotNull(anyIf(fields_i32, field_name = 'foo')) AS foo,
  assumeNotNull(anyIf(fields_u32, field_name = 'index')) AS INDEX,
```
Why is INDEX capitalized here?
Because we're using sqlformat to format the generated sql, so that it's legible to humans, and INDEX is a sql keyword. Updated to quote identifiers so that we get AS "index" rather than AS INDEX.
Cool thanks! I think quoting an identifier is more obvious than the capitalization. I definitely thought this was case-insensitive! ClickHouse SQL is weird in lots of ways though :)
oximeter/db/src/client/oxql.rs
Outdated
```rust
let union_parts: Vec<String> = field_types
    .iter()
    .map(|&this_type| {
        let value_cols: Vec<String> = field_types
```
This nested loop strikes me as inefficient, given that we know that all but one of the elements is going to be `NULL AS <some_table_name>`. I wonder if we can:

- build an array of all the possible `NULL AS fields_{type}` entries we'll need
- on L995, call `.iter().enumerate()` instead of just `.iter()`
- inside the `.map()` on L996, clone the array and overwrite the one element at that index with a string like `field_value AS fields_{type}` instead

I'm not positive that will be noticeably better, given the small sizes, but it's definitely asymptotically better.
Done. I don't think we'll see much of a performance difference, since the number of field types is small and query latency is dominated by clickhouse, but the new version is also more readable.
Awesome, thanks. I agree it's unlikely to be noticeable.
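The clone-and-overwrite approach suggested in this thread can be sketched roughly as follows (hypothetical column and table names; the real code builds more columns per `SELECT`):

```rust
// Sketch of the suggestion above: precompute the "all NULL" column list
// once, then for each field table clone it and overwrite only the slot
// for that table's own value column, instead of a nested loop.
fn build_union_selects(field_types: &[&str]) -> Vec<String> {
    // One "NULL AS fields_{type}" entry per possible value column.
    let null_cols: Vec<String> = field_types
        .iter()
        .map(|ty| format!("NULL AS fields_{ty}"))
        .collect();

    field_types
        .iter()
        .enumerate()
        .map(|(i, ty)| {
            // Clone the template and overwrite exactly one element.
            let mut cols = null_cols.clone();
            cols[i] = format!("field_value AS fields_{ty}");
            format!("SELECT {} FROM fields_{ty}", cols.join(", "))
        })
        .collect()
}

fn main() {
    for part in build_union_selects(&["string", "uuid"]) {
        println!("{part}");
    }
}
```

Each generated `SELECT` carries the full set of value columns, with `NULL` in every slot except the one belonging to its own table, so the parts can be glued together with `UNION ALL`.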
Force-pushed from 4160734 to 32ab0a5.
Now with benchmarks. I'm running this on an OmniOS VM on dogfood, running ClickHouse v23. I copied over the field tables from dogfood, but no measurement tables. To benchmark, check out main, then collect some observations. Then check out this branch and run again, and compare the two sets of results. Overall, my qualitative summary is that changing from JOIN to UNION helps more as the number of field tables per metric increases, and less as timeseries cardinality increases, but never makes queries slower than the status quo.
Joins are expensive in ClickHouse. In #9262, we dropped the number of joins in the field lookup query from the number of fields to the number of relevant field tables. However, this still leaves us with up to six field tables to join. In this patch, we do away with joins for the field lookup query entirely. Instead, we select the relevant rows from each field table, then combine them with UNION ALL, and use anyIf and GROUP BY to pivot from long format to wide. This speeds up the field query with identical results, with greater speedups for timeseries that use more field tables. A timeseries whose labels are all strings will see no difference in performance; a series that has string, uuid, u8, u16, u32, and i16 labels (like the switch_port_control_data_link metrics) will return results much faster.
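The shape of the rewritten lookup query can be sketched like this. This is an illustrative sketch only, with hypothetical field names, two field tables instead of up to six, and a made-up timeseries name; the actual generated query carries more columns:

```sql
SELECT
    timeseries_key,
    assumeNotNull(anyIf(fields_string, field_name = 'name')) AS "name",
    assumeNotNull(anyIf(fields_u32, field_name = 'index')) AS "index"
FROM
(
    -- Long format: each field table contributes its own value column,
    -- with NULL placeholders for the other tables' columns, so the
    -- SELECTs are union-compatible.
    SELECT timeseries_key, field_name, field_value AS fields_string, NULL AS fields_u32
    FROM fields_string
    WHERE timeseries_name = 'some_target:some_metric'
    UNION ALL
    SELECT timeseries_key, field_name, NULL AS fields_string, field_value AS fields_u32
    FROM fields_u32
    WHERE timeseries_name = 'some_target:some_metric'
)
-- Pivot long -> wide: one row per timeseries_key, one column per field.
GROUP BY timeseries_key
HAVING "name" = 'foo'
```

`anyIf` picks the single non-NULL value for each named field within a group, and the HAVING clause applies any field predicates after the pivot, replacing what a JOIN-based query would express in its WHERE clause.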
Force-pushed from 32ab0a5 to a435519.
This is great! Thanks for adding the benchmarking data. I agree that this seems no worse, and often much better, than the status quo. We don't have to do it now, but it would be great to eventually add a sweep over the number of unique timeseries in the result. I'm specifically interested in the case when the cardinality is high, but the query is highly selective, so that we should be pulling out a very small amount of data from the joined / unioned field tables. This looks great, thanks!
Add oxql field lookup benchmark. To avoid the complication of generating realistic synthetic data, we provide scripts for fetching and loading fields from a running ClickHouse instance. Checking in to benchmark changes in #10110.