
PARQUET-2249: Introduce IEEE 754 total order & NaN-counts#514

Open
JFinis wants to merge 2 commits into apache:master from JFinis:totalorder_and_nancounts

Conversation

Contributor

@JFinis JFinis commented Aug 9, 2025

This commit is a combination of the following PRs:

  • Introduce IEEE 754 total order (apache#221)
  • Add nan_count to handle NaNs in statistics (apache#196)

Both of these PRs try to solve the same problem; read the descriptions of the respective PRs for an explanation.

This PR is the result of an extended discussion in which it was repeatedly suggested that a combination of the two approaches could be another possible solution to the problem. Please refer to the discussion on the mailing list and in the two PRs for details. The mailing list discussion can be found here:
https://lists.apache.org/thread/lzh0dvrvnsy8kvflvl61nfbn6f9js81s

The contents of this PR are basically a straightforward combination of the two approaches:

  • IEEE total order is introduced as a new order for floating point types
  • nan_count and nan_counts fields are added

Legacy writers may not write the nan_count(s) fields, so readers have to handle their absence. Legacy writers may also have included NaNs in min/max bounds, so readers have to handle that as well.

As there are no legacy writers writing IEEE total order, the nan_count(s) fields are defined to be mandatory when this order is used, so readers can assume their presence.

This commit removes nan_pages from the ColumnIndex, which the nan_counts PR mandated. We no longer need them, as IEEE total order solves this: since this is a new order with no legacy writers or readers, we are free to define that pages containing only NaNs actually write NaNs into the min & max bounds, so readers can assume that isNaN(min) signals an only-NaN page.
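
To illustrate how a reader could apply these rules, here is a rough sketch (Rust; the type and function names are illustrative only and are not part of the spec or of any existing implementation):

```rust
// Hedged sketch: how a reader might interpret float statistics under these
// rules. The names (FloatStats, classify) are illustrative, not from the spec.
#[derive(Debug)]
enum FloatStats {
    /// Bounds cover only non-NaN values and nan_count is exact.
    Exact { min: f64, max: f64, nan_count: i64 },
    /// Every non-null value in the page is NaN (only possible with total order).
    OnlyNaN,
    /// Legacy statistics: bounds may include NaN and the NaN count is unknown.
    Legacy { min: f64, max: f64 },
}

fn classify(min: f64, max: f64, nan_count: Option<i64>, total_order: bool) -> FloatStats {
    if total_order {
        // nan_count(s) are mandatory with IEEE754_TOTAL_ORDER per this proposal.
        let nan_count = nan_count.expect("nan_count is required with total order");
        if min.is_nan() {
            // isNaN(min) signals an only-NaN page under the new order.
            FloatStats::OnlyNaN
        } else {
            FloatStats::Exact { min, max, nan_count }
        }
    } else {
        match nan_count {
            // A writer that sets nan_count keeps NaNs out of the bounds.
            Some(n) if !min.is_nan() && !max.is_nan() => {
                FloatStats::Exact { min, max, nan_count: n }
            }
            _ => FloatStats::Legacy { min, max },
        }
    }
}
```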

Note: If this PR is chosen for adoption, I will update the commit message to state the problem itself instead of just pointing to the other PRs. Until then, I would rather link to them than repeat myself, in case this PR isn't chosen anyway.

Contributor

@alamb alamb left a comment

I went through the proposal and I left some stylistic comments which I think might help clarity, but are not required.

Thank you @JFinis for pushing this along

*
* When writing statistics the following rules should be followed:
* - NaNs should not be written to min or max statistics fields.
* - It is suggested to always set the nan_count field for floating
Contributor

The heading says "the following rules should be followed" but the first one says "it is suggested" which seems to imply it is optional. I recommend removing the "suggested"

@orlp orlp Aug 11, 2025

Even 'should' is still optional per RFC2119. If this isn't optional, 'must' or 'shall' is the correct term. But this is a problem in more places in this document, so I take it you're not following that RFC.

Contributor

I think the requirement should be to set nan_count, so I agree that this should not be a suggestion.

Contributor Author

This whole document uses "should" where "shall" or "must" would actually be more appropriate. I was therefore going with the general style of this document for consistency. I totally agree though; it has always bothered me that the spec uses "should" everywhere, making it sound as if all of this is just a mere suggestion and implementors can do what they want.

That being said, if we want to change this, I would rather do it in one PR that updates the whole document to stay consistent.

* must be followed:
* - Writing the nan_count field is mandatory when using this ordering,
* especially also if it is zero.
* - NaNs should not be written to min or max statistics fields except

I think the spec should specify exactly what should be written, not just rule out what shouldn't be. I assume you meant to specify here that the smallest/largest non-NaN value must be written instead?


I also think this rule (assuming I understood you correctly) is a bit overcomplicated? Not sure why we'd make the rules different depending on whether or not it is a column index. May I propose the following instead, which applies regardless of whether it is a column index?

When writing statistics for columns with this order, the following rules must be followed:

  • Min or max statistics if written must contain the smallest or largest non-NaN value respectively. If there are exclusively NaNs they must contain the smallest and largest NaN values respectively instead, as defined by the IEEE 754 total order.

Contributor Author

Sure, we can also keep it the same for the column index and statistics here. My initial idea was to keep the behavior close to what we currently have where possible. But I agree that we can also instead make it different from the current behavior and therefore more consistent between the column index and statistics.
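
For what it's worth, a writer-side sketch of that combined rule could look roughly like this (Rust; the function is illustrative and not from the PR, and it relies on f64::total_cmp, which implements the IEEE 754 totalOrder predicate):

```rust
// Hedged writer-side sketch of the rule discussed above (function name and
// shape are illustrative). Bounds cover the non-NaN values; only an all-NaN
// page falls back to the smallest/largest NaN under IEEE 754 total order.
fn page_bounds(values: &[f64]) -> Option<(f64, f64)> {
    let pick = |vals: &[f64]| -> Option<(f64, f64)> {
        let mut it = vals.iter().copied();
        let first = it.next()?;
        let (mut min, mut max) = (first, first);
        for v in it {
            if v.total_cmp(&min).is_lt() {
                min = v;
            }
            if v.total_cmp(&max).is_gt() {
                max = v;
            }
        }
        Some((min, max))
    };
    let non_nan: Vec<f64> = values.iter().copied().filter(|v| !v.is_nan()).collect();
    if non_nan.is_empty() {
        // Exclusively NaNs (or no values at all): use total-order NaN bounds.
        pick(values)
    } else {
        pick(&non_nan)
    }
}
```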

* anything else. It also defines an order between different bit representations
* of the same value.
*
* The formal definition is as follows:
Contributor

Do we need to copy this here? Having a version here that duplicates the IEEE-754 version makes me worry that they will get out of sync or have errors due to copying. We should make it clear that the IEEE-754 version is correct if the two ever differ.

Contributor Author

I don't fully understand what lines your comment is referring to. What is the part you see duplicated?

* * If IEEE754_TOTAL_ORDER is used for the column and all non-null values
* of a page are NaN, then min_values[i] and max_values[i] must be set to
* the smallest and largest NaN value contained in the page, as defined
* by the IEEE 754 total order.
Contributor

Why does this require the actual NaN bound rather than using a standard NaN value?

Contributor Author

My intention here was that if we already use an order that orders NaNs in a well-defined manner, we can also use it. I agree we could also settle for "any NaN value is okay in this case", but that somewhat goes against the spirit of IEEE total order.

Contributor

FWIW it is all these strange special cases that are why I still prefer the total order PR - nan_counts sound simple but in practice there be dragons.

Contributor

I agree with @tustvold, I don't think the added complexity of this proposal is worth it given the meager added benefit. This proposal still means that query engines that do use IEEE total ordering will be unable to filter due to the lack of knowledge of the sign of the NaNs seen. This is the flip side of total ordering not quite working for engines that lump all NaNs together. So the only real benefit to this proposal is to not "poison" the statistics.

Contributor Author

JFinis commented Aug 13, 2025

@tustvold @crepererum @westonpace I would especially like to get your opinions on this, as you were proponents of total order for simplicity. Does this PR still cover what you wanted to get out of this?

@mapleFU @etseidl @gszadovszky @pitrou FYI, as you were part of the initial discussion on the previous PRs.

Contributor

@tustvold tustvold left a comment

Am I understanding correctly that the change to add nan_counts means that NaNs are now excluded from the min/max statistics? I believe this creates a new problem for engines that use total ordering as the sign of the NaNs is not known, and therefore there are scenarios where predicates can't be pushed down where they could be in the absence of nan_counts.

My 2 cents is that there probably is no perfect solution here, adding nan_counts is worse for some engines and better for some others, and as such my personal preference would still be to go with the simpler solution, that has existing implementations and can ship today.

The argument for nan_counts appears to be that the presence of a NaN has the effect of poisoning the statistics. I don't really follow why NaNs are special in this regard, any extreme value has the same effect, if people want to effectively push down such predicates they should be sorting so as to cluster data points of similar magnitude. It has been said that people coming from numpy use NaN instead of Null, but as was said on the mailing list, I don't think the onus should be on parquet to handle this, rather people should convert NaNs to Nulls if that is the semantic they want.

That being said, if people are motivated to push ahead with this hybrid solution, I don't really feel strongly. Ultimately the issue is fixing the fact that predicates can't be pushed down on floats in general; how that applies to NaNs is, IMO, kind of irrelevant to the vast majority of workloads. This has dragged on for 2 years; we'd almost reached consensus, and I can't help feeling we've been startled by one dissenting opinion...


orlp commented Aug 13, 2025

@tustvold

Am I understanding correctly that the change to add nan_counts means that NaNs are now excluded from the min/max statistics? I believe this creates a new problem for engines that use total ordering as the sign of the NaNs is not known, and therefore there are scenarios where predicates can't be pushed down where they could be in the absence of nan_counts.

No, the predicates still can be pushed down, regardless of the semantics of your engine.

If your engine uses total ordering, then for the predicate col > c you skip a page when the statistics indicate col_max <= c && nan_count == 0. If your engine uses partial ordering, you simply check whether col_max <= c.
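
Expressed as a sketch (illustrative function, not taken from any implementation), the skip decision for col > c looks roughly like this:

```rust
// Sketch of the skip rule above for the predicate `col > c`, assuming
// statistics written per this proposal (NaN-free bounds plus nan_count).
fn can_skip_gt(col_max: f64, nan_count: i64, c: f64, engine_total_order: bool) -> bool {
    if engine_total_order {
        // NaN sorts above every real number in total order, so the page can
        // only be skipped if it provably contains no NaNs.
        col_max <= c && nan_count == 0
    } else {
        // With IEEE partial ordering, NaN > c is always false, so NaNs in the
        // page can never satisfy the predicate; only the max bound matters.
        col_max <= c
    }
}
```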

My 2 cents is that there probably is no perfect solution here, adding nan_counts is worse for some engines and better for some others

It's not worse for any engine as far as I can see; it's a strict improvement in filtering capabilities for everyone.

I don't really follow why NaNs are special in this regard, any extreme value has the same effect

NaNs aren't extreme values for engines without total ordering, they are simply unordered.

Contributor

tustvold commented Aug 13, 2025

I seem to remember from a previous discussion around the need to have signed NaNs to allow for predicate pushdown, but I can't remember the example.

Regardless my broader point still stands that we are adding a non-trivial amount of complexity, there is a lot of subtlety both at read and write time, to yield an improvement that to be brutally honest I am not really sure will benefit all that many real workloads. It's all tradeoffs, if people want to and are motivated to proceed with the hybrid approach I don't feel strongly, but I personally prefer the simpler option that is more likely to be implemented correctly across the many many parquet implementations. The proposal here is strictly more complex than the existing specification which implementations still implement incorrectly.


orlp commented Aug 13, 2025

I seem to remember from a previous discussion around the need to have signed NaNs to allow for predicate pushdown, but I can't remember the example.

Signed NaN counts are only useful to avoid treating a -NaN as a false positive for the col > c case in total-ordering engines. To give a concrete example: if the page contains [-NaN, -3, 42] and the predicate is col > 100, then this page could be skipped by such an engine with statistics total_min = -NaN, total_max = 42, but not with min = -3, max = 42, nan_count = 1, because that singular NaN could be a positive NaN, which would match since +NaN > 100. Thus, if signed NaN counts are unavailable, any NaN, regardless of sign, will have to be treated as an extremal value by those engines.

But only having a single NaN count doesn't block predicate pushdown whatsoever, and since you're fine with NaN poisoning anyway I don't think you'd care about the above -NaN edge case.
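
Spelled out as a toy sketch (not code from this thread; names are illustrative):

```rust
// Toy encoding of the example above for an engine that itself compares with
// IEEE total order; the helper and its name are illustrative only.
fn skippable_by_total_order_engine(max_non_nan: f64, nan_count: i64, c: f64) -> bool {
    // Without the sign of the counted NaNs, any NaN must be assumed to
    // possibly be +NaN, which sorts above every threshold.
    nan_count == 0 && max_non_nan <= c
}

fn main() {
    // Page values [-NaN, -3.0, 42.0], predicate col > 100:
    // with min = -3, max = 42, nan_count = 1 the page cannot be skipped,
    // because the single NaN might be +NaN.
    assert!(!skippable_by_total_order_engine(42.0, 1, 100.0));
    // With total-order bounds (total_min = -NaN, total_max = 42) the NaN's
    // sign would be known and such an engine could have skipped the page.
}
```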

Contributor Author

JFinis commented Aug 13, 2025

I seem to remember from a previous discussion around the need to have signed NaNs to allow for predicate pushdown, but I can't remember the example.

Regardless my broader point still stands that we are adding a non-trivial amount of complexity, there is a lot of subtlety both at read and write time, to yield an improvement that to be brutally honest I am not really sure will benefit all that many real workloads. It's all tradeoffs, if people want to and are motivated to proceed with the hybrid approach I don't feel strongly, but I personally prefer the simpler option that is more likely to be implemented correctly across the many many parquet implementations. The proposal here is strictly more complex than the existing specification which implementations still implement incorrectly.

Gut-feeling-wise, I agree with your assessment. I feel like I personally can implement this correctly in our engine, but I have now spent years of my life thinking about this, so I guess I'm not representative of the average Parquet maintainer. I also feel that this has such a high chance of being implemented incorrectly that the more straightforward solution of just IEEE total ordering would be preferable. My gut feeling is that the risk of added implementation complexity, and thus the chance of wrong implementations, outweighs the advantage of not having "NaN poisoning" of statistics.

Contributor

alamb commented Aug 15, 2025

I have filed a ticket to track trying to implement this in the arrow-rs implementation of Parquet, which will hopefully help us quantify how complicated this proposal is to implement:

Member

wgtmac commented Sep 4, 2025

@tustvold @crepererum @westonpace I would especially like to get your opinions on this, as you were proponents of total order for simplicity. Does this PR still cover what you wanted to get out of this?

@mapleFU @etseidl @gszadovszky @pitrou FYI, as you were part of the initial discussion on the previous PRs.

I think we have waited long enough for feedback. Do you want to start a vote with the two proposals on the dev ML, @JFinis?

JFinis and others added 2 commits February 11, 2026 14:12
This commit is a combination of the following PRs:

* Introduce IEEE 754 total order
  apache#221
* Add nan_count to handle NaNs in statistics
  apache#196

Both these PRs try to solve the same problems; read
the description of the respective PRs for explanation.

This PR is the result of an extended discussion in
which it was repeatedly brought up that another possible
solution to the problem could be the combination of
the two approaches. Please refer to this discussion
on the mailing list and in the two PRs for details.
The mailing list discussion can be found here:
https://lists.apache.org/thread/lzh0dvrvnsy8kvflvl61nfbn6f9js81s

The contents of this PR are basically a straightforward
combination of the two approaches:
* IEEE total order is introduced as a new order for floating
  point types
* nan_count and nan_counts fields are added

Legacy writers may not write nan_count(s) fields,
so readers have to handle them being absent. Also, legacy
writers may have included NaNs into min/max bounds, so readers
also have to handle that.

As there are no legacy writers writing IEEE total order,
nan_count(s) are defined to be mandatory if this order is used,
so readers can assume their presence when this order is used.

This commit removes `nan_pages` from the ColumnIndex,
which the nan_counts PR mandated. We don't need them anymore,
as IEEE total order solves this: As this is a new order and
there are thus no legacy writers and readers, we have the
freedom to define that for only-NaN pages using this order,
we can actually write NaNs into min & max bounds in this case,
and readers can assume that isNaN(min) signals an only-NaN
page.
@wgtmac wgtmac force-pushed the totalorder_and_nancounts branch from c2116b0 to 62f4dc8 on February 11, 2026 at 07:55
