[CALCITE-7448] Add support for ':' variant path access syntax #4844
tmater wants to merge 14 commits into apache:main
Conversation
I might wait until the CI issue is resolved before reviewing this PR.

I don't understand Calcite JavaCC code very well; I might need to learn it. If no one reviews it this week, I'll start working on it next week.

Thank you for taking a look, @caicancai. I agree this is still a fairly complex PR, and I wouldn't claim to fully understand all of its implications yet. That's also why I tried to include a wide range of test variations and keep the changes gated behind a conformance config. I've organized it into five commits (prior to the CI failures) to make the review easier. If it would help further, I'm happy to split it into 4–5 smaller PRs.
Is the behavior with

@mihaibudiu, good catch! The parenthesis requirement itself is not new. However, you're right that something was off: the bracket refactor broke parsing.
[
    LOOKAHEAD(2, <COLON> SimpleIdentifier(),
        { this.conformance.isColonFieldAccessAllowed() })
    <COLON>
Why is chained colon access not supported?
I kept : intentionally non-chainable here. After one : we are already back in the existing postfix world, so the follow-up access can be expressed with the normal operators:
a:b.c instead of a:b:c
a:['x'].y instead of a:['x']:y
(a:b)['x'] / a:b['x'] instead of a:b:['x']
So a second : would mostly be extra surface syntax, not extra expressiveness. Keeping it to a single colon also keeps the grammar narrower and avoids widening the ambiguous space around other colon uses, especially :: in Babel and JSON constructor : handling.
If we decide later that repeated : is a real dialect requirement, we can extend the current [...] to a loop, but I wanted to start with the minimal syntax that covers the targeted cases.
Does this cover the Databricks and Snowflake capabilities?
Yes, for the subset this patch is targeting, it covers the common Snowflake/Databricks colon-path surface.
Snowflake is proprietary, so I could only reverse-engineer and check the docs. Examples in the docs are src:salesperson.name, src:vehicle[0].price, and src:vehicle[0].price::NUMBER, which match the syntax this change adds. I also experimented with it, and s:payload.score::INT works too.
For Databricks, the public docs and the open Spark grammar match this implementation well. Spark treats : as semi-structured field access and :: as a separate cast operator. That matches the forms added here, such as v:field, v:['field'], arr[1]:field, v:field[1], and v:field::int.
Reference: SqlBaseParser.g4#L1316
        s.pos()));
    list.add(dt);
}
AddRegularPostfixes(list)
I'm not sure if it will affect existing semantics.
Before the refactor, InfixCast could stop after DataType() and rely on the outer Expression2() loop to notice any following . or [...]. After the refactor, that outer postfix branch was removed and the shared postfix handling moved earlier into AddExpression2b(). But Babel InfixCast is not part of that earlier path; it is an extraBinaryExpressions hook that runs later.
So if InfixCast does not invoke AddRegularPostfixes() itself, nobody else will. That is why it has to “own” it now: not because the semantics changed, but because the control flow changed. The shared postfix parser still exists, but this is now the only place on the Babel :: path where it can be called.
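To make that control-flow point concrete, here is a toy plain-Java sketch (all names hypothetical; this is not the JavaCC grammar itself): once no outer loop consumes postfixes, the '::' branch must call the shared postfix helper itself, or trailing brackets are simply left unparsed.

```java
public class InfixCastSketch {
  // Toy model of the control-flow change: after the bracket refactor there is
  // no outer loop that consumes postfixes, so the '::' branch must consume
  // them itself (callPostfixHelper = true) or trailing "[...]" is left over.
  static String parseInfixCast(String expr, boolean callPostfixHelper) {
    int cast = expr.indexOf("::");
    String base = expr.substring(0, cast);
    String rest = expr.substring(cast + 2);
    int bracket = rest.indexOf('[');
    if (bracket < 0) {
      return "INFIX_CAST(" + base + ", " + rest + ")";
    }
    if (!callPostfixHelper) {
      // Nothing else consumes the trailing brackets any more.
      throw new IllegalStateException(
          "unconsumed input: " + rest.substring(bracket));
    }
    // The shared helper attaches the brackets on the type side of the cast.
    return "INFIX_CAST(" + base + ", " + rest + ")";
  }

  public static void main(String[] args) {
    System.out.println(parseInfixCast("v::varchar[1]", true));
  }
}
```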
@tmater Thank you for following up. I left a question on Jira. Could you answer it when you have time?

I was on vacation; I plan to review this PR again.
// Multiple brackets bind to the type
sql("select v::varchar[1][2] from t")
    .ok("SELECT `V` :: VARCHAR[1][2]\nFROM `T`");
But VARCHAR[1] is not a type. What does this mean?
This is parser coverage rather than a meaningful type assertion. The test is checking that after ::, the parser still accepts postfix syntax on the RHS and groups it there, rather than attaching the postfix to the cast result. I agree the old example made that intent unclear, so I updated the test to make the associativity point clearer.
// Bracket after :: binds to the type, not as subscript on the cast result
sql("select v::varchar array[1] from t")
    .ok("SELECT `V` :: VARCHAR ARRAY[1]\nFROM `T`");
f.sql("select v::varchar array[1] from t")
I still don't understand what the parse tree of this expression is.
I think the parser should actually reject this expression.
I see your point, this is quite ambiguous. The reason I added these cases was mainly to show that we did not introduce regressions while touching the INFIX_CAST grammar.
For v::varchar array[1], the tree is something like:
INFIX_CAST(
v,
ITEM(SqlDataTypeSpec(VARCHAR ARRAY), 1)
)
So the [1] is attaching on the type side, not as a subscript on the cast result.
Also, I backported these tests onto the base of this PR as proof: #4885. The same behavior is already present there, so this is not something introduced by the colon-field-access change.
Would you prefer fixing this ambiguity in this PR, or filing a Jira to clean it up separately? I am leaning towards a separate PR and polishing these tests a bit, maybe leaving only one ambiguous expression test just in case.
Ok, if the behavior does not change, it's fine. If we think there's a bug in the parser, where it produces a nonsensical parse tree, we should probably file an issue which can be solved separately.
Filed CALCITE-7475 to cover this part and narrowed down the test surface.
]
}

void AddBracketAccess(List<Object> list) :
please add a comment describing what this is expected to parse; I know this code has been moved, but since the context of the caller is missing, the comment will help maintainers.
ext = RowExpressionExtension() {
    list.add(
        new SqlParserUtil.ToTreeListItem(
            SqlStdOperatorTable.DOT, getPos()));
I ran into this downstream while building on top of this work. It gets hard later to tell whether the original syntax was : or ., since : is rewritten to DOT / ITEM immediately, so I think a dedicated follow-up makes sense there. I’d still prefer to keep this PR scoped to the parser change, and I opened CALCITE-7476 for the next step.
/** Parses postfixes that are allowed after colon field access, for example
 * {@code v:field.nested} or {@code v:[1]['x']}. Unlike regular postfixes,
 * this path does not allow member-style calls such as {@code v:field.func()}. */
void AddColonPostfixes(List<Object> list) :
I added a separate postfix method for colon access because this came up during downstream review. Reusing the regular postfix path was too permissive, since it would also allow the same member-style calls as dot access, while for : we only want path continuation with . and []; Spark/Databricks blocks that broader dot-style behavior as well.
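The narrower continuation grammar can be approximated outside the parser. This toy checker (hypothetical names, plain Java, not the JavaCC production) accepts only field access and bracket access after the first colon segment, and rejects member-style call suffixes:

```java
import java.util.regex.Pattern;

public class ColonPostfixCheck {
  // After "base:ident", only field access (.ident) and bracket access
  // ([digits] or ['key']) may continue the path; member-style calls such as
  // .func() are rejected, matching the narrower AddColonPostfixes grammar.
  private static final Pattern SEGMENT =
      Pattern.compile("(\\.[A-Za-z_][A-Za-z_0-9]*|\\[(\\d+|'[^']*')\\])*");

  static boolean isValidColonContinuation(String suffix) {
    return SEGMENT.matcher(suffix).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValidColonContinuation(".nested['x'][0]")); // true
    System.out.println(isValidColonContinuation(".func()"));         // false
  }
}
```

The real grammar does this with LOOKAHEAD-guarded alternatives rather than a regex; the sketch only shows which shapes are in and which are out.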
Can you even parse the expression if you don't have a colon operator?

Yes, it is usable as-is. The current parser implementation lowers : to the existing DOT/ITEM operators. I chose this shape intentionally to keep the initial upstream parser change small and unblock the syntax problem first, rather than mixing it with a broader AST/operator change in the same step. I'd be happy to follow up with adding the operator as a next step.
But isn't the right solution to first introduce a colon operator and just parse into that?

I think that is a reasonable follow-up direction, but this PR is intentionally parser-scoped: accept : as a field/item access operator.
LOOKAHEAD(2) <DOT>
p = SimpleIdentifier() {
    list.add(
        new SqlParserUtil.ToTreeListItem(
So is the plan in a subsequent PR to add a Colon operator and replace this with colon?
Then I would suggest using the colon from the start. You don't want people to take a dependence on this parse tree and to expect a dot. Why change something if you can do it right from the start?
IMO it's understandable, as a new contributor, to try and keep the changes minimal (or whatever will decrease friction), but since we are happy with having the final solution from the beginning, there's no need to do it in two steps.
…acket refactor

The branch moved bracket ([...]) and dot (.) postfix handling out of Expression2()'s loop and into AddRegularPostfixes (called from AddExpression2b). Babel's InfixCast production was implicitly relying on Expression2()'s loop to consume postfixes after the DataType() call, so after the refactor, expressions like v::varchar[1] would fail to parse because no grammar rule picked up the trailing [1].

Fix: call AddRegularPostfixes(list) after DataType() in InfixCast, making the postfix consumption explicit rather than relying on the caller's loop.

Also consolidate InfixCast test coverage: remove the now-redundant testInfixCastBracketAccessNeedsParentheses and extend testColonFieldAccessWithInfixCast with comprehensive cases covering bracket access, dot access, parenthesized forms, and array types, both with and without isColonFieldAccessAllowed.
Tighten Babel infix-cast parser coverage around postfix binding after '::' and rename the test to match its scope. Restrict postfix continuation after ':' to field and bracket path segments so member-style calls such as 'v:field.func()' are no longer accepted. Add parser coverage for the rejected colon cases and explanatory comments for the shared postfix helper productions in Parser.jj.
Force-pushed 8b64deb to ca9ada3.
Thanks for the review and sorry for the delay, @mihaibudiu and @asolimando. Yes, I wanted to keep the change small and build the feature iteratively. It helps me focus on small milestones, and I think it also makes the review easier. The branch was a month old, so I figured a rebase was overdue, especially given the direction change. The design is no longer a simple rewrite of : into DOT/ITEM. PTAL when you have a moment; happy to iterate further.
mihaibudiu left a comment:
I don't see a convertlet for this operator.
Don't you plan to add one to the standardConvertletTable?
 * character literals (bracketed string keys), or numeric literals (bracketed
 * indexes).
 *
 * <p>Calcite does not lower this operator to Rex. Engines must register a
Frankly, the way it's used is not the business of the class.
The JavaDoc should describe just what this class does, not what other classes do with it. If the other classes change, this comment will become obsolete.
This comment should probably be in the convertlet table if at all.
Agreed, dropped the convertlet/engine paragraphs. The class Javadoc now describes only the operator and its operand shape.
}

// Returns nullable ANY: path access can yield NULL on a missing key, and
// the concrete type is up to the engine's convertlet.
You again assume something about the convertlet table.
This type seems overly general, can't you compute the type of a colon expression from the types of its operands?
Good call. Looking at how this is used in practice (Databricks/Snowflake), the result of a colon path is always semi-structured; Databricks specifically types it as VARIANT. For our purposes that's enough to commit to "result is always nullable VARIANT" and reject non-VARIANT bases up front. Updated deriveType accordingly.
The same VARIANT-in/VARIANT-out shape already exists in SqlItemOperator.inferReturnType and SqlDotOperator.deriveType, so we're consistent with how Calcite handles variant elsewhere.
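A minimal sketch of that VARIANT-in/VARIANT-out rule, using hypothetical names rather than the actual Calcite type classes:

```java
public class ColonTypeRule {
  enum TypeName { VARIANT, ANY, VARCHAR, INTEGER }

  // Mirrors the deriveType rule described above: the base of a colon path
  // must be VARIANT or ANY; the result is always nullable VARIANT.
  // The thrown exception stands in for a validator error.
  static String deriveColonType(TypeName base) {
    if (base != TypeName.VARIANT && base != TypeName.ANY) {
      throw new IllegalArgumentException(
          "Colon field access requires a VARIANT base, got " + base);
    }
    return "VARIANT NULLABLE";
  }

  public static void main(String[] args) {
    System.out.println(deriveColonType(TypeName.VARIANT)); // VARIANT NULLABLE
  }
}
```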
 * are valid. In this mode, JSON constructors must use the
 * {@code KEY ... VALUE} form rather than {@code :} or comma-pair syntax.
 *
 * <p>No built-in conformance level enables this; engines that want
Again you describe in JavaDoc what other classes do.
I think you should only refer to what is happening here.
Done, dropped the references to convertlets and engine-side decisions.
SqlColonOperator.deriveType requires the base to be VARIANT or ANY and returns nullable VARIANT, mirroring the existing variant handling in SqlItemOperator and SqlDotOperator. StandardConvertletTable lowers COLON to a left-folded chain of ITEM calls; engines targeting different lowerings register their own convertlet for SqlKind.COLON. Also tightens Javadocs on SqlColonOperator, SqlKind.COLON, the SqlStdOperatorTable entry, and SqlConformance#isColonFieldAccessAllowed, relocates the operator declaration next to ITEM, drops the allowsDoubleColonInColonFieldAccessMode test hook, and adds SqlToRelConverterTest coverage that pins the COLON-to-ITEM lowering shape.
tmater left a comment:
> I don't see a convertlet for this operator.
> Don't you plan to add one to the standardConvertletTable?
Originally I left it out, wasn't sure what the right lowering should be, and felt like an okay boundary to let the engines register their own, but you're right that without a default the operator just hangs in the air. Added one in StandardConvertletTable that lowers COLON(base, [seg1, seg2, ...]) to a chain of ITEM calls (ITEM over DOT because it already handles VARIANT in/out, and DOT can't express integer indexes).
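The left-fold is simple enough to sketch outside Calcite. This toy Java method (hypothetical names, folding plain strings instead of RexNodes) produces the same nested shape the convertlet builds:

```java
import java.util.List;

public class ColonLowering {
  // Left-fold a COLON(base, [seg1, seg2, ...]) call into nested ITEM
  // applications: ITEM(ITEM(base, seg1), seg2). The real convertlet folds
  // RexNodes the same way; here we fold textual forms for illustration.
  static String lowerToItemChain(String base, List<String> segments) {
    String acc = base;
    for (String seg : segments) {
      acc = "ITEM(" + acc + ", " + seg + ")";
    }
    return acc;
  }

  public static void main(String[] args) {
    // v:a.b[0] lowers segment by segment, left to right.
    System.out.println(lowerToItemChain("v", List.of("'a'", "'b'", "0")));
  }
}
```

Folding left to right keeps evaluation order identical to the written path: each ITEM application consumes one more segment of v:a.b[0].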
mihaibudiu left a comment:
Please document the new operator in site/_docs/reference.md
final RelDataType baseType =
    requireNonNull(validator.deriveType(scope, call.operand(0)));
final SqlTypeName name = baseType.getSqlTypeName();
if (name != SqlTypeName.VARIANT && name != SqlTypeName.ANY) {
Databricks supports string types here too.
It's fine not to support them, but is this your intent?
It looks like some of this code belongs in operandTypeChecker instead.
The return type is just the operand 0 type.
Good point, created OperandTypes.COLON.
On STRING handling, VARIANT is intentional for this PR. The API is kept simple enough that STRING can be added later by extending the existing operand checker and convertlet. Happy to do this in a follow-up Jira.
}

@Override public String getAllowedSignatures(String name) {
    return "<A>:<PATH>";
I don't know what <A> is; it should be a type name.
<VARIANT> for sure, <CHAR> if you decide that's acceptable too.
Whoops, fixed it; also moved this part to OperandTypes.
…t operator

Move type-check, count, and signature into OperandTypes.COLON. Document the operator in site/_docs/reference.md.
Changes Proposed
This PR adds opt-in support for : as a field/item access operator behind a new SqlConformance#isColonFieldAccessAllowed() hook. No built-in conformance enables it, so the default parser behavior is unchanged.

The parser preserves : as a dedicated SqlColonOperator (SqlKind.COLON) in the SqlNode tree. The call has the shape (base, SqlNodeList<segments>), where each segment is an identifier (dot notation), a string literal (bracket key), or an integer literal (bracket index).

The validator requires the base to be VARIANT (or ANY) and types the result as nullable VARIANT, mirroring the existing variant handling in SqlItemOperator and SqlDotOperator.

StandardConvertletTable lowers COLON(base, [seg1, seg2, ...]) to a left-folded chain of ITEM calls. Engines that want different semantics (e.g. VARIANT_GET) register their own convertlet for SqlKind.COLON.

Supported forms include v:field, v:field.nested, v:['field name'], v:[0], arr[1]:field, obj['x']:nested['y'], and mixed chains such as v:a.b['c'][0].

To avoid grammar ambiguity when colon mode is enabled, JSON_OBJECT and JSON_OBJECTAGG must use the KEY ... VALUE form; the 'k': v and 'k', v shorthands are rejected.

Testing

Parser coverage in SqlParserTest and BabelParserTest for valid and invalid colon paths, JSON constructor disambiguation, and :: cast interactions in Babel. Validator coverage in SqlValidatorTest for VARIANT typing, non-variant base rejection, and base-column resolution. Sql2rel coverage in SqlToRelConverterTest asserting the lowering shape.

Jira Link
CALCITE-7448