
#847 Fix array flattening in Spark 4+ (#848)

Open
yruslan wants to merge 1 commit into master from bugfix/847-fix-schema-flattenning-on-spark4

Conversation


@yruslan yruslan commented May 7, 2026

Summary by CodeRabbit

  • Bug Fixes
    • Resolved array handling compatibility with Apache Spark 4+ while maintaining support for legacy Spark versions.


coderabbitai Bot commented May 7, 2026


Walkthrough

flattenSchema's array-handling logic is updated to support Spark 4+ get(path, index) function semantics. A version check introduces a helper function (getArrayIndexExpr) that generates correct SQL expressions for array element access, then applies this helper consistently when flattening arrays inside structs and nested arrays.

Changes

Spark 4+ Array Indexing Compatibility

File: spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/utils/SparkUtils.scala

  • Version Check & Helper Function: Spark session version check (isUseArrayGet) and getArrayIndexExpr helper introduced to generate get(path, index) expressions for Spark 4+ or bracket indexing path[index] for earlier versions.
  • Struct Array Flattening: flattenStructArray updated to use the version-aware getArrayIndexExpr for constructing array element field expressions, with corresponding adjustments to field/index composition and stringFields generation.
  • Nested Array Flattening: flattenNestedArrays updated to use version-aware array element access when recursing into nested structures and when generating primitive projections and their stringFields expressions.
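The change summarized above can be sketched as follows. This is a hedged reconstruction from the walkthrough, not the PR's actual code; the real declarations in SparkUtils.scala may differ in signature and placement:

```scala
// Sketch of the version-aware helper described in the summary (names taken from
// the walkthrough; the actual implementation may differ).
// Rationale: Spark 4 enables ANSI mode by default, so `path[index]` throws
// INVALID_ARRAY_INDEX for out-of-range indexes, while `get(path, index)`
// (available since Spark 3.4) returns null, matching the old bracket behavior.
private lazy val isUseArrayGet: Boolean =
  SparkSession.active.version.split('.').head.toInt >= 4

private def getArrayIndexExpr(path: String, index: Int): String =
  if (isUseArrayGet)
    s"get($path, $index)"  // Spark 4+: null-safe element access
  else
    s"$path[$index]"       // Spark <4: bracket indexing yields null out of range
```

Both flattening routines can then build their element expressions from this one helper, which keeps the generated SQL consistent across Spark versions.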

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Possibly related issues

  • flattenSchema(df) with Spark 4.0 #847: The array indexing compatibility changes directly address Spark 4 index-out-of-bounds errors in the flattenSchema utility by switching to get(path, index) semantics for Spark 4+.
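The incompatibility the linked issue describes can be illustrated with a short Spark 4 session (a hypothetical example, not taken from the PR; assumes an active SparkSession named `spark`):

```scala
// Under ANSI mode, on by default in Spark 4, out-of-range bracket indexing
// fails, while get() returns NULL — which is what schema flattening relies on
// for arrays shorter than the maximum observed length.
import spark.implicits._
import org.apache.spark.sql.functions.expr

val df = Seq(Seq(1, 2)).toDF("a")
df.select(expr("a[5]")).show()      // Spark 4: fails with [INVALID_ARRAY_INDEX]
df.select(expr("get(a, 5)")).show() // returns a single NULL row
```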

Poem

🐰 Spark marches forward with version four's might,
Arrays need indexing—bracket or get, which is right?
A helper decides, with wisdom and care,
Flat schemas bloom wherever they dare,
Cross-version harmony fills the morning air!

🚥 Pre-merge checks | ✅ 5 passed

  • Description Check: ✅ Passed (skipped because CodeRabbit's high-level summary is enabled)
  • Title Check: ✅ Passed (the title clearly and specifically describes the main change: fixing array flattening for Spark 4+ compatibility)
  • Docstring Coverage: ✅ Passed (no functions found in the changed files; docstring coverage check skipped)
  • Linked Issues Check: ✅ Passed (skipped because no linked issues were found for this pull request)
  • Out of Scope Changes Check: ✅ Passed (skipped because no linked issues were found for this pull request)



Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/utils/SparkUtils.scala (1)

198-203: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

path.contains('[') log-path selector misses Spark 4 get(…) expressions.

In flattenGroup, when a path arrives from flattenStructArray or flattenNestedArrays on Spark 4, it looks like get(parent`field`, 0), with no '[' present, so path.contains('[') evaluates to false and the log entry uses col(...) syntax even though the path is a SQL expression that col cannot evaluate. This only affects the stringFields log output (line 209), not actual execution, but the logged "flattening code" becomes incorrect and misleading for Spark 4 paths.

🔧 Proposed fix
-          if (path.contains('['))
+          if (path.contains('[') || path.contains('('))
             stringFields += s"""expr("$path`${field.name}` AS `$newFieldName`")"""
           else
             stringFields += s"""col("$path`${field.name}`").as("$newFieldName")"""
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/utils/SparkUtils.scala`
around lines 198 - 203, The log-path selector in flattenGroup currently checks
only path.contains('[') so Spark 4 SQL-style paths like get(parent`field`, 0)
are misclassified; update the condition that decides whether to append an
expr(...) vs col(...) to stringFields to also detect SQL/get-style expressions
(for example check path.contains("get(") or otherwise detect '(' in the path)
and use the expr(...) branch for those cases; locate the logic in flattenGroup
where stringFields is appended (related symbols: flattenGroup,
flattenStructArray, flattenNestedArrays, path, stringFields, fields,
getNewFieldName) and change the conditional so Spark 4 get(...) expressions are
logged with expr(...) rather than col(...).
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 89b865c8-138f-40cc-808c-24df907d5622

📥 Commits

Reviewing files that changed from the base of the PR and between 672bde3 and 4518dea.

📒 Files selected for processing (1)
  • spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/utils/SparkUtils.scala

Comment on lines 129 to +133

          case _ =>
            val newFieldNamePrefix = s"${fieldNamePrefix}${i}"
            val newFieldName = getNewFieldName(s"$newFieldNamePrefix")
            fields += expr(s"$path`${structField.name}`[$i]").as(newFieldName, structField.metadata)
-           stringFields += s"""expr("$path`${structField.name}`[$i] AS `$newFieldName`")"""
+           stringFields += s"""expr("${getArrayIndexExpr(fieldName, i)} AS `$newFieldName`")"""


⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

fields still uses bracket indexing for primitive arrays — the Spark 4 fix is incomplete here.

In flattenStructArray's primitive branch (line 132), only stringFields (the log representation) was updated to use getArrayIndexExpr, but the actual Column added to fields — which is what df.select executes — still uses bracket indexing:

fields += expr(s"$path`${structField.name}`[$i]").as(newFieldName, structField.metadata)

On Spark 4 this will throw [INVALID_ARRAY_INDEX] exactly as the comment above describes, because fields is what gets evaluated during execution. stringFields is only used for the logger.info call at line 209.

Compare with the correctly-fixed flattenNestedArrays at line 154, where fields itself uses getArrayIndexExpr.

🐛 Proposed fix
         case _ =>
           val newFieldNamePrefix = s"${fieldNamePrefix}${i}"
           val newFieldName = getNewFieldName(s"$newFieldNamePrefix")
-          fields += expr(s"$path`${structField.name}`[$i]").as(newFieldName, structField.metadata)
+          fields += expr(s"${getArrayIndexExpr(fieldName, i)}").as(newFieldName, structField.metadata)
           stringFields += s"""expr("${getArrayIndexExpr(fieldName, i)} AS `$newFieldName`")"""
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/utils/SparkUtils.scala`
around lines 129 - 133, In flattenStructArray's primitive-array branch the
actual Column added to fields still uses bracket indexing
(expr(s"$path`${structField.name}`[$i]")) which fails on Spark 4; change the
Column expression to use getArrayIndexExpr(fieldName, i) like stringFields does
so fields and stringFields are consistent; update the fields += expr(...) call
inside flattenStructArray to build the Column via
expr(getArrayIndexExpr(fieldName, i)) and keep the existing
.as(getNewFieldName(...), structField.metadata) so the selection matches the
logger output.


github-actions Bot commented May 7, 2026

JaCoCo code coverage report - 'cobol-parser'

Overall Project: 91.05% 🍏

There is no coverage information for the files changed.


github-actions Bot commented May 7, 2026

JaCoCo code coverage report - 'spark-cobol'

Overall Project: 83.4% (-0.7%) 🍏
Files changed: 71.43%

File coverage:
  • SparkUtils.scala: 92.21% (-9.92%)
