
fix: prevent catalog crash on duplicate business event channels#533

Merged
ako merged 1 commit into mendixlabs:main from dhruvbehl:fix/catalog-contract-messages-duplicate-crash
May 8, 2026

Conversation


@dhruvbehl dhruvbehl commented May 8, 2026

Summary

  • INSERT OR IGNORE INTO contract_messages to tolerate duplicate rows from Mendix Portal's AsyncAPI documents

Problem

Mendix Portal's business event module has duplicate channel references across services. This caused a UNIQUE constraint violation that crashed the entire catalog build, making REFRESH CATALOG FULL fail — which blocks all cross-reference queries (callers, callees, impact analysis).
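In essence, the change swaps the insert statement's conflict behavior. A minimal sketch — the column names here are illustrative, since the real schema is not shown in this PR:

```sql
-- Before: a repeated channel+messageRef pair produces the same synthetic Id,
-- and the duplicate primary key aborts the entire catalog build.
INSERT INTO contract_messages (id, service_qn, channel, message_ref)
VALUES (?, ?, ?, ?);

-- After: SQLite silently skips rows whose primary key already exists,
-- so identical duplicates are dropped instead of crashing the build.
INSERT OR IGNORE INTO contract_messages (id, service_qn, channel, message_ref)
VALUES (?, ?, ?, ?);
```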


Test plan

  • go build ./mdl/catalog/ passes
  • go test ./mdl/catalog/ passes (190 tests)
  • Run REFRESH CATALOG FULL against a Mendix Portal MPR to confirm no crash
  • Verify SHOW CONTRACT MESSAGES returns expected rows

Cloud-portal has duplicate business event channels in cached AsyncAPI
documents, which caused a UNIQUE constraint crash on contract_messages
that killed the entire catalog build (blocking REFRESH CATALOG FULL).

Fix: INSERT OR IGNORE to tolerate duplicate rows defensively.
@dhruvbehl dhruvbehl force-pushed the fix/catalog-contract-messages-duplicate-crash branch from 04526ae to 6f7d86b on May 8, 2026 at 14:12

github-actions Bot commented May 8, 2026

AI Code Review

What Looks Good

  • The changes directly address the reported UNIQUE constraint violation causing catalog crashes during REFRESH CATALOG FULL
  • Using INSERT OR IGNORE is a safe, standard SQLite pattern to handle duplicates without crashing
  • Including ch.OperationType in the synthetic ID hash correctly distinguishes subscribe/publish operations on the same channel
  • Minimal, focused changes in a single file with clear intent
  • All existing unit tests pass (190 tests in mdl/catalog/)

Recommendation

Approve the PR. The changes are low-risk, follow established patterns, and resolve the crash scenario. The manual test steps in the test plan should be completed before merging, but the passing unit tests and localized nature of the changes support approval. No modifications to MDL syntax or full-stack pipeline were required for this internal catalog fix.


Automated review via OpenRouter (Nemotron Super 120B) — workflow source


github-actions Bot commented May 8, 2026

AI Code Review

Review Summary

Minor Issues

  • Test plan incomplete: The PR marks two test plan items as unverified:
    • [ ] Run REFRESH CATALOG FULL against a Cloud Portal MPR to confirm no crash
    • [ ] Verify SHOW CONTRACT MESSAGES returns expected rows
      While unit tests pass, these integration verifications are important for confirming the fix resolves the reported crash scenario.

What Looks Good

  • Correct fix for UNIQUE constraint: Changing INSERT INTO to INSERT OR IGNORE INTO properly handles duplicate rows from Cloud Portal's AsyncAPI documents without crashing the catalog build.
  • Appropriate hash modification: Including ch.OperationType in the synthetic ID hash ensures subscribe/publish operations on the same channel generate distinct IDs, preventing collisions.
  • Minimal, focused change: The modification is confined to one file and directly addresses the reported issue without unnecessary changes.
  • Unit tests pass: All 190 catalog package tests continue to pass, indicating no regressions in existing functionality.

Recommendation

Approve the PR once the test plan items are completed. The fix is technically sound and addresses the root cause, but the manual verification steps are important for confirming the solution works in the target scenario (Cloud Portal MPR with duplicate business event channels). Please run REFRESH CATALOG FULL against a suitable test project and verify the output of SHOW CONTRACT MESSAGES before merging.


Automated review via OpenRouter (Nemotron Super 120B) — workflow source


github-actions Bot commented May 8, 2026

AI Code Review

Critical Issues

None found

Moderate Issues

None found

Minor Issues

None found

What Looks Good

The PR makes a minimal, targeted fix to prevent catalog crashes from duplicate business event channels:

  1. Robust SQL handling: Changing INSERT INTO to INSERT OR IGNORE properly handles duplicate rows without crashing, which is the exact solution needed for the described UNIQUE constraint violation.

  2. Focused scope: The change addresses only the reported issue without unrelated modifications, making it easy to review and verify.

  3. Test validation: Unit tests pass (190 tests in catalog package) and build succeeds, indicating no regressions were introduced.

  4. Appropriate solution: The fix tolerates duplicates at the insertion point while maintaining data integrity - duplicates are silently ignored rather than causing catastrophic failure.

  5. Consistent with patterns: Similar "INSERT OR IGNORE" patterns exist elsewhere in the codebase for handling idempotent operations in catalog builders.

The PR correctly implements a defensive programming approach for the catalog refresh operation where duplicate data from external sources (Cloud Portal) is expected but shouldn't halt the entire process.

Recommendation

Approve the PR. The change is minimal, well-tested, and directly solves the reported crash issue without introducing new risks. The manual test steps mentioned in the test plan (running REFRESH CATALOG FULL and verifying SHOW CONTRACT MESSAGES) should be performed by the maintainer before merging, but the automated test coverage gives confidence in the fix.


Automated review via OpenRouter (Nemotron Super 120B) — workflow source


@ako ako left a comment


Code Review

Fix is correct and well-scoped. One-line change with a clear, reproducible crash behind it.

What the fix does

The synthetic Id for a contract_messages row is a SHA-256 of svcQN + "|channel|" + ch.Name + "|" + ch.MessageRef. When Mendix Portal's AsyncAPI document repeats the same channel+messageRef combination, the same hash is generated twice, hitting the Id TEXT PRIMARY KEY constraint. INSERT OR IGNORE is the right idiom here — duplicate IDs mean genuinely identical data (same hash inputs), so silently dropping the second row loses nothing.
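The Id construction can be sketched in Go. The function and field names below are illustrative (not taken from the actual source), and the inclusion of the operation type in the hash input follows the change described in the automated reviews:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// syntheticID sketches how a deterministic contract_messages primary key
// could be derived. Identical inputs always produce the same ID, which is
// what lets INSERT OR IGNORE safely drop exact duplicates. Including the
// operation type keeps subscribe/publish rows on the same channel distinct.
func syntheticID(svcQN, channel, messageRef, opType string) string {
	h := sha256.Sum256([]byte(svcQN + "|channel|" + channel + "|" + messageRef + "|" + opType))
	return hex.EncodeToString(h[:])
}

func main() {
	a := syntheticID("Portal.Events", "order-events", "OrderCreated", "subscribe")
	b := syntheticID("Portal.Events", "order-events", "OrderCreated", "subscribe")
	c := syntheticID("Portal.Events", "order-events", "OrderCreated", "publish")
	fmt.Println(a == b) // duplicate row: same ID, skipped by INSERT OR IGNORE
	fmt.Println(a == c) // different operation type: distinct ID, both rows kept
}
```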

Minor notes

Potential gap in sibling tables — contract_entities and contract_actions use plain INSERT INTO with the same SHA-256 synthetic-ID approach. A malformed $metadata document with duplicate entity or action names would hit the same crash. Low priority since OData documents tend to be well-formed, but worth a follow-up.

Missing bug-test script — per project checklist, bug fixes should include an mdl-examples/bug-tests/ MDL script that reproduces the symptom. The manual test plan with screenshots is a good substitute for Studio Pro validation, but a short mdl-examples/bug-tests/533-duplicate-contract-messages.mdl would let the regression be caught automatically in CI.

fix-issue.md not updated — the symptom (REFRESH CATALOG FULL crash → UNIQUE constraint on contract_messages) is not in .claude/skills/fix-issue.md. New contributors tracing a similar crash in the future won't find it in the map.

Verdict

The fix is safe and correct. The missing bug-test and fix-issue.md entry are process gaps rather than blockers — given that this crashed a production catalog build, landing the fix quickly is the right call.


@ako ako merged commit 145e5ff into mendixlabs:main May 8, 2026
1 check passed