
Conversation

@rioliu-rh (Contributor) commented Feb 11, 2026

Summary

  • Add build script to compile the component-monitor binary from ship-status-dash and install to /usr/local/bin
  • Add component-monitor config to probe the MCP server health endpoint
  • Add systemd service file for running the component-monitor as a persistent service on the VM

Context

The SHIP Status Dashboard component-monitor on app.ci cannot reach the internal VM network where the ERT MCP server runs. This deploys a dedicated ert-component-monitor instance on the VM itself to probe the local health endpoint and report status to the dashboard.

Ref: OCPERT-324

Add build script, config, and systemd service for running a dedicated
component-monitor instance on the ERT VM to report health status of
ERT services to the SHIP Status Dashboard.

The component-monitor probes the MCP server health endpoint and reports
status to the dashboard API. A separate instance is needed because
the app.ci cluster cannot reach the internal VM network.
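
(For orientation only: the check the monitor performs boils down to an HTTP probe of the local MCP server. A minimal manual equivalent is sketched below; the /healthz path and port 8080 are placeholders, not values taken from this PR, and the real endpoint is whatever the component-monitor config specifies.)

    # Hypothetical manual probe; the actual path and port are defined in
    # deployment/component-monitor/config.yaml, not in this sketch.
    curl -fsS --max-time 5 http://localhost:8080/healthz && echo "mcp-server healthy"
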
openshift-ci bot commented Feb 11, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign rioliu-rh for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@rioliu-rh (Contributor, Author):

/cc @smg247

openshift-ci bot requested a review from @smg247 on February 11, 2026 at 13:01
@@ -0,0 +1,25 @@
#!/bin/bash

Member:

The component-monitor is available as a public image on quay.io. The latest tag is automatically updated whenever a change is made.

@rioliu-rh (Contributor, Author):

That's great, I can extract the latest binary from the image instead of building it from source.
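
(A rough sketch of extracting the binary from the image, assuming podman is available; the image reference is taken from the later commit message and the in-image binary path is a guess, not confirmed in this thread.)

    # Create a stopped container from the image and copy the binary out.
    # The path /usr/bin/component-monitor inside the image is an assumption.
    podman create --name cm-extract quay.io/openshiftci/component-monitor:latest
    podman cp cm-extract:/usr/bin/component-monitor /tmp/component-monitor
    podman rm cm-extract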

@rioliu-rh (Contributor, Author):

The binary in the container image is dynamically linked and requires GLIBC_2.32, but RHEL8 ships with GLIBC 2.28. The binary was likely built on a newer base image.

/tmp/component-monitor: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /tmp/component-monitor)

I have Go 1.24.6 installed on the RHEL8 VM; it looks like I have to roll back the image-based logic.
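
(For reference, one way to confirm the mismatch on the VM, assuming the binary was copied to /tmp/component-monitor as in the error above.)

    # Highest glibc symbol versions the extracted binary requires
    objdump -T /tmp/component-monitor | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -3
    # glibc version shipped on the host (RHEL8 ships 2.28)
    ldd --version | head -1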

@rioliu-rh (Contributor, Author):

I will update the service file to launch it in a container directly, so this file is not needed.

Use the pre-built component-monitor image from quay.io/openshiftci
instead of compiling from source. This removes the Go toolchain
dependency on the VM.

The pre-built container image from quay.io/openshiftci avoids the need
to install Go or extract the binary. The systemd service uses podman run
with the config and token mounted as volumes.

Remove User directive since cloud-user lacks rootless podman
permissions. System-level podman services run as root by default.
@rioliu-rh (Contributor, Author):

Updated based on review feedback:

  • Removed build.sh - no longer needed
  • Updated systemd service to run the component-monitor directly as a container via podman run, mounting config and token as volumes
  • Runs as root since cloud-user doesn't have rootless podman permissions on RHEL8

The GLIBC incompatibility hit when extracting the binary is avoided entirely by running the container image as-is; the equivalent manual invocation is sketched below.
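
(A sketch of testing the equivalent container invocation by hand before enabling the unit. The image name quay.io/openshiftci/component-monitor is assumed from the commit message, the mount paths are those shown in the review comments further down, and any container-side flags the monitor needs are omitted.)

    # Manual smoke test of the container the unit runs (image name assumed,
    # container-side arguments omitted).
    sudo podman run --rm --name ship-status-component-monitor \
        -v /home/cloud-user/release-tests/deployment/component-monitor/config.yaml:/config.yaml:ro,Z \
        -v /etc/component-monitor/token.txt:/token.txt:ro,Z \
        quay.io/openshiftci/component-monitor:latest

    # Install and start the unit (unit name taken from the service file path in this PR)
    sudo systemctl daemon-reload
    sudo systemctl enable --now ship-status-component-monitor.service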

openshift-ci bot commented Feb 11, 2026

@rioliu-rh: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@rioliu-rh (Contributor, Author):

@coderabbitai

coderabbitai bot commented Feb 12, 2026

Tip

For best results, initiate chat on the files or code changes.

@rioliu-rh: Thank you for the update! I see you've made several improvements based on review feedback:

  • Removed the build.sh script to avoid GLIBC incompatibility issues
  • Updated the systemd service to run the component-monitor directly as a container using podman run
  • Configured volume mounts for config and token files
  • Set to run as root due to rootless podman permission constraints on RHEL8

How can I assist you? Would you like me to:

  • Review the current changes in the PR?
  • Look at specific files or configurations?
  • Help with something else?

Please let me know what you need!

@rioliu-rh (Contributor, Author):

@coderabbitai full review

coderabbitai bot commented Feb 12, 2026

✅ Actions performed

Full review triggered.

coderabbitai bot commented Feb 12, 2026

Walkthrough

Two new deployment files are introduced: a YAML configuration for health monitoring with HTTP endpoint checks and retry policies, and a systemd service unit that orchestrates the component-monitor container with specified mounts, flags, and restart behavior.

Changes

Cohort / File(s): Component Monitor Deployment Infrastructure
  deployment/component-monitor/config.yaml, deployment/systemd/ship-status-component-monitor.service
Summary: Introduces HTTP health monitoring configuration targeting the mcp-server component and defines a systemd service unit for container orchestration with config/token mounts, dashboard integration, and an auto-restart policy.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
deployment/systemd/ship-status-component-monitor.service (2)

10-18: Hardcoded config path ties deployment to specific user and directory structure.

The path /home/cloud-user/release-tests/deployment/component-monitor/config.yaml assumes a fixed repository clone location. If the deployment directory changes or the service is provisioned on a different system, this will break.

Consider placing the config in a system location (e.g., /etc/component-monitor/config.yaml) alongside the token, or using a symlink/variable approach for flexibility.

♻️ Suggested improvement
 ExecStart=/usr/bin/podman run --rm --name ship-status-component-monitor \
-    -v /home/cloud-user/release-tests/deployment/component-monitor/config.yaml:/config.yaml:ro,Z \
+    -v /etc/component-monitor/config.yaml:/config.yaml:ro,Z \
     -v /etc/component-monitor/token.txt:/token.txt:ro,Z \

This would require the deployment process to copy/symlink the config to /etc/component-monitor/config.yaml.
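
(A sketch of the one-time provisioning step this would imply, assuming root access during deployment.)

    # Copy the config from the repo clone into a system location
    sudo install -d -m 0755 /etc/component-monitor
    sudo install -m 0644 \
        /home/cloud-user/release-tests/deployment/component-monitor/config.yaml \
        /etc/component-monitor/config.yaml
    # ...or symlink it instead so repo updates are picked up on the next restart:
    # sudo ln -sf /home/cloud-user/release-tests/deployment/component-monitor/config.yaml /etc/component-monitor/config.yaml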


9-13: Using :latest tag may cause unexpected behavior on image updates.

While convenient, the :latest tag means the service behavior can change unexpectedly when the upstream image is updated. For production deployments, pinning to a specific version or digest provides more predictable behavior.

If automatic updates are intended, this is acceptable—just be aware of the trade-off.
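
(If pinning is preferred, a sketch of resolving and using a digest, assuming skopeo and jq are available on the VM and the image name from the commit message.)

    # Resolve the current digest behind the :latest tag
    skopeo inspect docker://quay.io/openshiftci/component-monitor:latest | jq -r '.Digest'
    # Reference the image by digest in ExecStart instead of :latest, e.g.
    #   quay.io/openshiftci/component-monitor@<digest>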

Comment @coderabbitai help to get the list of available commands and usage tips.

@rioliu-rh (Contributor, Author):

@coderabbitai help

coderabbitai bot commented Feb 12, 2026

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

CodeRabbit commands

These commands are invoked using PR/Issue comments.

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai evaluate custom pre-merge check --instructions <custom-checks-instructions> --name <custom-checks-title> [--mode <error|warning>] to validate the custom pre-merge checks instructions. Defaults to error when --mode is omitted.
    • @coderabbitai ignore pre-merge checks to override pre-merge checks and get an approval on PR.
    • @coderabbitai run pre-merge checks to run pre-merge checks on the pull request.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai generate configuration to create a PR that adds the current resolved configuration as .coderabbit.yaml (or show it if already present).
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore or @coderabbit ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit configuration file (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • You can also validate your configuration using the online YAML validator.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Status, support, documentation and community

  • Visit our status page to check the current availability of CodeRabbit.
  • Create a ticket on our support page for assistance with any issues or questions.
  • Visit our documentation site for detailed information on how to use CodeRabbit.
  • Join our Discord community to connect with other users and get help from the community.
  • Follow us on X/Twitter for updates and announcements.

@rioliu-rh merged commit 838bc32 into openshift:master on Feb 13, 2026 (2 of 3 checks passed).