Co-locate provisionserver with metal3 to prevent DHCP failures II #3738
mnietoji wants to merge 1 commit into openstack-k8s-operators:main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
f5c2a2c → d660efa (force-push)
Merge Failed. This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
d660efa → 369ae18 (force-push)
369ae18 → 3fa51c9 (force-push)
3fa51c9 → d0cf92f (force-push)
Build failed (check pipeline): https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/3217375613864e0d83f7f88f394dcfaa
❌ openstack-k8s-operators-content-provider FAILURE in 7m 16s
d0cf92f → 6b9c8b0 (force-push)
6b9c8b0 → 1339a1d (force-push)
1339a1d → cf58db9 (force-push)
e29b915 → 08a2b2b (force-push)
Build failed (check pipeline): https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/0b04bcb1f4f54d518d017da862888f74
✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 03m 30s
08a2b2b → a7b18cb (force-push)
Co-locate provisionserver with metal3 to prevent DHCP failures II

When the metal3-dnsmasq pod restarts during a node's DHCP lease renewal on the provisioning network (172.23.0.0/24), NetworkManager fails to renew the lease and sets ipv4.method=disabled. The NMState operator then preserves this disabled state, causing permanent loss of provisioning network connectivity on that node.

The issue occurs when the OpenStackProvisionServer and metal3 pods run on different nodes. If metal3 restarts while a node is attempting DHCP renewal, the temporary unavailability of metal3-dnsmasq causes the renewal to fail.

Solution: automatically detect the node running the metal3 pod (via the k8s-app=metal3 label) and configure provisionServerNodeSelector in baremetalSetTemplate to schedule the OpenStackProvisionServer on the same node. This keeps provisioning network connectivity intact, because metal3-static-ip-manager maintains a static IP (172.23.0.3) on the metal3 node regardless of dnsmasq restarts.

Signed-off-by: Miguel Angel Nieto Jimenez <mnietoji@redhat.com>
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
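For illustration, the co-location described in the commit message would surface in the generated values roughly as below. This is a sketch, not the PR's actual output: the `kubernetes.io/hostname` selector key and the fact name are assumptions based on standard node labels and the task names in this PR.

```yaml
# Hypothetical rendered fragment (values sketch, not taken from this PR):
# pin the OpenStackProvisionServer onto the node where the metal3 pod
# was detected, using the fact set by the detection task below.
baremetalSetTemplate:
  provisionServerNodeSelector:
    kubernetes.io/hostname: "{{ cifmw_kustomize_deploy_metal3_node }}"
```

With this selector in place, the provision server and metal3-dnsmasq share a node, so a dnsmasq restart never leaves DHCP clients pointing at an unreachable server.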
a7b18cb → 4ec3152 (force-push)
```jinja
{% set _original_ansible_vars = (_original_nodeset.ansible | default({})).ansibleVars | default({}) %}
{% set _original_bootstrap = _original_ansible_vars.edpm_bootstrap_command | default('') %}
{% set _needs_policy_json = cifmw_ci_gen_kustomize_values_add_policy_json | default(true) | bool and 'policy.json' not in _original_bootstrap %}
{% if _needs_policy_json or _original_bootstrap %}
        edpm_bootstrap_command: |
{% if _original_bootstrap %}
          {{ _original_bootstrap }}
{% endif %}
{% if _needs_policy_json %}
          mkdir -p /root/.config/containers/ && echo '{"default":[{"type":"insecureAcceptAnything"}]}' > /root/.config/containers/policy.json
{% endif %}
{% endif %}
{% if cifmw_ci_gen_kustomize_values_registry_logins is defined and cifmw_ci_gen_kustomize_values_registry_logins | length > 0 %}
        edpm_container_registry_logins:
{% for registry, creds in cifmw_ci_gen_kustomize_values_registry_logins.items() %}
          {{ registry }}:
{% for username, password in creds.items() %}
            {{ username }}: {{ password }}
{% endfor %}
{% endfor %}
{% endif %}
```
This block is indented at 8 spaces (belonging under ansibleVars), but it's placed after the services: conditional (L63-70), which renders at 4-space indent. When the services conditional evaluates to true, the rendered YAML would look like:
```yaml
nodeset:
  ansible:
    ansibleVars:
      edpm_fips_mode: ...
      edpm_sshd_allowed_ranges: ...
    services:                           # 4-space — leaves ansibleVars scope
      - "repo-setup"
      - "original-service"
        edpm_bootstrap_command: |       # 8-space — no valid parent here
          ...
        edpm_container_registry_logins: # 8-space — same problem
          ...
  nodes:
```
This produces invalid YAML because the parser has already exited the ansible -> ansibleVars context when it hits services:.
For example, the common template (common/edpm-nodeset-values/values.yaml.j2) has the correct ordering.
My suggestion: move this block (L72-93) before the services conditional (before L63), right after the edpm_sshd_allowed_ranges block, just as you did for the other blocks.
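With the block moved as suggested, the rendered YAML would keep everything under ansibleVars. A sketch of the corrected ordering (indent counts follow the review comment above; the surrounding keys are illustrative):

```yaml
# Sketch of the corrected rendering: all 8-space vars stay inside
# ansibleVars, and services closes that scope cleanly afterwards.
nodeset:
  ansible:
    ansibleVars:
      edpm_fips_mode: ...
      edpm_sshd_allowed_ranges: ...
      edpm_bootstrap_command: |        # 8-space, valid under ansibleVars
        ...
      edpm_container_registry_logins:  # 8-space, valid here too
        ...
    services:                          # 4-space, now a clean sibling
      - "repo-setup"
      - "original-service"
  nodes: ...
```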
```yaml
  loop_control:
    loop_var: registry
  ansible.builtin.include_role:
    name: edpm_pullsecret_sync
```
I can't find the reference for this role. Is it being introduced in a separate PR?
```jinja
cifmw_ci_gen_kustomize_values_registry_logins: >-
  {%- set result = {} -%}
  {%- for registry in (cifmw_reproducer_registry_list | default(['registry.stage.redhat.io'])) -%}
  {%- set fact_name = 'cifmw_reproducer_registry_' + (registry | replace('.', '_') | replace('-', '_')) + '_creds_dict' -%}
```
May I ask you to check this part?
L180 tells the role to store credentials in a fact ending with `_creds`, but this line looks up the fact with an extra `_dict` suffix.
The `vars[fact_name] is defined` check on the next line (L188) will therefore always be false, so cifmw_ci_gen_kustomize_values_registry_logins will always be an empty {}. No error is raised because the rescue block only catches exceptions, not a silently empty result.
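If the intent is to read the fact the role actually registers, a minimal fix, assuming the fact really ends in `_creds` as L180 states, would be to drop the `_dict` suffix when building the lookup name:

```jinja
{#- Sketch: build the fact name with the `_creds` suffix the role uses, -#}
{#- so the `vars[fact_name] is defined` check can actually succeed.     -#}
{%- set fact_name = 'cifmw_reproducer_registry_' + (registry | replace('.', '_') | replace('-', '_')) + '_creds' -%}
```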
```yaml
- name: Detect metal3 pod node for baremetal nodeset provisioning
  when:
    - stage.path is defined
    - stage.path is match('.*/edpm/nodeset.*')
  block:
    - name: Debug - Entering metal3 detection block
      ansible.builtin.debug:
        msg: "INSIDE metal3 detection block for stage: {{ stage.path }}"

    - name: Get metal3 pod information
      kubernetes.core.k8s_info:
        kubeconfig: "{{ cifmw_openshift_kubeconfig }}"
        kind: Pod
        namespace: openshift-machine-api
        label_selectors:
          - k8s-app=metal3
      register: _cifmw_kustomize_deploy_metal3_pod_info

    - name: Set metal3 node for provisionserver nodeSelector
      ansible.builtin.set_fact:
        cifmw_kustomize_deploy_metal3_node: "{{ _cifmw_kustomize_deploy_metal3_pod_info.resources[0].spec.nodeName }}"
        cacheable: true
      when:
        - _cifmw_kustomize_deploy_metal3_pod_info.resources is defined
        - _cifmw_kustomize_deploy_metal3_pod_info.resources | length > 0

    - name: Log metal3 node location
      ansible.builtin.debug:
        msg: "Metal3 pod is running on {{ cifmw_kustomize_deploy_metal3_node }}, provisionserver will be co-located there"
      when: cifmw_kustomize_deploy_metal3_node is defined

    - name: Debug - metal3 pod info result
      ansible.builtin.debug:
        msg:
          - "Resources found: {{ _cifmw_kustomize_deploy_metal3_pod_info.resources | default([]) | length }}"
          - "Variable set: {{ cifmw_kustomize_deploy_metal3_node is defined }}"
          - "Variable value: {{ cifmw_kustomize_deploy_metal3_node | default('NOT_SET') }}"
```
The file execute_step.yml has a very consistent pattern: every task that interacts with a live cluster is gated by `not cifmw_kustomize_deploy_generate_crs_only | bool`. You can see this pattern used four times in the file:
Line 122 — pre_stage hooks: not cifmw_kustomize_deploy_generate_crs_only | bool
Line 320 — oc apply: not cifmw_kustomize_deploy_generate_crs_only | bool
Line 339 — wait conditions: not cifmw_kustomize_deploy_generate_crs_only | bool
Line 364 — post_stage hooks: not cifmw_kustomize_deploy_generate_crs_only | bool
When cifmw_kustomize_deploy_generate_crs_only: true (dry-run / architecture test mode), all of these are skipped because there is no live cluster; the test only generates and validates YAML artifacts.
The new block at line 136 calls kubernetes.core.k8s_info to query pods on the cluster but is missing this gate.
If I'm not mistaken, for any scenario with an edpm/nodeset stage path (which covers most baremetal scenarios), the k8s_info call will attempt to connect to a cluster that doesn't exist in dry-run mode. The block has no rescue or ignore_errors, so it will fail hard and break architecture tests.
Once you've checked that, here's my suggestion:
```yaml
  when:
    - not cifmw_kustomize_deploy_generate_crs_only | bool
    - stage.path is defined
    - stage.path is match('.*/edpm/nodeset.*')
```