From e59bab911c4a59609ce410126c17e7e7b9c9b3ca Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Fri, 6 Mar 2026 17:43:15 +0530 Subject: [PATCH 01/16] Create dhi-openshift.md --- content/guides/dhi-openshift.md | 624 ++++++++++++++++++++++++++++++++ 1 file changed, 624 insertions(+) create mode 100644 content/guides/dhi-openshift.md diff --git a/content/guides/dhi-openshift.md b/content/guides/dhi-openshift.md new file mode 100644 index 000000000000..00f5c510a5e2 --- /dev/null +++ b/content/guides/dhi-openshift.md @@ -0,0 +1,624 @@ +----- + +## title: Use Docker Hardened Images with OpenShift +description: Deploy Docker Hardened Images on Red Hat OpenShift Container Platform, covering Security Context Constraints, arbitrary user ID assignment, file permissions, and best practices. +keywords: docker hardened images, dhi, openshift, OCP, SCC, security context constraints, nonroot, distroless, containers, red hat +tags: [“Docker Hardened Images”, “dhi”] +params: +proficiencyLevel: Intermediate +time: 30 minutes +prerequisites: | +- An OpenShift cluster (version 4.11 or later recommended) +- The `oc` CLI authenticated to your cluster +- A Docker Hub account with access to Docker Hardened Images +- Familiarity with OpenShift Security Context Constraints (SCCs) + +Docker Hardened Images (DHI) can be deployed on Red Hat OpenShift Container +Platform, but OpenShift’s security model differs from standard Kubernetes in +ways that require specific configuration. Because OpenShift runs containers +with an arbitrarily assigned user ID rather than the image’s default, you must +adjust file ownership and group permissions in your Dockerfiles to ensure +writable paths remain accessible. 
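In practice the adjustment is a pair of commands run in the build stage: `chgrp -R 0` on every writable path, followed by `chmod -R g=u` so the root group receives the same access bits as the owning user. A quick local sketch of what `g=u` does (the scratch path is only for illustration):

```shell
# Simulate the permission fix applied in a Dockerfile build stage.
# (In a real build you would also run `chgrp -R 0` on the path; the build
# stage runs as root, so changing group ownership to GID 0 succeeds there.)
mkdir -p /tmp/dhi-demo/cache
chmod 750 /tmp/dhi-demo/cache        # owner rwx, group r-x, other ---
chmod -R g=u /tmp/dhi-demo/cache     # copy the owner bits onto the group bits
stat -c '%a' /tmp/dhi-demo/cache     # prints 770: group now matches owner
```

Any process whose primary or supplementary group matches the directory's group, including OpenShift's arbitrary UID (which always carries GID 0), now gets owner-equivalent access.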
+ +This guide explains how to deploy Docker Hardened Images in OpenShift +environments, covering Security Context Constraints (SCCs), arbitrary user ID +assignment, file permission requirements, and best practices for both runtime +and development image variants. + +## How OpenShift security differs from Kubernetes + +OpenShift extends Kubernetes with Security Context Constraints (SCCs), which +control what actions a pod can perform and what resources it can access. While +vanilla Kubernetes uses Pod Security Standards (PSS) for similar purposes, SCCs +are more granular and enforced by default. + +The key differences that affect DHI deployments: + +**Arbitrary user IDs.** By default, OpenShift runs containers using an +arbitrarily assigned user ID (UID) from a range allocated to each project. The +default `restricted-v2` SCC (introduced in OpenShift 4.11) uses the +`MustRunAsRange` strategy, which overrides the `USER` directive in the container +image with a UID from the project’s allocated range (typically starting above +1000000000). This means even though a DHI image specifies a nonroot user +(UID 65532), OpenShift will run the container as a different, unpredictable UID. + +**Root group requirement.** OpenShift assigns the arbitrary UID to the root +group (GID 0). The container process always runs with `gid=0(root)`. Any +directories or files that the process needs to write to must be owned by the +root group (GID 0) with group read/write permissions. This is documented in the +[Red Hat guidelines for creating images](https://docs.openshift.com/container-platform/4.14/openshift_images/create-images.html#use-uid_create-images). + +> [!IMPORTANT] +> +> DHI images set file ownership to `nonroot:nonroot` (65532:65532) by default. +> Because the OpenShift arbitrary UID is NOT in the `nonroot` group (65532), it +> cannot write to those files — even though the pod is admitted by the SCC and +> the container starts. 
You must change group ownership to GID 0 for any
> writable path. This is the most common source of permission errors when
> deploying DHI on OpenShift.

**Capability restrictions.** The `restricted-v2` SCC drops all Linux
capabilities by default and enforces `allowPrivilegeEscalation: false`,
`runAsNonRoot: true`, and a `seccompProfile` of type `RuntimeDefault`. DHI
runtime images already satisfy these constraints because they run as a non-root
user and don’t require elevated capabilities.

## Pull DHI images into OpenShift

Before deploying, create an image pull secret so your OpenShift cluster can
authenticate to the DHI registry or your mirrored repository on Docker Hub.

### Create an image pull secret

```console
oc create secret docker-registry dhi-pull-secret \
  --docker-server=docker.io \
  --docker-username=<your-docker-username> \
  --docker-password=<your-access-token> \
  --docker-email=<your-email>
```

If you’re pulling directly from `dhi.io` instead of a mirrored repository, set
`--docker-server=dhi.io`.

### Link the secret to a service account

Link the pull secret to the `default` service account in your project so that
all deployments can pull DHI images automatically:

```console
oc secrets link default dhi-pull-secret --for=pull
```

To use the secret with a specific service account instead:

```console
oc secrets link <service-account-name> dhi-pull-secret --for=pull
```

## Build OpenShift-compatible images from DHI

DHI runtime images are distroless — they contain no shell, no package manager,
and no `RUN`-capable environment. This means you **cannot use `RUN` commands in
the runtime stage** of your Dockerfile. All file permission adjustments for
OpenShift must happen in the `-dev` build stage, and the results must be copied
into the runtime stage using `COPY --chown`.

The core pattern for OpenShift compatibility:

1. Use a DHI `-dev` variant as the build stage (it has a shell).
1. Build your application and set GID 0 ownership in the build stage.
1. 
Copy the results into the DHI runtime image using `COPY --chown=:0`. + +### Example: Nginx for OpenShift + +```dockerfile +# Build stage — has a shell, can run commands +FROM YOUR_ORG/dhi-nginx:1.29-alpine3.23-dev AS build + +# Copy custom config and set root group ownership +COPY nginx.conf /tmp/nginx.conf +COPY default.conf /tmp/default.conf + +# Prepare writable directories with GID 0 +# (Nginx needs to write to cache, logs, and PID file locations) +RUN mkdir -p /tmp/nginx-cache /tmp/nginx-run && \ + chgrp -R 0 /tmp/nginx-cache /tmp/nginx-run && \ + chmod -R g=u /tmp/nginx-cache /tmp/nginx-run + +# Runtime stage — distroless, NO shell, NO RUN commands +FROM YOUR_ORG/dhi-nginx:1.29-alpine3.23 + +COPY --from=build --chown=65532:0 /tmp/nginx.conf /etc/nginx/nginx.conf +COPY --from=build --chown=65532:0 /tmp/default.conf /etc/nginx/conf.d/default.conf +COPY --from=build --chown=65532:0 /tmp/nginx-cache /var/cache/nginx +COPY --from=build --chown=65532:0 /tmp/nginx-run /var/run +``` + +> [!IMPORTANT] +> +> Always use `--chown=:0` (user:root-group) when copying files into the +> runtime stage. This ensures the arbitrary UID that OpenShift assigns can +> access the files through root group membership. Never use `RUN` in the runtime +> stage — distroless DHI images have no shell. + +> [!NOTE] +> +> The UID for DHI images varies by image. Most use 65532 (`nonroot`), but some +> (like the Node.js image) may use a different UID. 
Verify with: +> `docker inspect dhi.io/: --format '{{.Config.User}}'` + +Deploy to OpenShift: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-dhi +spec: + replicas: 1 + selector: + matchLabels: + app: nginx-dhi + template: + metadata: + labels: + app: nginx-dhi + spec: + containers: + - name: nginx + image: YOUR_ORG/dhi-nginx:1.29-alpine3.23 + ports: + - containerPort: 8080 + securityContext: + allowPrivilegeEscalation: false + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + capabilities: + drop: + - ALL + imagePullSecrets: + - name: dhi-pull-secret +``` + +DHI Nginx listens on port 8080 by default (not 80), which is compatible with +the non-root requirement. No SCC changes are needed. + +### Example: Node.js application for OpenShift + +```dockerfile +# Build stage — dev variant has shell and npm +FROM YOUR_ORG/dhi-node:24-alpine3.23-dev AS build +WORKDIR /app +COPY package*.json ./ +RUN npm ci +COPY . . +RUN npm run build + +# Set GID 0 on everything the runtime needs to write +RUN chgrp -R 0 /app/dist /app/node_modules && \ + chmod -R g=u /app/dist /app/node_modules + +# Runtime stage — distroless, NO shell +FROM YOUR_ORG/dhi-node:24-alpine3.23 +WORKDIR /app +COPY --from=build --chown=65532:0 /app/dist ./dist +COPY --from=build --chown=65532:0 /app/node_modules ./node_modules +CMD ["node", "dist/index.js"] +``` + +Deploy: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: node-app +spec: + replicas: 2 + selector: + matchLabels: + app: node-app + template: + metadata: + labels: + app: node-app + spec: + containers: + - name: app + image: YOUR_ORG/dhi-node-app:latest + ports: + - containerPort: 3000 + securityContext: + allowPrivilegeEscalation: false + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + capabilities: + drop: + - ALL + imagePullSecrets: + - name: dhi-pull-secret +``` + +## Handle arbitrary user IDs + +OpenShift’s `restricted-v2` SCC assigns a random UID to the container process. 
+This UID won’t exist in `/etc/passwd` inside the image, but the container will +still run — the process just won’t have a username associated with it. + +This can cause issues with applications that: + +- Look up the current user’s home directory or username +- Write to directories owned by a specific UID +- Check `/etc/passwd` for the running user + +### Add a passwd entry for the arbitrary UID + +Some applications (notably those using certain Python or Java libraries) require +a valid `/etc/passwd` entry for the running user. You can handle this with a +wrapper entrypoint script. + +Because this pattern requires a shell, it only works with DHI `-dev` variants or +with a DHI Enterprise customized image that includes a shell. Prepare the image +in the build stage: + +```dockerfile +FROM YOUR_ORG/dhi-python:3.13-alpine3.23-dev AS build +# ... build your application ... + +# Make /etc/passwd group-writable so the entrypoint can append to it +RUN chgrp 0 /etc/passwd && chmod g=u /etc/passwd + +# Create the entrypoint wrapper +RUN printf '#!/bin/sh\n\ +if ! whoami > /dev/null 2>&1; then\n\ + if [ -w /etc/passwd ]; then\n\ + echo "${USER_NAME:-appuser}:x:$(id -u):0:dynamic user:/tmp:/sbin/nologin" >> /etc/passwd\n\ + fi\n\ +fi\n\ +exec "$@"\n' > /entrypoint.sh && chmod +x /entrypoint.sh + +# This pattern requires a -dev variant as runtime (has shell) +FROM YOUR_ORG/dhi-python:3.13-alpine3.23-dev +COPY --from=build --chown=65532:0 /app ./app +COPY --from=build --chown=65532:0 /entrypoint.sh /entrypoint.sh +COPY --from=build --chown=65532:0 /etc/passwd /etc/passwd +USER 65532 +ENTRYPOINT ["/entrypoint.sh"] +CMD ["python", "app/main.py"] +``` + +> [!NOTE] +> +> For distroless runtime images (no shell), the passwd-injection pattern is not +> possible. Instead, use the `nonroot` SCC (described below) to run with the +> image’s built-in UID so the existing `/etc/passwd` entry matches the running +> process. 
Alternatively, OpenShift 4.x automatically injects the arbitrary UID +> into `/etc/passwd` in most cases, which resolves this for many applications. + +## Use the nonroot SCC for fixed UIDs + +If your application requires running as the specific UID defined in the image +(typically 65532 for DHI), you can use the `nonroot` SCC instead of the default +`restricted-v2`. The `nonroot` SCC uses the `MustRunAsNonRoot` strategy, which +allows any non-zero UID. + +> [!IMPORTANT] +> +> For the `nonroot` SCC to work, the image’s `USER` directive must specify a +> **numeric** UID (for example, `65532`), not a username string like `nonroot`. +> OpenShift cannot verify that a username maps to a non-zero UID. Verify your +> DHI image with: +> `docker inspect YOUR_ORG/dhi-node:24-alpine3.23 --format '{{.Config.User}}'` +> If the output is a string rather than a number, set `runAsUser` explicitly in +> the pod spec. + +Create a service account and grant it the `nonroot` SCC: + +```console +oc create serviceaccount dhi-nonroot +oc adm policy add-scc-to-user nonroot -z dhi-nonroot +``` + +Reference the service account in your deployment: + +```yaml +spec: + template: + spec: + serviceAccountName: dhi-nonroot + containers: + - name: app + image: YOUR_ORG/dhi-node:24-alpine3.23 + securityContext: + runAsUser: 65532 + runAsNonRoot: true + allowPrivilegeEscalation: false + seccompProfile: + type: RuntimeDefault + capabilities: + drop: + - ALL +``` + +Verify the SCC assignment after deployment: + +```console +oc get pod -o jsonpath='{.metadata.annotations.openshift\.io/scc}' +``` + +This should return `nonroot`. + +When using the `nonroot` SCC with a fixed UID, the process runs as 65532 +(matching the image’s file ownership), so the GID 0 adjustments are not strictly +required for paths already owned by 65532. However, applying `chown :0` is +still recommended for portability across both `restricted-v2` and `nonroot` SCCs. 
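The numeric-`USER` requirement described above is easy to gate in CI before a deployment ever reaches the cluster. A minimal sketch; the `user` value is hard-coded for illustration, and in a pipeline you would capture it from `docker inspect --format '{{.Config.User}}'` as shown earlier:

```shell
# Reject USER values that are not plain numeric UIDs, since the
# MustRunAsNonRoot strategy cannot validate username strings.
user="65532"   # in CI: user="$(docker inspect "$IMAGE" --format '{{.Config.User}}')"
case "$user" in
  ''|*[!0-9]*)
    echo "USER is '$user': not numeric, set runAsUser explicitly in the pod spec"
    exit 1
    ;;
  *)
    echo "USER is numeric ($user): compatible with the nonroot SCC"
    ;;
esac
```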
+ +## Use DHI dev variants in OpenShift + +DHI `-dev` variants include a shell, package manager, and development tools. +They run as root (UID 0) by default, which conflicts with OpenShift’s +`restricted-v2` SCC. There are three approaches: + +### Option 1: Use dev variants only in build stages (recommended) + +Use `-dev` variants only in Dockerfile build stages and never deploy them +directly to OpenShift: + +```dockerfile +FROM YOUR_ORG/dhi-node:24-alpine3.23-dev AS build +WORKDIR /app +COPY package*.json ./ +RUN npm ci +COPY . . +RUN npm run build + +# Set root group ownership for OpenShift compatibility +RUN chgrp -R 0 /app/dist /app/node_modules && \ + chmod -R g=u /app/dist /app/node_modules + +FROM YOUR_ORG/dhi-node:24-alpine3.23 +WORKDIR /app +COPY --from=build --chown=65532:0 /app/dist ./dist +COPY --from=build --chown=65532:0 /app/node_modules ./node_modules +CMD ["node", "dist/index.js"] +``` + +The final runtime image is non-root and distroless, fully compatible with +`restricted-v2`. + +### Option 2: Grant the anyuid SCC for debugging + +If you need to run a `-dev` variant directly in OpenShift for debugging, grant +the `anyuid` SCC to a dedicated service account: + +```console +oc create serviceaccount dhi-debug +oc adm policy add-scc-to-user anyuid -z dhi-debug +``` + +Then reference it in your pod: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: dhi-debug +spec: + serviceAccountName: dhi-debug + containers: + - name: debug + image: YOUR_ORG/dhi-node:24-alpine3.23-dev + command: ["sleep", "infinity"] + imagePullSecrets: + - name: dhi-pull-secret +``` + +> [!IMPORTANT] +> +> The `anyuid` SCC allows running as any UID including root. Only use this for +> temporary debugging — never in production workloads. + +### Option 3: Use oc debug or ephemeral containers + +For distroless runtime images with no shell, use OpenShift-native debugging +tools instead of `docker debug` (which only works with Docker Engine, not with +CRI-O on OpenShift). 
+ +Use `oc debug` to create a copy of a pod with a debug shell: + +```console +# Create a debug pod based on a deployment +oc debug deployment/nginx-dhi + +# Override the image to use a -dev variant with a shell +oc debug deployment/nginx-dhi --image=YOUR_ORG/dhi-node:24-alpine3.23-dev +``` + +Use ephemeral containers (OpenShift 4.12+ / Kubernetes 1.25+): + +```console +kubectl debug -it --image=YOUR_ORG/dhi-node:24-alpine3.23-dev \ + --target=app -- sh +``` + +This attaches a temporary debug container to a running pod without restarting +it, sharing the pod’s process namespace. + +> [!NOTE] +> +> `docker debug` is a Docker Desktop/CLI feature for local development. It is +> not available on OpenShift clusters, which use CRI-O as their container +> runtime. + +## Deploy DHI Helm charts on OpenShift + +DHI provides pre-configured Helm charts for popular applications. When deploying +these charts on OpenShift, you may need to adjust security context settings. + +### Inspect chart values first + +Before installing, check what security context values the chart exposes: + +```console +helm registry login dhi.io + +helm show values oci://dhi.io/ --version | grep -A 20 securityContext +``` + +The available value paths vary by chart, so always check `values.yaml` before +setting overrides. + +### Install with OpenShift overrides + +The following example shows a typical installation pattern. 
Adjust the `--set` +paths based on what `helm show values` returns for your specific chart: + +```console +helm install my-release oci://dhi.io/ \ + --version \ + --set "imagePullSecrets[0].name=dhi-pull-secret" \ + -f openshift-values.yaml +``` + +Create an `openshift-values.yaml` with security context overrides appropriate +for your chart: + +```yaml +# Example — adjust keys based on `helm show values` output +podSecurityContext: + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + +securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL +``` + +> [!NOTE] +> +> DHI Helm chart value paths are not standardized across charts. For example, +> one chart may use `image.imagePullSecrets`, while another uses +> `global.imagePullSecrets`. Always consult the specific chart’s documentation +> or `values.yaml`. + +## Verify your deployment + +After deploying a DHI image to OpenShift, verify the security configuration. + +### Check the assigned SCC + +```console +oc get pods -o 'custom-columns=NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc' +``` + +Runtime DHI images should show `restricted-v2` (or `nonroot` if you configured +it). + +### Check the running UID + +```console +oc exec -- id +``` + +With the `restricted-v2` SCC, you should see output like: + +``` +uid=1000650000 gid=0(root) groups=0(root),1000650000 +``` + +The UID is from the project’s allocated range, and the primary GID is always 0 +(root group). With the `nonroot` SCC and `runAsUser: 65532`, you would see +`uid=65532`. + +### Confirm the image is distroless + +```console +oc exec -- sh -c "echo hello" +``` + +For runtime (non-dev) DHI images, this command should fail with an error +indicating that `sh` was not found in `$PATH`. The exact error format varies +between CRI-O versions. 
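Because success here means the command fails, it helps to invert the exit status when scripting this check. A small helper sketch; the pod name in the commented usage line is a placeholder:

```shell
# Succeed when a command fails: used to assert that no shell exists
# in a distroless runtime image.
must_fail() {
  if "$@" >/dev/null 2>&1; then
    echo "FAIL: '$*' unexpectedly succeeded"
    return 1
  fi
  echo "OK: '$*' failed as expected"
}

# Against the cluster (pod name is a placeholder):
# must_fail oc exec my-dhi-pod -- sh -c "echo hello"
```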
+ +### Scan the deployed image + +Use Docker Scout to verify the security posture of the deployed image (run this +from your local machine, not on the cluster): + +```console +docker scout cves YOUR_ORG/dhi-nginx:1.29-alpine3.23 +docker scout quickview YOUR_ORG/dhi-nginx:1.29-alpine3.23 +``` + +## Common issues and solutions + +**Pod fails to start with “container has runAsNonRoot and image has group or +user ID set to root.”** This happens when deploying a DHI `-dev` variant with +the default `restricted-v2` SCC. Either use the runtime variant instead, or +grant the `anyuid` SCC to the service account. + +**Application cannot write to a directory.** The arbitrary UID assigned by +OpenShift doesn’t have write permissions. This is the most common issue with DHI +on OpenShift. All writable paths must be owned by GID 0 with group write +permissions. Fix this in the build stage: +`chgrp -R 0 /path && chmod -R g=u /path`, then `COPY --chown=:0` into the +runtime stage. + +**Application fails with “user not found” or “no matching entries in passwd +file.”** Some applications require a valid `/etc/passwd` entry. OpenShift 4.x +automatically injects the arbitrary UID into `/etc/passwd` in most cases. If +your application still fails, use the passwd-injection pattern (requires a `-dev` +variant) or use the `nonroot` SCC to run with the image’s built-in UID. + +**Pod fails to bind to port 80 or 443.** Ports below 1024 require root +privileges. DHI images use unprivileged ports by default (for example, Nginx +uses 8080). Configure your OpenShift Service to map the external port to the +container’s unprivileged port: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: nginx-dhi +spec: + ports: + - port: 80 + targetPort: 8080 + selector: + app: nginx-dhi +``` + +**ImagePullBackOff with “unauthorized: authentication required.”** Verify the +pull secret is correctly configured and linked to the service account. 
Check +with `oc get secret dhi-pull-secret` and `oc describe sa default`. + +**Dockerfile build fails with “exec: not found” in runtime stage.** You are +using `RUN` in a distroless runtime stage. DHI runtime images have no shell, so +`RUN` commands cannot execute. Move all `RUN` commands to the `-dev` build stage +and use `COPY --chown` to transfer results. + +## DHI and OpenShift compatibility summary + +|Feature |DHI runtime |DHI `-dev` |DHI with Enterprise customization| +|-----------------------------|---------------------------------|--------------------|---------------------------------| +|Default SCC (`restricted-v2`)|Yes, with GID 0 permissions |Requires `anyuid` |Yes, with GID 0 permissions | +|Non-root by default |Yes (UID 65532) |No (root) |Yes (configurable UID) | +|Arbitrary UID support |Yes, with `chown :0` |Yes |Yes, with `chown :0` | +|Distroless (no shell) |Yes — no `RUN` in Dockerfile |No |Yes — no `RUN` in Dockerfile | +|Unprivileged ports |Yes (above 1024) |Configurable |Yes (above 1024) | +|SLSA Build Level 3 |Yes |Yes |Yes | +|Debug on cluster |`oc debug` / ephemeral containers|`oc exec` with shell|`oc debug` / ephemeral containers| + +## What’s next + +- [Use an image in Kubernetes](/dhi/how-to/k8s/) — general DHI Kubernetes deployment guide. +- [Customize an image](/dhi/how-to/customize/) — add packages to DHI images using Enterprise customization. +- [Debug a container](/dhi/how-to/debug/) — troubleshoot distroless containers with Docker Debug (local development). +- [Managing SCCs](https://docs.openshift.com/container-platform/4.14/authentication/managing-security-context-constraints.html) — Red Hat’s reference documentation on Security Context Constraints. +- [Creating images for OpenShift](https://docs.openshift.com/container-platform/4.14/openshift_images/create-images.html) — Red Hat’s guidelines for building OpenShift-compatible container images. 
From d5a6309099383b98059d2008785a16d09b602261 Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Sat, 7 Mar 2026 01:48:15 +0530 Subject: [PATCH 02/16] Update dhi-openshift.md --- content/guides/dhi-openshift.md | 42 ++++++++++++++++----------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/content/guides/dhi-openshift.md b/content/guides/dhi-openshift.md index 00f5c510a5e2..d264f74dac7e 100644 --- a/content/guides/dhi-openshift.md +++ b/content/guides/dhi-openshift.md @@ -1,17 +1,17 @@ ------ - -## title: Use Docker Hardened Images with OpenShift +--- +title: Use Docker Hardened Images with Red Hat OpenShift description: Deploy Docker Hardened Images on Red Hat OpenShift Container Platform, covering Security Context Constraints, arbitrary user ID assignment, file permissions, and best practices. -keywords: docker hardened images, dhi, openshift, OCP, SCC, security context constraints, nonroot, distroless, containers, red hat -tags: [“Docker Hardened Images”, “dhi”] +keywords: docker hardened images, dhi, openshift, OCP, SCC, security context constraints, non-root, distroless, containers, red hat +tags: ["Docker Hardened Images", "dhi"] params: -proficiencyLevel: Intermediate -time: 30 minutes -prerequisites: | -- An OpenShift cluster (version 4.11 or later recommended) -- The `oc` CLI authenticated to your cluster -- A Docker Hub account with access to Docker Hardened Images -- Familiarity with OpenShift Security Context Constraints (SCCs) + proficiencyLevel: Intermediate + time: 30 minutes + prerequisites: + - An OpenShift cluster (version 4.11 or later recommended) + - The oc CLI authenticated to your cluster + - A Docker Hub account with access to Docker Hardened Images + - Familiarity with OpenShift Security Context Constraints (SCCs) +--- Docker Hardened Images (DHI) can be deployed on Red Hat OpenShift Container Platform, but OpenShift’s security model differs from standard Kubernetes in @@ -38,8 
+38,8 @@ The key differences that affect DHI deployments: arbitrarily assigned user ID (UID) from a range allocated to each project. The default `restricted-v2` SCC (introduced in OpenShift 4.11) uses the `MustRunAsRange` strategy, which overrides the `USER` directive in the container -image with a UID from the project’s allocated range (typically starting above -1000000000). This means even though a DHI image specifies a nonroot user +image with a UID from the project’s allocated range (typically starting higher than +1000000000). This means even though a DHI image specifies a non-root user (UID 65532), OpenShift will run the container as a different, unpredictable UID. **Root group requirement.** OpenShift assigns the arbitrary UID to the root @@ -254,7 +254,7 @@ This can cause issues with applications that: - Write to directories owned by a specific UID - Check `/etc/passwd` for the running user -### Add a passwd entry for the arbitrary UID +### Add a `passwd` entry for the arbitrary UID Some applications (notably those using certain Python or Java libraries) require a valid `/etc/passwd` entry for the running user. You can handle this with a @@ -293,12 +293,12 @@ CMD ["python", "app/main.py"] > [!NOTE] > > For distroless runtime images (no shell), the passwd-injection pattern is not -> possible. Instead, use the `nonroot` SCC (described below) to run with the +> possible. Instead, use the `nonroot` SCC (described in the following section) to run with the > image’s built-in UID so the existing `/etc/passwd` entry matches the running > process. Alternatively, OpenShift 4.x automatically injects the arbitrary UID > into `/etc/passwd` in most cases, which resolves this for many applications. 
-## Use the nonroot SCC for fixed UIDs +## Use the non-root SCC for fixed UIDs If your application requires running as the specific UID defined in the image (typically 65532 for DHI), you can use the `nonroot` SCC instead of the default @@ -389,7 +389,7 @@ CMD ["node", "dist/index.js"] The final runtime image is non-root and distroless, fully compatible with `restricted-v2`. -### Option 2: Grant the anyuid SCC for debugging +### Option 2: Grant the `anyuid` SCC for debugging If you need to run a `-dev` variant directly in OpenShift for debugging, grant the `anyuid` SCC to a dedicated service account: @@ -421,7 +421,7 @@ spec: > The `anyuid` SCC allows running as any UID including root. Only use this for > temporary debugging — never in production workloads. -### Option 3: Use oc debug or ephemeral containers +### Option 3: Use `oc debug` or ephemeral containers For distroless runtime images with no shell, use OpenShift-native debugging tools instead of `docker debug` (which only works with Docker Engine, not with @@ -576,7 +576,7 @@ automatically injects the arbitrary UID into `/etc/passwd` in most cases. If your application still fails, use the passwd-injection pattern (requires a `-dev` variant) or use the `nonroot` SCC to run with the image’s built-in UID. -**Pod fails to bind to port 80 or 443.** Ports below 1024 require root +**Pod fails to bind to port 80 or 443.** Ports lower than 1024 require root privileges. DHI images use unprivileged ports by default (for example, Nginx uses 8080). Configure your OpenShift Service to map the external port to the container’s unprivileged port: @@ -611,7 +611,7 @@ and use `COPY --chown` to transfer results. 
|Non-root by default |Yes (UID 65532) |No (root) |Yes (configurable UID) | |Arbitrary UID support |Yes, with `chown :0` |Yes |Yes, with `chown :0` | |Distroless (no shell) |Yes — no `RUN` in Dockerfile |No |Yes — no `RUN` in Dockerfile | -|Unprivileged ports |Yes (above 1024) |Configurable |Yes (above 1024) | +|Unprivileged ports |Yes (higher than 1024) |Configurable |Yes (higher than 1024) | |SLSA Build Level 3 |Yes |Yes |Yes | |Debug on cluster |`oc debug` / ephemeral containers|`oc exec` with shell|`oc debug` / ephemeral containers| From 45e21819ac3a754ec10e88833fa1fc82dbba9bdc Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Sat, 7 Mar 2026 01:53:46 +0530 Subject: [PATCH 03/16] Update dhi-openshift.md --- content/guides/dhi-openshift.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/guides/dhi-openshift.md b/content/guides/dhi-openshift.md index d264f74dac7e..b0e2ec87b8e6 100644 --- a/content/guides/dhi-openshift.md +++ b/content/guides/dhi-openshift.md @@ -528,7 +528,7 @@ oc exec -- id With the `restricted-v2` SCC, you should see output like: -``` +```text uid=1000650000 gid=0(root) groups=0(root),1000650000 ``` From db04a0d2df990683f6f23c0210045d171e546c3a Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Sat, 7 Mar 2026 08:48:55 +0530 Subject: [PATCH 04/16] Create dhi-backstage.md --- content/guides/dhi-backstage.md | 469 ++++++++++++++++++++++++++++++++ 1 file changed, 469 insertions(+) create mode 100644 content/guides/dhi-backstage.md diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md new file mode 100644 index 000000000000..21ff7f0bde22 --- /dev/null +++ b/content/guides/dhi-backstage.md @@ -0,0 +1,469 @@ +--- +title: Secure a Backstage application with Docker Hardened Images +description: Secure a Backstage developer portal container using Docker Hardened Images, covering native module compilation, multi-stage builds, 
Socket Firewall protection, and distroless runtime images. +keywords: docker hardened images, dhi, backstage, CNCF, developer portal, node.js, native modules, sqlite, better-sqlite3, distroless, socket firewall, dhictl, multi-stage build +tags: ["Docker Hardened Images", "dhi"] +params: + proficiencyLevel: Intermediate + time: 45 minutes + prerequisites: + - Docker Desktop or Docker Engine with BuildKit enabled + - A Docker Hub account authenticated with docker login and docker login dhi.io + - A Backstage project created with @backstage/create-app + - Basic familiarity with multi-stage Dockerfiles and Node.js native modules +--- + +This guide shows how to secure a Backstage application using Docker Hardened Images (DHI). Backstage is a CNCF open source developer portal used by thousands of organizations to manage their software catalogs, templates, and developer tooling. + +By the end of this guide, you'll have a Backstage container image that is distroless, runs as a non-root user by default, and has dramatically fewer CVEs than the standard `node:24-trixie-slim` base image while still supporting the native module compilation that Backstage requires. + +## Prerequisites + +- Docker Desktop or Docker Engine with BuildKit enabled +- A Docker Hub account authenticated with `docker login` and `docker login dhi.io` +- A Backstage project created with `@backstage/create-app` + +## Why Backstage needs customization + +The DHI migration examples cover applications where you can swap the base image and everything works. Backstage is different. It uses `better-sqlite3` and other packages that compile native Node.js modules at install time, which means the build stage needs `g++`, `make`, `python3`, and `sqlite-dev` — none of which are in the base `dhi.io/node` image. The runtime image only needs the shared library (`sqlite-libs`) that the compiled native module links against. + +This is a common pattern. 
Any Node.js application that depends on native addons (such as `bcrypt`, `sharp`, `sqlite3`, or `node-canvas`) faces the same challenge. The approach in this guide applies to all of them. + +## Step 1: Examine the original Dockerfile + +The official Backstage documentation recommends a multi-stage Dockerfile using `node:24-trixie-slim` (Debian). A typical setup looks like this: + +```dockerfile +# Stage 1 - Create yarn install skeleton layer +FROM node:24-trixie-slim AS packages +WORKDIR /app +COPY backstage.json package.json yarn.lock ./ +COPY .yarn ./.yarn +COPY .yarnrc.yml ./ +COPY packages packages +COPY plugins plugins +RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 \ + -exec rm -rf {} \+ + +# Stage 2 - Install dependencies and build packages +FROM node:24-trixie-slim AS build +ENV PYTHON=/usr/bin/python3 +RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \ + --mount=type=cache,target=/var/lib/apt,sharing=locked \ + apt-get update && \ + apt-get install -y --no-install-recommends python3 g++ build-essential && \ + rm -rf /var/lib/apt/lists/* +RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \ + --mount=type=cache,target=/var/lib/apt,sharing=locked \ + apt-get update && \ + apt-get install -y --no-install-recommends libsqlite3-dev && \ + rm -rf /var/lib/apt/lists/* +USER node +WORKDIR /app +COPY --from=packages --chown=node:node /app . +RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \ + yarn install --immutable +COPY --chown=node:node . . 
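+# The next steps compile TypeScript, then build the backend into two
+# artifacts: skeleton.tar.gz (only the workspace package.json files, used
+# for the production-only install in the final stage) and bundle.tar.gz
+# (the compiled backend code). Both are unpacked so the final stage can
+# COPY them in without needing a shell.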
+RUN yarn tsc +RUN yarn --cwd packages/backend build +RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \ + && tar xzf packages/backend/dist/skeleton.tar.gz \ + -C packages/backend/dist/skeleton \ + && tar xzf packages/backend/dist/bundle.tar.gz \ + -C packages/backend/dist/bundle + +# Stage 3 - Build the actual backend image +FROM node:24-trixie-slim +ENV PYTHON=/usr/bin/python3 +RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \ + --mount=type=cache,target=/var/lib/apt,sharing=locked \ + apt-get update && \ + apt-get install -y --no-install-recommends python3 g++ build-essential && \ + rm -rf /var/lib/apt/lists/* +RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \ + --mount=type=cache,target=/var/lib/apt,sharing=locked \ + apt-get update && \ + apt-get install -y --no-install-recommends libsqlite3-dev && \ + rm -rf /var/lib/apt/lists/* +USER node +WORKDIR /app +COPY --from=build --chown=node:node /app/.yarn ./.yarn +COPY --from=build --chown=node:node /app/.yarnrc.yml ./ +COPY --from=build --chown=node:node /app/backstage.json ./ +COPY --from=build --chown=node:node /app/yarn.lock \ + /app/package.json \ + /app/packages/backend/dist/skeleton/ ./ +RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \ + yarn workspaces focus --all --production +COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./ +CMD ["node", "packages/backend", "--config", "app-config.yaml"] +``` + +Run this image and inspect what's available inside the container: + +``` +docker build -t backstage:init . +docker run -d \ + -e APP_CONFIG_backend_database_client='better-sqlite3' \ + -e APP_CONFIG_backend_database_connection=':memory:' \ + -e APP_CONFIG_auth_providers_guest_dangerouslyAllowOutsideDevelopment='true' \ + -p 7007:7007 \ + -u 1000 \ + --cap-drop=ALL \ + --read-only \ + --tmpfs /tmp \ + backstage:init +``` + +This works, but the runtime container has a shell, a package manager, and yarn. 
None of these are needed to run Backstage. Run `docker exec` against the running container (substitute its name or ID for `<container>`) to see what's accessible inside:

```
docker exec -it <container> sh
$ cat /etc/shells
# /etc/shells: valid login shells
/bin/sh
/usr/bin/sh
/bin/bash
/usr/bin/bash
/bin/rbash
/usr/bin/rbash
/usr/bin/dash
$ yarn --version
4.12.0
$ dpkg --version
dpkg version 1.22.11 (arm64).
$ whoami
node
$ id
uid=1000(node) gid=1000(node) groups=1000(node)
```

The `node:24-trixie-slim` image ships with three shells (`dash`, `bash`, and `rbash`), a package manager (`dpkg`), and `yarn`. Each of these tools increases the attack surface. An attacker who gains access to this container could use them for lateral movement across your infrastructure.

## Step 2: Switch the build stages to DHI

Replace all three stages with DHI equivalents. DHI Node.js images use Alpine, so the package installation commands change from `apt-get` to `apk`:

```dockerfile
# Stage 1: prepare packages
FROM --platform=$BUILDPLATFORM dhi.io/node:24-alpine3.23-dev AS packages
WORKDIR /app
COPY backstage.json package.json yarn.lock ./
COPY .yarn ./.yarn
COPY .yarnrc.yml ./
COPY packages packages
COPY plugins plugins
RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 \
    -exec rm -rf {} \+

# Stage 2: build the application
FROM --platform=$BUILDPLATFORM dhi.io/node:24-alpine3.23-dev AS build
ENV PYTHON=/usr/bin/python3
RUN apk add --no-cache g++ make python3 sqlite-dev && \
    rm -rf /var/lib/apk/lists/*
WORKDIR /app
COPY --from=packages --chown=node:node /app .
RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
    yarn install --immutable
COPY --chown=node:node . .
+RUN yarn tsc +RUN yarn --cwd packages/backend build +RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \ + && tar xzf packages/backend/dist/skeleton.tar.gz \ + -C packages/backend/dist/skeleton \ + && tar xzf packages/backend/dist/bundle.tar.gz \ + -C packages/backend/dist/bundle + +# Final Stage: create the runtime image +FROM dhi.io/node:24-alpine3.23-dev +ENV PYTHON=/usr/bin/python3 +RUN apk add --no-cache g++ make python3 sqlite-dev && \ + rm -rf /var/lib/apk/lists/* +WORKDIR /app +COPY --from=build --chown=node:node /app/.yarn ./.yarn +COPY --from=build --chown=node:node /app/.yarnrc.yml ./ +COPY --from=build --chown=node:node /app/backstage.json ./ +COPY --from=build --chown=node:node /app/yarn.lock \ + /app/package.json \ + /app/packages/backend/dist/skeleton/ ./ +RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \ + yarn workspaces focus --all --production \ + && rm -rf "$(yarn cache clean)" +COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./ +CMD ["node", "packages/backend", "--config", "app-config.yaml"] +``` + +Build and tag this version: + +``` +docker build -t backstage:dhi-dev . +``` + +> **Note** +> +> The `-dev` variant includes a shell and package manager, which is why `apk add` works. Backstage requires `python3` and native build tools in the runtime image because `yarn workspaces focus --all --production` recompiles native modules during the production install. This is specific to Backstage's build process — most Node.js applications can use the standard (non-dev) DHI runtime variant without additional packages. + +The DHI images come with attestations that the original `node:24-trixie-slim` images don't have. 
Check what's attached: + +``` +docker scout attest list dhi.io/node:24-alpine3.23 +``` + +DHI images ship with 15 attestations including CycloneDX SBOM, SLSA provenance, OpenVEX, Scout health reports, secret scans, virus/malware reports, and an SLSA verification summary. + +## Step 3: Add Socket Firewall protection + +DHI provides `-sfw` (Socket Firewall) variants for Node.js images. Socket Firewall intercepts `npm` and `yarn` commands during the build to detect and block malicious packages before they execute install scripts. + +To enable Socket Firewall, change the `-dev` tags to `-sfw-dev` in all three stages. The SFW version of the Dockerfile: + +```dockerfile +# Stage 1: prepare packages +FROM --platform=$BUILDPLATFORM dhi.io/node:24-alpine3.23-sfw-dev AS packages +WORKDIR /app +COPY backstage.json package.json yarn.lock ./ +COPY .yarn ./.yarn +COPY .yarnrc.yml ./ +COPY packages packages +COPY plugins plugins +RUN find packages \! -name "package.json" -mindepth 2 -maxdepth 2 \ + -exec rm -rf {} \+ + +# Stage 2: build the packages +FROM --platform=$BUILDPLATFORM dhi.io/node:24-alpine3.23-sfw-dev AS build-packages +ENV PYTHON=/usr/bin/python3 +RUN apk add --no-cache g++ make python3 sqlite-dev && \ + rm -rf /var/lib/apk/lists/* +WORKDIR /app +COPY --from=packages --chown=node:node /app . +RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \ + yarn install --immutable +COPY --chown=node:node . . 
+RUN yarn tsc +RUN yarn --cwd packages/backend build +RUN mkdir packages/backend/dist/skeleton packages/backend/dist/bundle \ + && tar xzf packages/backend/dist/skeleton.tar.gz \ + -C packages/backend/dist/skeleton \ + && tar xzf packages/backend/dist/bundle.tar.gz \ + -C packages/backend/dist/bundle + +# Final Stage: create the runtime image +FROM dhi.io/node:24-alpine3.23-sfw-dev +ENV PYTHON=/usr/bin/python3 +RUN apk add --no-cache g++ make python3 sqlite-dev && \ + rm -rf /var/lib/apk/lists/* +WORKDIR /app +COPY --from=build-packages --chown=node:node /app/.yarn ./.yarn +COPY --from=build-packages --chown=node:node /app/.yarnrc.yml ./ +COPY --from=build-packages --chown=node:node /app/backstage.json ./ +COPY --from=build-packages --chown=node:node /app/yarn.lock \ + /app/package.json \ + /app/packages/backend/dist/skeleton/ ./ +RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \ + yarn workspaces focus --all --production \ + && rm -rf "$(yarn cache clean)" +COPY --from=build-packages --chown=node:node /app/packages/backend/dist/bundle/ ./ +CMD ["node", "packages/backend", "--config", "app-config.yaml"] +``` + +Build this version: + +``` +docker build -t backstage:dhi-sfw-dev . +``` + +When you build, you'll see Socket Firewall messages in the build output: `Protected by Socket Firewall` for any `yarn` and `npm` commands executed in the Dockerfile or in the running containers. + +> **Tip** +> +> The `-sfw-dev` variant is larger (1.9 GB versus 1.72 GB) because Socket Firewall adds monitoring tooling. The security benefit during `yarn install` outweighs the size increase. + +## Step 4: Remove the shell and the package manager with DHI Enterprise customizations + +The previous steps still use the `-dev` or `-sfw-dev` variant as the runtime image, which includes a shell and package manager. 
DHI Enterprise customizations let you start from the base (non-dev) image — which has no shell and no package manager — and add only the runtime libraries and language runtimes your application needs.

> **Important**
>
> Add only runtime libraries to the customization; never add build tools, and never install language runtimes as system packages. Compile everything in the build stage (which uses the `-dev` variant), copy the pre-built `node_modules` into the runtime image, and use an OCI artifact for any language runtime needed at runtime.

For Backstage, the runtime image needs:

- **sqlite-libs** — the shared library that the compiled `better-sqlite3` native module links against (added as a system package).
- **Python** — if your Backstage plugins or configuration require Python at runtime. Added as an OCI artifact using the hardened `dhi.io/python` image, which layers the Python runtime onto the Node.js base without introducing a package manager or shell.

Docker continuously rebuilds these customized images with SLSA Build Level 3 provenance and patches them within the guaranteed CVE-patching SLA.

### Using the Docker Hub UI

After you mirror the Node.js DHI repository to your organization's namespace:

1. Open the mirrored Node.js repository in Docker Hub.
2. Select **Customize** and choose the `node:24-alpine3.23` tag.
3. Under **Packages**, add `sqlite-libs`.
4. Under **OCI artifacts**, select your mirrored `dhi-python` repository and include the `/opt/python` path to layer the Python runtime into the image.
5. Create the customization.

For more information, see [Customize an image](/dhi/how-to/customize/).

### Using the `dhictl` CLI

`dhictl` is Docker's command-line tool for managing Docker Hardened Images. It lets you browse the DHI catalog, mirror images, and create customizations directly from your terminal — making it easy to integrate DHI into CI/CD pipelines and infrastructure-as-code workflows.
You can install `dhictl` as a standalone binary or as a Docker CLI plugin (`docker dhi`); it will also be available by default in Docker Desktop soon. + +Rather than writing the customization YAML by hand, use `dhictl` to scaffold a starting point: + +``` +dhictl customization prepare --org YOUR_ORG node 24-alpine3.23 \ + --destination YOUR_ORG/dhi-node \ + --name "backstage" \ + --tag-suffix "_backstage" \ + --output node-backstage.yaml +``` + +Edit the generated file to add the runtime library and the Python OCI artifact: + +```yaml +name: backstage + +source: dhi/node +tag_definition_id: node/alpine-3.23/24 + +destination: YOUR_ORG/dhi-node +tag_suffix: _backstage + +platforms: + - linux/amd64 + - linux/arm64 + +contents: + packages: + - sqlite-libs + artifacts: + - name: YOUR_ORG/dhi-python:3.14-alpine3.23 + includes: + - /opt/python + +accounts: + root: true + runs-as: node + users: + - name: node + uid: 1000 + groups: + - name: node + gid: 1000 + +environment: + PYTHON: /opt/python/bin/python3 + +cmd: + - node +``` + +Then create the customization: + +``` +dhictl customization create --org YOUR_ORG node-backstage.yaml +``` + +Monitor the build progress: + +``` +dhictl customization build list --org YOUR_ORG YOUR_ORG/dhi-node "backstage" +``` + +Docker builds the customized image on its secure infrastructure and publishes it as `YOUR_ORG/dhi-node:24-alpine3.23_backstage`. + +> **Note** +> +> If your Backstage configuration does not require Python at runtime, you can omit the `artifacts` and `environment` sections from the YAML. The `sqlite-libs` package alone is sufficient to run Backstage with `better-sqlite3`. 
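If you do keep the Python OCI artifact, the `environment` entry in the customization does real work: `node-gyp` and similar tooling consult the `PYTHON` environment variable before falling back to whatever `python3` is on `PATH`. A minimal local sketch of that resolution order (the `/opt/python` path is illustrative):

```shell
# Mimic the interpreter lookup: an explicit $PYTHON override wins; otherwise
# fall back to searching PATH for python3. Paths here are illustrative.
PYTHON=/opt/python/bin/python3
resolve_python() {
  if [ -n "${PYTHON:-}" ]; then
    printf '%s\n' "$PYTHON"
  else
    command -v python3 || echo "no python3 on PATH" >&2
  fi
}
resolve_python
```

Without the `ENV PYTHON` line, tools inside the distroless image would search `PATH` and find nothing, because the OCI artifact installs outside the usual system directories.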

### Updated Dockerfile

Update only the final stage of your Dockerfile to use the customized image:

```dockerfile
# Final Stage: create the runtime image
FROM YOUR_ORG/dhi-node:24-alpine3.23_backstage
ENV PYTHON=/opt/python/bin/python3
WORKDIR /app
COPY --from=build --chown=node:node /app/node_modules ./node_modules
COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./
CMD ["node", "packages/backend", "--config", "app-config.yaml"]
```

> **Important**
>
> When the Python runtime is added as an OCI artifact, it installs under `/opt/python/` instead of `/usr/bin/`. Set `ENV PYTHON=/opt/python/bin/python3` so that any Node.js packages requiring Python at runtime can locate the binary. If you omitted the Python OCI artifact, remove this `ENV` line.

Since the customization includes only runtime libraries and OCI artifacts — no build tools, no package manager, no shell — the resulting image is distroless:

```
docker run --rm YOUR_ORG/dhi-node:24-alpine3.23_backstage sh -c "echo hello"
docker: Error response from daemon: ... exec: "sh": executable file not found in $PATH
```

With the Enterprise customization:

- The runtime image is distroless — no shell, no package manager.
- Docker automatically rebuilds your customized image when the base Node.js image or the Python OCI artifact receives a security patch.
- The full chain of trust is maintained, including SLSA Build Level 3 provenance.
- Both the Node.js and Python runtimes are tracked in the image SBOM.

Confirm that a running container no longer has shell access (substitute its name or ID for `<container>`):

```
docker exec -it <container> sh
OCI runtime exec failed: exec failed: unable to start container process: ...
```

Use [Docker Debug](/dhi/how-to/debug/) if you need to troubleshoot a running distroless container.

> **Note**
>
> If your organization requires FIPS or STIG-compliant images, DHI Enterprise also offers those variants.
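Before comparing images, it's worth confirming which of your dependencies actually compile native addons, since those are what force the extra build packages and the `sqlite-libs` runtime library in the first place. A rough sketch, checking a hypothetical `package.json` against a short list of well-known native addons:

```shell
# Write a hypothetical package.json, then flag known native addons among its
# dependencies. Extend the list with any addons your project uses.
cat > /tmp/package.json <<'EOF'
{ "dependencies": { "better-sqlite3": "^11.0.0", "express": "^4.19.0" } }
EOF
for pkg in better-sqlite3 bcrypt sharp sqlite3 node-canvas; do
  grep -q "\"$pkg\"" /tmp/package.json && echo "$pkg: native addon in dependencies" || true
done
```

Only `better-sqlite3` is flagged here; a real project check would also scan transitive dependencies, for example by looking for `binding.gyp` files under `node_modules`.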

## Step 5: Verify the results

Compare the DHI-based image against the original using Docker Scout. Here `backstage:dhi` is the tag given to the image built from the Enterprise-customized Dockerfile in Step 4; substitute whichever tag you built:

```
docker scout compare backstage:dhi \
  --to backstage:init \
  --platform linux/amd64 \
  --ignore-unchanged
```

A typical comparison across the approaches shows results similar to the following:

| Metric | Original | DHI -dev | DHI -sfw-dev | Enterprise |
|--------|----------|----------|--------------|------------|
| Disk usage | 1.61 GB | 1.72 GB | 1.9 GB | 1.49 GB |
| Content size | 268 MB | 288 MB | 328 MB | 247 MB |
| Shell in runtime | Yes | Yes | Yes | No |
| Package manager | Yes | Yes | Yes | No |
| Non-root default | No | No | No | Yes |
| Socket Firewall | No | No | Yes (build) | No |
| SLSA provenance | No | Base only | Base only | Full (Level 3) |

> **Note**
>
> The `-sfw-dev` image is larger because Socket Firewall adds monitoring tooling and because that version's final stage is also built from the `-sfw-dev` base. The security benefit during `yarn install` outweighs the size increase.

For a more thorough assessment, scan with multiple tools:

```
trivy image backstage:dhi
grype backstage:dhi
docker scout quickview backstage:dhi
```

Different scanners detect different issues. Running all three gives you the most complete view of your security posture.

## What's next

- [Customize an image](/dhi/how-to/customize/) — complete reference on the Enterprise customization UI.
- [Create and build a DHI](/dhi/how-to/build/) — learn how to write a DHI definition file and build images locally.
- [Use the DHI CLI](/dhi/how-to/cli/) — manage DHI images, mirrors, and customizations from the command line.
- [Migrate to DHI](/dhi/migration/) — for applications that work with standard DHI images without additional packages.
- [Compare images](/dhi/how-to/compare/) — evaluate security improvements between your original and hardened images.
+- [Docker Debug](/dhi/how-to/debug/) — troubleshoot distroless containers that have no shell. From 3bad0f3f8881ea51c9abb44f190da208a816127b Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Sat, 7 Mar 2026 08:52:27 +0530 Subject: [PATCH 05/16] Delete content/guides/dhi-openshift.md --- content/guides/dhi-openshift.md | 624 -------------------------------- 1 file changed, 624 deletions(-) delete mode 100644 content/guides/dhi-openshift.md diff --git a/content/guides/dhi-openshift.md b/content/guides/dhi-openshift.md deleted file mode 100644 index b0e2ec87b8e6..000000000000 --- a/content/guides/dhi-openshift.md +++ /dev/null @@ -1,624 +0,0 @@ ---- -title: Use Docker Hardened Images with Red Hat OpenShift -description: Deploy Docker Hardened Images on Red Hat OpenShift Container Platform, covering Security Context Constraints, arbitrary user ID assignment, file permissions, and best practices. -keywords: docker hardened images, dhi, openshift, OCP, SCC, security context constraints, non-root, distroless, containers, red hat -tags: ["Docker Hardened Images", "dhi"] -params: - proficiencyLevel: Intermediate - time: 30 minutes - prerequisites: - - An OpenShift cluster (version 4.11 or later recommended) - - The oc CLI authenticated to your cluster - - A Docker Hub account with access to Docker Hardened Images - - Familiarity with OpenShift Security Context Constraints (SCCs) ---- - -Docker Hardened Images (DHI) can be deployed on Red Hat OpenShift Container -Platform, but OpenShift’s security model differs from standard Kubernetes in -ways that require specific configuration. Because OpenShift runs containers -with an arbitrarily assigned user ID rather than the image’s default, you must -adjust file ownership and group permissions in your Dockerfiles to ensure -writable paths remain accessible. 
- -This guide explains how to deploy Docker Hardened Images in OpenShift -environments, covering Security Context Constraints (SCCs), arbitrary user ID -assignment, file permission requirements, and best practices for both runtime -and development image variants. - -## How OpenShift security differs from Kubernetes - -OpenShift extends Kubernetes with Security Context Constraints (SCCs), which -control what actions a pod can perform and what resources it can access. While -vanilla Kubernetes uses Pod Security Standards (PSS) for similar purposes, SCCs -are more granular and enforced by default. - -The key differences that affect DHI deployments: - -**Arbitrary user IDs.** By default, OpenShift runs containers using an -arbitrarily assigned user ID (UID) from a range allocated to each project. The -default `restricted-v2` SCC (introduced in OpenShift 4.11) uses the -`MustRunAsRange` strategy, which overrides the `USER` directive in the container -image with a UID from the project’s allocated range (typically starting higher than -1000000000). This means even though a DHI image specifies a non-root user -(UID 65532), OpenShift will run the container as a different, unpredictable UID. - -**Root group requirement.** OpenShift assigns the arbitrary UID to the root -group (GID 0). The container process always runs with `gid=0(root)`. Any -directories or files that the process needs to write to must be owned by the -root group (GID 0) with group read/write permissions. This is documented in the -[Red Hat guidelines for creating images](https://docs.openshift.com/container-platform/4.14/openshift_images/create-images.html#use-uid_create-images). - -> [!IMPORTANT] -> -> DHI images set file ownership to `nonroot:nonroot` (65532:65532) by default. -> Because the OpenShift arbitrary UID is NOT in the `nonroot` group (65532), it -> cannot write to those files — even though the pod is admitted by the SCC and -> the container starts. 
You must change group ownership to GID 0 for any -> writable path. This is the most common source of permission errors when -> deploying DHI on OpenShift. - -**Capability restrictions.** The `restricted-v2` SCC drops all Linux -capabilities by default and enforces `allowPrivilegeEscalation: false`, -`runAsNonRoot: true`, and a `seccompProfile` of type `RuntimeDefault`. DHI -runtime images already satisfy these constraints because they run as a non-root -user and don’t require elevated capabilities. - -## Pull DHI images into OpenShift - -Before deploying, create an image pull secret so your OpenShift cluster can -authenticate to the DHI registry or your mirrored repository on Docker Hub. - -### Create an image pull secret - -```console -oc create secret docker-registry dhi-pull-secret \ - --docker-server=docker.io \ - --docker-username= \ - --docker-password= \ - --docker-email= -``` - -If you’re pulling directly from `dhi.io` instead of a mirrored repository, set -`--docker-server=dhi.io`. - -### Link the secret to a service account - -Link the pull secret to the `default` service account in your project so that -all deployments can pull DHI images automatically: - -```console -oc secrets link default dhi-pull-secret --for=pull -``` - -To use the secret with a specific service account instead: - -```console -oc secrets link dhi-pull-secret --for=pull -``` - -## Build OpenShift-compatible images from DHI - -DHI runtime images are distroless — they contain no shell, no package manager, -and no `RUN`-capable environment. This means you **cannot use `RUN` commands in -the runtime stage** of your Dockerfile. All file permission adjustments for -OpenShift must happen in the `-dev` build stage, and the results must be copied -into the runtime stage using `COPY --chown`. - -The core pattern for OpenShift compatibility: - -1. Use a DHI `-dev` variant as the build stage (it has a shell). -1. Build your application and set GID 0 ownership in the build stage. -1. 
Copy the results into the DHI runtime image using `COPY --chown=:0`. - -### Example: Nginx for OpenShift - -```dockerfile -# Build stage — has a shell, can run commands -FROM YOUR_ORG/dhi-nginx:1.29-alpine3.23-dev AS build - -# Copy custom config and set root group ownership -COPY nginx.conf /tmp/nginx.conf -COPY default.conf /tmp/default.conf - -# Prepare writable directories with GID 0 -# (Nginx needs to write to cache, logs, and PID file locations) -RUN mkdir -p /tmp/nginx-cache /tmp/nginx-run && \ - chgrp -R 0 /tmp/nginx-cache /tmp/nginx-run && \ - chmod -R g=u /tmp/nginx-cache /tmp/nginx-run - -# Runtime stage — distroless, NO shell, NO RUN commands -FROM YOUR_ORG/dhi-nginx:1.29-alpine3.23 - -COPY --from=build --chown=65532:0 /tmp/nginx.conf /etc/nginx/nginx.conf -COPY --from=build --chown=65532:0 /tmp/default.conf /etc/nginx/conf.d/default.conf -COPY --from=build --chown=65532:0 /tmp/nginx-cache /var/cache/nginx -COPY --from=build --chown=65532:0 /tmp/nginx-run /var/run -``` - -> [!IMPORTANT] -> -> Always use `--chown=:0` (user:root-group) when copying files into the -> runtime stage. This ensures the arbitrary UID that OpenShift assigns can -> access the files through root group membership. Never use `RUN` in the runtime -> stage — distroless DHI images have no shell. - -> [!NOTE] -> -> The UID for DHI images varies by image. Most use 65532 (`nonroot`), but some -> (like the Node.js image) may use a different UID. 
Verify with: -> `docker inspect dhi.io/: --format '{{.Config.User}}'` - -Deploy to OpenShift: - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx-dhi -spec: - replicas: 1 - selector: - matchLabels: - app: nginx-dhi - template: - metadata: - labels: - app: nginx-dhi - spec: - containers: - - name: nginx - image: YOUR_ORG/dhi-nginx:1.29-alpine3.23 - ports: - - containerPort: 8080 - securityContext: - allowPrivilegeEscalation: false - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault - capabilities: - drop: - - ALL - imagePullSecrets: - - name: dhi-pull-secret -``` - -DHI Nginx listens on port 8080 by default (not 80), which is compatible with -the non-root requirement. No SCC changes are needed. - -### Example: Node.js application for OpenShift - -```dockerfile -# Build stage — dev variant has shell and npm -FROM YOUR_ORG/dhi-node:24-alpine3.23-dev AS build -WORKDIR /app -COPY package*.json ./ -RUN npm ci -COPY . . -RUN npm run build - -# Set GID 0 on everything the runtime needs to write -RUN chgrp -R 0 /app/dist /app/node_modules && \ - chmod -R g=u /app/dist /app/node_modules - -# Runtime stage — distroless, NO shell -FROM YOUR_ORG/dhi-node:24-alpine3.23 -WORKDIR /app -COPY --from=build --chown=65532:0 /app/dist ./dist -COPY --from=build --chown=65532:0 /app/node_modules ./node_modules -CMD ["node", "dist/index.js"] -``` - -Deploy: - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: node-app -spec: - replicas: 2 - selector: - matchLabels: - app: node-app - template: - metadata: - labels: - app: node-app - spec: - containers: - - name: app - image: YOUR_ORG/dhi-node-app:latest - ports: - - containerPort: 3000 - securityContext: - allowPrivilegeEscalation: false - runAsNonRoot: true - seccompProfile: - type: RuntimeDefault - capabilities: - drop: - - ALL - imagePullSecrets: - - name: dhi-pull-secret -``` - -## Handle arbitrary user IDs - -OpenShift’s `restricted-v2` SCC assigns a random UID to the container process. 
-This UID won’t exist in `/etc/passwd` inside the image, but the container will -still run — the process just won’t have a username associated with it. - -This can cause issues with applications that: - -- Look up the current user’s home directory or username -- Write to directories owned by a specific UID -- Check `/etc/passwd` for the running user - -### Add a `passwd` entry for the arbitrary UID - -Some applications (notably those using certain Python or Java libraries) require -a valid `/etc/passwd` entry for the running user. You can handle this with a -wrapper entrypoint script. - -Because this pattern requires a shell, it only works with DHI `-dev` variants or -with a DHI Enterprise customized image that includes a shell. Prepare the image -in the build stage: - -```dockerfile -FROM YOUR_ORG/dhi-python:3.13-alpine3.23-dev AS build -# ... build your application ... - -# Make /etc/passwd group-writable so the entrypoint can append to it -RUN chgrp 0 /etc/passwd && chmod g=u /etc/passwd - -# Create the entrypoint wrapper -RUN printf '#!/bin/sh\n\ -if ! whoami > /dev/null 2>&1; then\n\ - if [ -w /etc/passwd ]; then\n\ - echo "${USER_NAME:-appuser}:x:$(id -u):0:dynamic user:/tmp:/sbin/nologin" >> /etc/passwd\n\ - fi\n\ -fi\n\ -exec "$@"\n' > /entrypoint.sh && chmod +x /entrypoint.sh - -# This pattern requires a -dev variant as runtime (has shell) -FROM YOUR_ORG/dhi-python:3.13-alpine3.23-dev -COPY --from=build --chown=65532:0 /app ./app -COPY --from=build --chown=65532:0 /entrypoint.sh /entrypoint.sh -COPY --from=build --chown=65532:0 /etc/passwd /etc/passwd -USER 65532 -ENTRYPOINT ["/entrypoint.sh"] -CMD ["python", "app/main.py"] -``` - -> [!NOTE] -> -> For distroless runtime images (no shell), the passwd-injection pattern is not -> possible. Instead, use the `nonroot` SCC (described in the following section) to run with the -> image’s built-in UID so the existing `/etc/passwd` entry matches the running -> process. 
Alternatively, OpenShift 4.x automatically injects the arbitrary UID -> into `/etc/passwd` in most cases, which resolves this for many applications. - -## Use the non-root SCC for fixed UIDs - -If your application requires running as the specific UID defined in the image -(typically 65532 for DHI), you can use the `nonroot` SCC instead of the default -`restricted-v2`. The `nonroot` SCC uses the `MustRunAsNonRoot` strategy, which -allows any non-zero UID. - -> [!IMPORTANT] -> -> For the `nonroot` SCC to work, the image’s `USER` directive must specify a -> **numeric** UID (for example, `65532`), not a username string like `nonroot`. -> OpenShift cannot verify that a username maps to a non-zero UID. Verify your -> DHI image with: -> `docker inspect YOUR_ORG/dhi-node:24-alpine3.23 --format '{{.Config.User}}'` -> If the output is a string rather than a number, set `runAsUser` explicitly in -> the pod spec. - -Create a service account and grant it the `nonroot` SCC: - -```console -oc create serviceaccount dhi-nonroot -oc adm policy add-scc-to-user nonroot -z dhi-nonroot -``` - -Reference the service account in your deployment: - -```yaml -spec: - template: - spec: - serviceAccountName: dhi-nonroot - containers: - - name: app - image: YOUR_ORG/dhi-node:24-alpine3.23 - securityContext: - runAsUser: 65532 - runAsNonRoot: true - allowPrivilegeEscalation: false - seccompProfile: - type: RuntimeDefault - capabilities: - drop: - - ALL -``` - -Verify the SCC assignment after deployment: - -```console -oc get pod -o jsonpath='{.metadata.annotations.openshift\.io/scc}' -``` - -This should return `nonroot`. - -When using the `nonroot` SCC with a fixed UID, the process runs as 65532 -(matching the image’s file ownership), so the GID 0 adjustments are not strictly -required for paths already owned by 65532. However, applying `chown :0` is -still recommended for portability across both `restricted-v2` and `nonroot` SCCs. 
- -## Use DHI dev variants in OpenShift - -DHI `-dev` variants include a shell, package manager, and development tools. -They run as root (UID 0) by default, which conflicts with OpenShift’s -`restricted-v2` SCC. There are three approaches: - -### Option 1: Use dev variants only in build stages (recommended) - -Use `-dev` variants only in Dockerfile build stages and never deploy them -directly to OpenShift: - -```dockerfile -FROM YOUR_ORG/dhi-node:24-alpine3.23-dev AS build -WORKDIR /app -COPY package*.json ./ -RUN npm ci -COPY . . -RUN npm run build - -# Set root group ownership for OpenShift compatibility -RUN chgrp -R 0 /app/dist /app/node_modules && \ - chmod -R g=u /app/dist /app/node_modules - -FROM YOUR_ORG/dhi-node:24-alpine3.23 -WORKDIR /app -COPY --from=build --chown=65532:0 /app/dist ./dist -COPY --from=build --chown=65532:0 /app/node_modules ./node_modules -CMD ["node", "dist/index.js"] -``` - -The final runtime image is non-root and distroless, fully compatible with -`restricted-v2`. - -### Option 2: Grant the `anyuid` SCC for debugging - -If you need to run a `-dev` variant directly in OpenShift for debugging, grant -the `anyuid` SCC to a dedicated service account: - -```console -oc create serviceaccount dhi-debug -oc adm policy add-scc-to-user anyuid -z dhi-debug -``` - -Then reference it in your pod: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: dhi-debug -spec: - serviceAccountName: dhi-debug - containers: - - name: debug - image: YOUR_ORG/dhi-node:24-alpine3.23-dev - command: ["sleep", "infinity"] - imagePullSecrets: - - name: dhi-pull-secret -``` - -> [!IMPORTANT] -> -> The `anyuid` SCC allows running as any UID including root. Only use this for -> temporary debugging — never in production workloads. - -### Option 3: Use `oc debug` or ephemeral containers - -For distroless runtime images with no shell, use OpenShift-native debugging -tools instead of `docker debug` (which only works with Docker Engine, not with -CRI-O on OpenShift). 
- -Use `oc debug` to create a copy of a pod with a debug shell: - -```console -# Create a debug pod based on a deployment -oc debug deployment/nginx-dhi - -# Override the image to use a -dev variant with a shell -oc debug deployment/nginx-dhi --image=YOUR_ORG/dhi-node:24-alpine3.23-dev -``` - -Use ephemeral containers (OpenShift 4.12+ / Kubernetes 1.25+): - -```console -kubectl debug -it --image=YOUR_ORG/dhi-node:24-alpine3.23-dev \ - --target=app -- sh -``` - -This attaches a temporary debug container to a running pod without restarting -it, sharing the pod’s process namespace. - -> [!NOTE] -> -> `docker debug` is a Docker Desktop/CLI feature for local development. It is -> not available on OpenShift clusters, which use CRI-O as their container -> runtime. - -## Deploy DHI Helm charts on OpenShift - -DHI provides pre-configured Helm charts for popular applications. When deploying -these charts on OpenShift, you may need to adjust security context settings. - -### Inspect chart values first - -Before installing, check what security context values the chart exposes: - -```console -helm registry login dhi.io - -helm show values oci://dhi.io/ --version | grep -A 20 securityContext -``` - -The available value paths vary by chart, so always check `values.yaml` before -setting overrides. - -### Install with OpenShift overrides - -The following example shows a typical installation pattern. 
Adjust the `--set`
paths based on what `helm show values` returns for your specific chart:

```console
helm install my-release oci://dhi.io/CHART_REPO \
  --version CHART_VERSION \
  --set "imagePullSecrets[0].name=dhi-pull-secret" \
  -f openshift-values.yaml
```

Create an `openshift-values.yaml` with security context overrides appropriate
for your chart:

```yaml
# Example — adjust keys based on `helm show values` output
podSecurityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
```

> [!NOTE]
>
> DHI Helm chart value paths are not standardized across charts. For example,
> one chart may use `image.imagePullSecrets`, while another uses
> `global.imagePullSecrets`. Always consult the specific chart's documentation
> or `values.yaml`.

## Verify your deployment

After deploying a DHI image to OpenShift, verify the security configuration.

### Check the assigned SCC

```console
oc get pods -o 'custom-columns=NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc'
```

Runtime DHI images should show `restricted-v2` (or `nonroot` if you configured
it).

### Check the running UID

```console
oc exec POD_NAME -- id
```

With the `restricted-v2` SCC, you should see output like:

```text
uid=1000650000 gid=0(root) groups=0(root),1000650000
```

The UID is from the project's allocated range, and the primary GID is always 0
(root group). With the `nonroot` SCC and `runAsUser: 65532`, you would see
`uid=65532`.

### Confirm the image is distroless

```console
oc exec POD_NAME -- sh -c "echo hello"
```

For runtime (non-dev) DHI images, this command should fail with an error
indicating that `sh` was not found in `$PATH`. The exact error format varies
between CRI-O versions.
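If you take the `nonroot` SCC route mentioned above instead of relying on the
arbitrary-UID model, the wiring might look like the following sketch. The
service account name `dhi-runtime` is an assumption: grant it the SCC with
`oc adm policy add-scc-to-user nonroot -z dhi-runtime`, then pin the image's
built-in UID in the pod spec:

```yaml
# Sketch only: assumes a service account named dhi-runtime
# that has already been granted the nonroot SCC.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-dhi
spec:
  serviceAccountName: dhi-runtime
  containers:
  - name: nginx
    image: YOUR_ORG/dhi-nginx:1.29-alpine3.23
    securityContext:
      runAsUser: 65532    # matches the image's built-in nonroot user
      runAsGroup: 65532
  imagePullSecrets:
  - name: dhi-pull-secret
```

With this in place, `oc exec POD_NAME -- id` reports `uid=65532` rather than a
UID from the project's allocated range.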
### Scan the deployed image

Use Docker Scout to verify the security posture of the deployed image (run this
from your local machine, not on the cluster):

```console
docker scout cves YOUR_ORG/dhi-nginx:1.29-alpine3.23
docker scout quickview YOUR_ORG/dhi-nginx:1.29-alpine3.23
```

## Common issues and solutions

**Pod fails to start with “container has runAsNonRoot and image has group or
user ID set to root.”** This happens when deploying a DHI `-dev` variant with
the default `restricted-v2` SCC. Either use the runtime variant instead, or
grant the `anyuid` SCC to the service account.

**Application cannot write to a directory.** The arbitrary UID assigned by
OpenShift doesn’t have write permissions. This is the most common issue with DHI
on OpenShift. All writable paths must be owned by GID 0 with group write
permissions. Fix this in the build stage:
`chgrp -R 0 /path && chmod -R g=u /path`, then `COPY --chown=65532:0` into the
runtime stage.

**Application fails with “user not found” or “no matching entries in passwd
file.”** Some applications require a valid `/etc/passwd` entry. OpenShift 4.x
automatically injects the arbitrary UID into `/etc/passwd` in most cases. If
your application still fails, use the passwd-injection pattern (requires a `-dev`
variant) or use the `nonroot` SCC to run with the image’s built-in UID.

**Pod fails to bind to port 80 or 443.** Ports below 1024 require root
privileges (or the `NET_BIND_SERVICE` capability). DHI images use unprivileged
ports by default (for example, Nginx uses 8080). Configure your OpenShift
Service to map the external port to the container’s unprivileged port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-dhi
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: nginx-dhi
```

**ImagePullBackOff with “unauthorized: authentication required.”** Verify the
pull secret is correctly configured and linked to the service account.
Check
with `oc get secret dhi-pull-secret` and `oc describe sa default`.

**Dockerfile build fails with “exec: not found” in runtime stage.** You are
using `RUN` in a distroless runtime stage. DHI runtime images have no shell, so
`RUN` commands cannot execute. Move all `RUN` commands to the `-dev` build stage
and use `COPY --chown` to transfer results.

## DHI and OpenShift compatibility summary

| Feature                       | DHI runtime                       | DHI `-dev`           | DHI with Enterprise customization |
|-------------------------------|-----------------------------------|----------------------|-----------------------------------|
| Default SCC (`restricted-v2`) | Yes, with GID 0 permissions       | Requires `anyuid`    | Yes, with GID 0 permissions       |
| Non-root by default           | Yes (UID 65532)                   | No (root)            | Yes (configurable UID)            |
| Arbitrary UID support         | Yes, with `chown :0`              | Yes                  | Yes, with `chown :0`              |
| Distroless (no shell)         | Yes — no `RUN` in Dockerfile      | No                   | Yes — no `RUN` in Dockerfile      |
| Unprivileged ports            | Yes (higher than 1024)            | Configurable         | Yes (higher than 1024)            |
| SLSA Build Level 3            | Yes                               | Yes                  | Yes                               |
| Debug on cluster              | `oc debug` / ephemeral containers | `oc exec` with shell | `oc debug` / ephemeral containers |

## What’s next

- [Use an image in Kubernetes](/dhi/how-to/k8s/) — general DHI Kubernetes deployment guide.
- [Customize an image](/dhi/how-to/customize/) — add packages to DHI images using Enterprise customization.
- [Debug a container](/dhi/how-to/debug/) — troubleshoot distroless containers with Docker Debug (local development).
- [Managing SCCs](https://docs.openshift.com/container-platform/4.14/authentication/managing-security-context-constraints.html) — Red Hat’s reference documentation on Security Context Constraints.
- [Creating images for OpenShift](https://docs.openshift.com/container-platform/4.14/openshift_images/create-images.html) — Red Hat’s guidelines for building OpenShift-compatible container images.
From 61f1174c99d92abcd4b5a62203090d86c602bae9 Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Sat, 7 Mar 2026 08:55:57 +0530 Subject: [PATCH 06/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 21ff7f0bde22..00c5777804b5 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -117,7 +117,7 @@ docker run -d \ This works, but the runtime container has a shell, a package manager, and yarn. None of these are needed to run Backstage. Run `docker exec` to see what's accessible inside: -``` +```console docker exec -it sh $ cat /etc/shells # /etc/shells: valid login shells @@ -195,7 +195,7 @@ CMD ["node", "packages/backend", "--config", "app-config.yaml"] Build and tag this version: -``` +```console docker build -t backstage:dhi-dev . ``` @@ -205,7 +205,7 @@ docker build -t backstage:dhi-dev . The DHI images come with attestations that the original `node:24-trixie-slim` images don't have. Check what's attached: -``` +```console docker scout attest list dhi.io/node:24-alpine3.23 ``` @@ -268,7 +268,7 @@ CMD ["node", "packages/backend", "--config", "app-config.yaml"] Build this version: -``` +```console docker build -t backstage:dhi-sfw-dev . ``` @@ -311,7 +311,7 @@ For more information, see [Customize an image](#). 
Rather than writing the customization YAML by hand, use `dhictl` to scaffold a starting point: -``` +```console dhictl customization prepare --org YOUR_ORG node 24-alpine3.23 \ --destination YOUR_ORG/dhi-node \ --name "backstage" \ @@ -361,13 +361,13 @@ cmd: Then create the customization: -``` +```console dhictl customization create --org YOUR_ORG node-backstage.yaml ``` Monitor the build progress: -``` +```console dhictl customization build list --org YOUR_ORG YOUR_ORG/dhi-node "backstage" ``` @@ -397,7 +397,7 @@ CMD ["node", "packages/backend", "--config", "app-config.yaml"] Since the customization includes only runtime libraries and OCI artifacts — no build tools, no package manager, no shell — the resulting image is distroless: -``` +```console docker run --rm YOUR_ORG/dhi-node:24-alpine3.23_backstage sh -c "echo hello" docker: Error response from daemon: ... exec: "sh": executable file not found in $PATH ``` @@ -411,7 +411,7 @@ With the Enterprise customization: Confirm the container no longer has shell access: -``` +```console docker exec -it sh OCI runtime exec failed: exec failed: unable to start container process: ... 
``` @@ -426,7 +426,7 @@ Use [Docker Debug](#) if you need to troubleshoot a running distroless container Compare the DHI-based image against the original using Docker Scout: -``` +```console docker scout compare backstage:dhi \ --to backstage:init \ --platform linux/amd64 \ @@ -451,7 +451,7 @@ A typical comparison across the approaches shows results similar to the followin For a more thorough assessment, scan with multiple tools: -``` +```console trivy image backstage:dhi grype backstage:dhi docker scout quickview backstage:dhi From f33e24b373ffaa0ad6052afb7998c96630a25d9c Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Sat, 7 Mar 2026 08:58:47 +0530 Subject: [PATCH 07/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 00c5777804b5..2f5ae8cc11cd 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -101,7 +101,7 @@ CMD ["node", "packages/backend", "--config", "app-config.yaml"] Run this image and inspect what's available inside the container: -``` +```console docker build -t backstage:init . docker run -d \ -e APP_CONFIG_backend_database_client='better-sqlite3' \ From 8db13a5c944fe25ad4f3a1e2bee891c5cc3d7f3c Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Sat, 7 Mar 2026 09:01:49 +0530 Subject: [PATCH 08/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 2f5ae8cc11cd..ed69fa0924e6 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -303,7 +303,7 @@ After you mirror the Node.js DHI repository to your organization's namespace: 4. 
Under **OCI artifacts**, select your mirrored `dhi-python` repository and include the `/opt/python` path to layer the Python runtime into the image. 5. Create the customization. -For more information, see [Customize an image](#). +For more information, see [Customize an image](/dhi/how-to/customize/). ### Using the `dhictl` CLI @@ -416,7 +416,7 @@ docker exec -it sh OCI runtime exec failed: exec failed: unable to start container process: ... ``` -Use [Docker Debug](#) if you need to troubleshoot a running distroless container. +Use [Docker Debug](/dhi/how-to/debug/) if you need to troubleshoot a running distroless container. > **Note** > From cb81dc2102f94b9b5402f075f0ba41cd3b262b5f Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Tue, 10 Mar 2026 19:50:34 +0530 Subject: [PATCH 09/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index ed69fa0924e6..3ac19bad7cc8 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -289,7 +289,7 @@ The previous steps still use the `-dev` or `-sfw-dev` variant as the runtime ima For Backstage, the runtime image needs: - **sqlite-libs** — the shared library that the compiled `better-sqlite3` native module links against (added as a system package). -- **Python** — if your Backstage plugins or configuration require Python at runtime. Added as an OCI artifact using the hardened `dhi.io/python` image, which layers the Python runtime onto the Node.js base without introducing a package manager or shell. +- **Python** — if your Backstage plugins or configuration require Python at runtime. Added as the `python-3.14` system package, which installs Python from the hardened DHI package feed. 
Docker will continuously build with SLSA Level 3 compliance and patch these customized images within the guaranteed SLA for CVE patching. @@ -299,9 +299,8 @@ After you mirror the Node.js DHI repository to your organization's namespace: 1. Open the mirrored Node.js repository in Docker Hub. 2. Select **Customize** and choose the `node:24-alpine3.23` tag. -3. Under **Packages**, add `sqlite-libs`. -4. Under **OCI artifacts**, select your mirrored `dhi-python` repository and include the `/opt/python` path to layer the Python runtime into the image. -5. Create the customization. +3. Under **Packages**, add `sqlite-libs` and `python-3.14`. +4. Create the customization. For more information, see [Customize an image](/dhi/how-to/customize/). @@ -337,10 +336,7 @@ platforms: contents: packages: - sqlite-libs - artifacts: - - name: YOUR_ORG/dhi-python:3.14-alpine3.23 - includes: - - /opt/python + - python-3.14 accounts: root: true @@ -384,7 +380,6 @@ Update only the final stage of your Dockerfile to use the customized image: ```dockerfile # Final Stage: create the runtime image FROM YOUR_ORG/dhi-node:24-alpine3.23_backstage -ENV PYTHON=/opt/python/bin/python3 WORKDIR /app COPY --from=build --chown=node:node /app/node_modules ./node_modules COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./ From 9d7fb1aeb27b06385b1dff100c9d5064717e37c3 Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Tue, 10 Mar 2026 19:53:06 +0530 Subject: [PATCH 10/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 3ac19bad7cc8..6f7a6f83eb69 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -371,7 +371,7 @@ Docker builds the customized image on its secure infrastructure and publishes it > **Note** > -> If your Backstage configuration does not require Python at 
runtime, you can omit the `artifacts` and `environment` sections from the YAML. The `sqlite-libs` package alone is sufficient to run Backstage with `better-sqlite3`. +> If your Backstage configuration does not require Python at runtime, you can omit the `python-3.14` from the packages list. The `sqlite-libs` package alone is sufficient to run Backstage with `better-sqlite3`. ### Updated Dockerfile From fdbe2ae7cb25006faf15d757a2d765b254d291ee Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Tue, 10 Mar 2026 19:54:00 +0530 Subject: [PATCH 11/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 6f7a6f83eb69..2e07eee22797 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -386,10 +386,6 @@ COPY --from=build --chown=node:node /app/packages/backend/dist/bundle/ ./ CMD ["node", "packages/backend", "--config", "app-config.yaml"] ``` -> **Important** -> -> When the Python runtime is added as an OCI artifact, it installs under `/opt/python/` instead of `/usr/bin/`. Set `ENV PYTHON=/opt/python/bin/python3` so that any Node.js packages requiring Python at runtime can locate the binary. If you omitted the Python OCI artifact, remove this `ENV` line. 
- Since the customization includes only runtime libraries and OCI artifacts — no build tools, no package manager, no shell — the resulting image is distroless: ```console From b82b97614e1ee2cb9d68459b2af6c1baf79bc2f8 Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Tue, 10 Mar 2026 19:54:38 +0530 Subject: [PATCH 12/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 2e07eee22797..37f7b2e26acc 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -348,9 +348,6 @@ accounts: - name: node gid: 1000 -environment: - PYTHON: /opt/python/bin/python3 - cmd: - node ``` From 428174d9107bd09e5110fb52f8db5462397db301 Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Tue, 10 Mar 2026 19:57:59 +0530 Subject: [PATCH 13/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 37f7b2e26acc..ea69ffff306f 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -318,7 +318,7 @@ dhictl customization prepare --org YOUR_ORG node 24-alpine3.23 \ --output node-backstage.yaml ``` -Edit the generated file to add the runtime library and the Python OCI artifact: +Edit the generated file to add the runtime libraries: ```yaml name: backstage @@ -393,7 +393,7 @@ docker: Error response from daemon: ... exec: "sh": executable file not found in With the Enterprise customization: - The runtime image is distroless — no shell, no package manager. -- Docker automatically rebuilds your customized image when the base Node.js image or the Python OCI artifact receives a security patch. 
+- Docker automatically rebuilds your customized image when the base Node.js image or any of its packages receive a security patch. - The full chain of trust is maintained, including SLSA Build Level 3 provenance. - Both the Node.js and Python runtimes are tracked in the image SBOM. From 916d15bf03819e8360e75d2597de9154b02cd3f2 Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Tue, 10 Mar 2026 20:00:40 +0530 Subject: [PATCH 14/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index ea69ffff306f..226dd4750167 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -1,6 +1,7 @@ --- title: Secure a Backstage application with Docker Hardened Images description: Secure a Backstage developer portal container using Docker Hardened Images, covering native module compilation, multi-stage builds, Socket Firewall protection, and distroless runtime images. +summary: Learn how to secure a Backstage developer portal using Docker Hardened Images (DHI), handle native module compilation with better-sqlite3, add Socket Firewall protection during dependency installation, and produce a distroless runtime image using DHI Enterprise customizations. 
keywords: docker hardened images, dhi, backstage, CNCF, developer portal, node.js, native modules, sqlite, better-sqlite3, distroless, socket firewall, dhictl, multi-stage build tags: ["Docker Hardened Images", "dhi"] params: From 2ab1c435f535f26d22dfa7c43c23323e7ea34467 Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Tue, 10 Mar 2026 20:57:59 +0530 Subject: [PATCH 15/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 226dd4750167..276958508016 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -349,8 +349,6 @@ accounts: - name: node gid: 1000 -cmd: - - node ``` Then create the customization: From c7adae09a1a1933a492bcfa48d59549f9b169702 Mon Sep 17 00:00:00 2001 From: "Ajeet Singh Raina, Docker Captain, ARM Innovator" Date: Tue, 10 Mar 2026 21:00:43 +0530 Subject: [PATCH 16/16] Update dhi-backstage.md --- content/guides/dhi-backstage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/guides/dhi-backstage.md b/content/guides/dhi-backstage.md index 276958508016..6aa2fd319334 100644 --- a/content/guides/dhi-backstage.md +++ b/content/guides/dhi-backstage.md @@ -429,7 +429,7 @@ A typical comparison across the approaches shows results similar to the followin | Shell in runtime | Yes | Yes | Yes | No | | Package manager | Yes | Yes | Yes | No | | Non-root default | No | No | No | Yes | -| Socket Firewall | No | No | Yes (build) | No | +| Socket Firewall | No | No | Yes (build) | Yes (build) / No (runtime) | | SLSA provenance | No | Base only | Base only | Full (Level 3) | > **Note**