From 8aa63cd866c8c987c14eabf4b6626ecb41c47423 Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Fri, 8 May 2026 17:34:23 +0200 Subject: [PATCH 01/12] Security advisory for Linux LPEs copy.fail and Dirty Frag. Signed-off-by: Kurt Garloff --- blog/2026-05-10-kernel-root-exploits.md | 234 ++++++++++++++++++++++++ 1 file changed, 234 insertions(+) create mode 100644 blog/2026-05-10-kernel-root-exploits.md diff --git a/blog/2026-05-10-kernel-root-exploits.md b/blog/2026-05-10-kernel-root-exploits.md new file mode 100644 index 0000000000..db7459d104 --- /dev/null +++ b/blog/2026-05-10-kernel-root-exploits.md @@ -0,0 +1,234 @@ +--- +slug: kernel_local_root_exploits +title: Linux Kernel local root exploits CVE-2026-31431, -43284, -43500 +authors: [garloff] +tags: [security, linux, cve, copy.fail, dirtyfrag] +--- + +## Linux root exploits (Local Privilege Escalation) copy.fail and Dirty Frag + +Unix is designed as a multi-user system. Different users have their own +files and processes and can work without interference from others. +Linux lives in that tradition. It has advanced the concept with namespaces, +where users can also have a private view of networking, the process list, filesystems +and other pieces that are traditionally shared (read-only) on a Unix system, +also including some resource management to enhance performance isolation. + +It is the job of the operating system's kernel to keep the separation safe; in +particular, normal users must not be able to gain system administrator (root) +privileges. Where the kernel fails to ensure this, we have a "local root" +vulnerability, a Local Privilege Escalation (LPE). + +The Linux kernel is a large and complex beast. On one hand it has sophisticated +mechanisms to get really good performance out of increasingly complex hardware. +On the other hand, it comes with a huge variety of device drivers. From time to +time, vulnerabilities are found, reported and fixed. The Linux kernel has several +LPEs per year.
Most of the time, they affect only a small fraction of users +(typically by being located in a device driver or a somewhat exotic feature) +and often they are hard to exploit, needing to win a race condition with +many attempts and sometimes causing crashes in the attempt (which may not go unnoticed). + +We don't report about these LPEs. They get fixed by the upstream Linux kernel +developers, shipped as stable updates by the maintainers and shipped to the end +users via kernel updates from the Linux distributors. + +The currently highly visible Linux kernel issues [copy.fail](https://copy.fail/) +and [Dirty Frag](https://github.com/V4bel/dirtyfrag) are both LPEs (local root +vulnerabilities). The reason we report about them is that they both affect +most Linux users and are easy to exploit. + +Like [Dirty Pipe](https://dirtypipe.cm4all.com/) and, before that, +[Dirty Cow](https://dirtycow.ninja/), both LPEs rely on improper protection +of the page cache. +The Linux kernel keeps contents from file systems in the page cache; when code +gets executed, it is mapped into your virtual memory. When the memory page is +accessed and not yet loaded into your physical memory, a page fault occurs and +the relevant blocks are loaded from disk — or the access is denied and your +program receives `SIGSEGV` and is terminated. Copying pages is costly and the +kernel avoids it to achieve higher performance. If you write to a memory page, +the kernel may receive a page fault on a read-only mapping (that it created to +avoid copying) and only then do the copy to create a private writable copy. +This approach is called copy-on-write (COW) and is common in modern operating +systems. If a page from the page cache is changed in memory, it is also marked +"dirty", so the kernel knows it needs to write the changes back to the file system. + +In copy.fail, the `aead` crypto module does some cryptography in place, avoiding +the need to allocate an extra buffer.
Unfortunately, it requires 4 extra bytes +under some conditions; normally aead is used by IPsec and that location is a +designated place in a network buffer. However, a local attacker can make this +write happen to a page cache page by using `splice`. This way, the copy of the +`sudo` binary in the page cache can be overwritten, allowing an attacker to circumvent the +safeguards there. The attacker can trivially become root — as the page is not +dirtied, no trace of the corruption will be visible on the disk. +[copy.fail](https://copy.fail/) has been assigned CVE-2026-31431. + +In Dirty Frag, a network buffer that is split over several fragments is not +properly handled and the fragmented buffer is not properly COW'ed. The AEAD +crypto operation then again overwrites 4 bytes. A local attacker can trigger +this and again become root very quickly by overwriting the page cache's view of +`sudo`. (Of course other sensitive binary code could be overwritten in memory.) +This can be triggered via the IPsec `esp_input` (for both IPv4 and IPv6) as well +as via the `rxrpc` code. The esp variant requires the privilege to create user +namespaces and then allows for easy 4-byte writes at a time. It has been assigned +CVE-2026-43284. The rxrpc variant overwrites 8 bytes, but as these are crypted, +the user needs to brute force them in order to achieve a controlled result. This +variant was assigned CVE-2026-43500. + +_Exploiting this vulnerability requires access to the system and the ability +to execute code there, thus the categorization as LPE, not RCE (remote +code execution)._ + +## Impact + +Any system where normal (non-root) users can log in to execute code under their +own control is no longer secure: The users can use the publicly available +exploits to gain root privileges and get access to whatever the (virtual) +machine has access to. This means accessing other users' data as well as secrets +that are stored by the system administrator.
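As a rough first check of a system's exposure, an administrator can look whether any of the affected kernel modules is currently loaded. A minimal sketch (note that the kernel loads modules on demand, so an empty result is no all-clear):

```shell
# Rough exposure check: list any of the modules named in this advisory
# that are currently loaded. The kernel auto-loads modules on demand,
# so "not loaded" right now does NOT mean they cannot be loaded later.
if grep -E '^(algif_aead|esp4|esp6|rxrpc) ' /proc/modules 2>/dev/null; then
    echo "affected modules are currently loaded"
else
    echo "no affected module currently loaded (not an all-clear)"
fi
```

This only helps prioritize; a kernel update or the workarounds from the Fixes section are what actually removes the exposure.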
+ Such systems are less common these days than they were 20 years ago. The reason +is that virtualization has become a commodity, so individual users may use their +own virtual machine rather than having access to a shared (virtual) machine +in many scenarios. + +Note that this vulnerability does NOT break the isolation of virtual machines. +VMs remain as securely isolated as they would be without this vulnerability. +These LPEs do NOT establish a virtualization escape. + +There is however a common scenario where individual users and workloads +are running inside a container. The LPE also allows for escaping containers. +Running a shell inside a kubernetes pod allows you to get control of the +kubernetes node and thus of everything that your kubernetes cluster has +access to. Running untrusted code in a container is thus very risky — something +that will affect e.g. CI setups. + +## Fixes + +A fix to the Linux kernel for copy.fail was silently merged at the end of March +2026 (for 7.0-rc7) and has also been merged to the stable kernel series (6.18.22, +6.12.85, 6.6.137). +It just disables the in-place optimization for `algif_aead`. As of early May, +Linux distributors are working to ship fixed kernels. +Without a fixed kernel, a workaround is to place a file `copyfail.conf` in +`/etc/modprobe.d/` with the contents: + +```text +# Temporary workaround for copy.fail CVE-2026-31431 +install algif_aead /bin/false +``` + +The fixes for Dirty Frag are still in development as of May 8. The first fixes +have been merged upstream and released in 7.0.5, 6.18.28, 6.12.87, 6.6.138 but +there is [more to come](https://lwn.net/ml/all/2026050859-ahead-anchovy-05e2@gregkh/). +The responsible disclosure process for Dirty Frag was unfortunately broken, +so the upstream maintainers and the distributors this time did not have time +to carefully prepare and test fixes ahead of the publication of the issue.
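Whether a given kernel version string is at or above one of the fixed upstream releases can be checked with a version-aware comparison. A small sketch (the `is_at_least` helper is our own illustration; distribution kernels often backport fixes without adopting these upstream version numbers, so treat the result only as a rough hint):

```shell
# is_at_least RUNNING FIXED: succeeds if version RUNNING >= FIXED.
# sort -V sorts version strings oldest-first, so RUNNING >= FIXED
# exactly when FIXED comes out on top.
is_at_least() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Example: compare the running kernel (distro suffix stripped) against
# the first fixed release of the 6.12 stable series named above.
kver=$(uname -r | cut -d- -f1)
if is_at_least "$kver" 6.12.87; then
    echo "$kver is at least 6.12.87"
else
    echo "$kver is older than 6.12.87 (or from a different series)"
fi
```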
+ So we have to expect that it will take a few days until all Linux distributors +manage to ship tested fixed kernels. + +A fully effective workaround is again to prevent loading the affected modules +by placing another file `dirtyfrag.conf` in `/etc/modprobe.d/`: + +```text +# Temporary workaround for Dirty Frag CVE-2026-43284, CVE-2026-43500 +# This breaks IPsec +install esp4 /bin/false +install esp6 /bin/false +install rxrpc /bin/false +``` + +Note that these workarounds prevent IPsec from working. + +If a system is suspected to have already been exploited, the system owner can +dispose of the page cache by doing `echo 3 > /proc/sys/vm/drop_caches` as root +and unload the affected modules to prevent re-exploitation. +This will discard the modified page cache pages — however an attacker could have +used the gained privileges to install further backdoors etc. into the system, so +it will need to be reinstalled or fully audited to be considered trustworthy again. + +## SCS IaaS Cloud Provider exposure + +None of the control-plane / management systems in a normal SCS cloud infrastructure +allow logins by normal users. The LPEs thus cannot be exploited. However, +should another exploit be found and used successfully, the LPEs may be used +to escalate privileges further, e.g. breaking out of the containers that run +the OpenStack services or Ceph or some of the management tools, thus removing +one layer of a defense-in-depth concept. + +Cloud Providers are advised to install updated kernels to reestablish the defense. +They can apply the module loading prevention measures in the meantime. Providers +are advised to use this with care on the network nodes — if these need to support +IPsec (e.g. for OpenStack's VPNaaS which is part of neutron), the non-loadable +modules may prevent correct operation.
Please note that there is no known remote +exploit via IPsec, so a temporary trade-off, living without this layer of defense-in-depth +rather than breaking IPsec (which would create security and functionality issues for +customers), may be justified. + +Cloud providers often provide VM images for their customers. +To help customers maintain the security separation in their VMs, +they are advised to watch out for the availability of new distribution images +and provide them short-term via their image service (glance). + +## SCS Kubernetes Provider exposure + +The default implementation with SCS Cluster Stacks is vulnerable; the current +node images have a kernel that is affected by this weakness. This allows a user +to break out of the containers running in the cluster to take over the node +VM and other containers. With Cluster-API and the SCS Cluster Stacks building +on them, creating, updating and removing Kubernetes clusters has become +a commodity; it is thus normal to create clusters per development team and +not share them. In this scenario, the break out may allow a developer to +take over containers from his team mates which is not a real danger in many +setups. For cluster setups across teams or worse for setups where several +clusters that belong to different entities share a control plane, this becomes +more serious. + +Note that the LPE also removes a defense-in-depth mechanism, where a user of +a service running in a k8s cluster exploits a vulnerability to be able to +execute code in the container — the LPEs can then be used to escalate the +privileges further. + +As soon as new kernels become available, the node images will be rebuilt and +shipped with the next cluster stack patch releases. For users, the normal +rolling upgrade will then be all that's needed to be secure against this LPE +again. + +We will update this advisory as soon as new node images are available.
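Until fixed node images are available, the module-loading prevention from the Fixes section can also be scripted. A sketch that writes both workaround files (by default into a temporary directory for a dry run; on a real node, set `TARGET_DIR=/etc/modprobe.d`, run as root, and remember that this breaks IPsec):

```shell
# Write both modprobe.d workaround files. Defaults to a temporary
# directory so the script can be dry-run safely; for the real thing,
# set TARGET_DIR=/etc/modprobe.d and run as root. This breaks IPsec!
target=${TARGET_DIR:-$(mktemp -d)}

cat > "$target/copyfail.conf" <<'EOF'
# Temporary workaround for copy.fail CVE-2026-31431
install algif_aead /bin/false
EOF

cat > "$target/dirtyfrag.conf" <<'EOF'
# Temporary workaround for Dirty Frag CVE-2026-43284, CVE-2026-43500
# This breaks IPsec
install esp4 /bin/false
install esp6 /bin/false
install rxrpc /bin/false
EOF

echo "workaround files written to $target"
```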
+ For highly critical workloads, cluster operators can log in to the nodes +and deploy the mechanisms to prevent loading the above-mentioned modules. +(Again, this will break IPsec.) Note that logging in to nodes in an SCS +Cluster Stack cluster is not possible by default; it requires booting +into a rescue image (if the cluster runs on OpenStack) to inject an ssh +key or to use a tool like kubectl-node-shell with the appropriate +privileges. + +## SCS Cloud users + +Customers of SCS IaaS clouds are responsible for their own VMs. For VMs +that are exposed, they should use the documented workaround inside their VMs, +online-update and reboot into a fixed kernel or redeploy their VMs based +on a fixed upstream image. + +Customers that do their own Kubernetes Container Cluster Management +with e.g. SCS Cluster Stacks are advised to watch out for new node +images and then perform the rolling upgrade. If their use scenario puts +them at increased risk, they are advised to prevent the module loading +in the meantime, as described above. + +## Thanks + +The authors would like to thank Taeyang Lee at Xint (who initiated the +research on copy.fail) and Hyunwoo Kim (@v4bel, who discovered Dirty Frag). +They would also like to thank the upstream Linux kernel maintainers and +Linux distributors for their reliable work on handling the issues and +getting fixes out. + +## Sovereign Cloud Stack Security Contact + +SCS security contact is [security@scs.community](mailto:security@scs.community), as published on +[https://scs.community/.well-known/security.txt](https://scs.community/.well-known/security.txt). + +## Version history + +- Initial Draft, v0.1, 2026-05-08, 17:15 CEST.
From 07fe30109e8bf3dc140bc570299a790c225f44ff Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Sat, 9 May 2026 12:52:48 +0200 Subject: [PATCH 02/12] Instructions how to use kubectl node-shell Signed-off-by: Kurt Garloff --- blog/2026-05-10-kernel-root-exploits.md | 24 ++++++++++++++++-------- 1 file changed, 16 insertions(+), 8 deletions(-) diff --git a/blog/2026-05-10-kernel-root-exploits.md b/blog/2026-05-10-kernel-root-exploits.md index db7459d104..ad26c2480d 100644 --- a/blog/2026-05-10-kernel-root-exploits.md +++ b/blog/2026-05-10-kernel-root-exploits.md @@ -28,14 +28,14 @@ LPEs per year. Most of the time, they affect only a small fraction of users and often they are hard to exploit, needing to win a race condition with many attempts and sometimes causing crashes in trying (which may not go unnoticed). -We don't report about these LPEs. They get fixed by the upstream Linux kernel +We don't normally report about these LPEs. They get fixed by the upstream Linux kernel developers, shipped as stable updates by the maintainers and shipped to the end users via kernel updates from the Linux distributors. The currently highly visible Linux kernel issues [copy.fail](https://copy.fail/) and [Dirty Frag](https://github.com/V4bel/dirtyfrag) are both LPEs (local root vulnerabilities). The reason we report about them is that they both affect -most Linux users and are easy to exploit. +most Linux users (with kernels from the last 9 years) and are easy to exploit. Like [Dirty Pipe](https://dirtypipe.cm4all.com/) and before [Dirty Cow](https://dirtycow.ninja/), both LPEs rely on improper protection @@ -70,13 +70,14 @@ this again become root very quickly by overwriting the page cache's view of This can be triggered via the IPsec `esp_input` (for both IPv4 and IPv6) as well as via the `rxrpc` code. The esp variant requires the privilege to create user namespaces and then allows for easy 4 byte writes at a time. It has been assigned -CVE-2026-43284. 
The rxrpc variant overwrites 8 bytes, but as these are crypted, +CVE-2026-43284. The rxrpc variant overwrites 8 bytes and does not require the +namespace creation privileges, but as these bytes are encrypted, the user needs to brute force them in order to achieve a controlled result. This variant was assigned CVE-2026-43500. -_Exploiting this vulnerability requires access to the system and the ability -to execute code there, thus the categorization as LPE, not RCE (remote -code execution)._ +_Exploiting these vulnerabilities requires access to the system and the ability +to execute code there, thus the categorization as Local Privilege Escalation (LPE), +not Remote Code Execution (RCE)._ ## Impact @@ -112,7 +113,7 @@ Linux distributors are currently underway to ship fixed kernels. Without a fixed kernel, a workaround is to place a file `copyfail.conf` in `/etc/modprobe.d/` with the contents: -```text +```shell # Temporary workaround for copy.fail CVE-2026-31431 install algif_aead /bin/false ``` @@ -129,7 +130,7 @@ manage to ship tested fixed kernels. A fully effective workaround is again to prevent loading the affected modules by placing another file `dirtyfrag.conf` in `/etc/modprobe.d/`: -```text +```shell # Temporary workaround for Dirty Frag CVE-2026-43284, CVE-2026-43500 # This breaks IPsec install esp4 /bin/false
+```bash +for node in $(kubectl get nodes | grep -v '^NAME' | awk '{print $1;}'); do + kubectl node-shell "$node" -- bash -c 'echo -e "# Temporarily disable algif_aead (copy.fail)\ninstall algif_aead /bin/false" > /etc/modprobe.d/disable-aead-copyfail.conf' + kubectl node-shell "$node" -- bash -c 'echo -e "# Temporarily disable esp4, esp6, rxrpc (Dirty Frag)\ninstall esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false" > /etc/modprobe.d/disable-esp46-rxrpc-dirtyfrag.conf' +done +``` + ## SCS Cloud users Customers of SCS IaaS clouds are responsible for their own VMs. For VMs From 5fb82df2ed0bd228182fc543849593b78dc092b3 Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Sat, 9 May 2026 12:55:00 +0200 Subject: [PATCH 03/12] Also add changelog Signed-off-by: Kurt Garloff --- blog/2026-05-10-kernel-root-exploits.md | 1 + 1 file changed, 1 insertion(+) diff --git a/blog/2026-05-10-kernel-root-exploits.md b/blog/2026-05-10-kernel-root-exploits.md index ad26c2480d..d3cce189f6 100644 --- a/blog/2026-05-10-kernel-root-exploits.md +++ b/blog/2026-05-10-kernel-root-exploits.md @@ -240,3 +240,4 @@ SCS security contact is [security@scs.community](mailto:security@scs.community), ## Version history - Initial Draft, v0.1, 2026-05-08, 17:15 CEST. +- kubectl node-shell instructions, v0.2, 2026-05-09, 12:45 CEST. From 8dab1d9ad83a8e31f1cd295af76e605ee1f79ab0 Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Sat, 9 May 2026 12:57:40 +0200 Subject: [PATCH 04/12] More on stable kernels. Signed-off-by: Kurt Garloff --- blog/2026-05-10-kernel-root-exploits.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/blog/2026-05-10-kernel-root-exploits.md b/blog/2026-05-10-kernel-root-exploits.md index d3cce189f6..9252c5a696 100644 --- a/blog/2026-05-10-kernel-root-exploits.md +++ b/blog/2026-05-10-kernel-root-exploits.md @@ -119,8 +119,9 @@ install algif_aead /bin/false ``` The fixes for Dirty Frag are still in development as of May 8.
The first fixes -have been merged upstream and released in 7.0.5, 6.18.28, 6.12.87, 6.6.138 but -there is [more to come](https://lwn.net/ml/all/2026050859-ahead-anchovy-05e2@gregkh/). +have been merged upstream and released in 7.0.5, 6.18.28, 6.12.87, 6.6.138, +6.1.172, 5.15.206 and 5.10.255 but there is +[more to come for rxrpc](https://lwn.net/ml/all/2026050859-ahead-anchovy-05e2@gregkh/). The responsible disclosure process for Dirty Frag was unfortunately broken, so the upstream maintainers and the distributors this time did not have time to carefully prepare and test fixes ahead of the publication of the issue. From 80acdfc77d5ddf6f4019eca740159f3800eec849 Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Sat, 9 May 2026 13:35:04 +0200 Subject: [PATCH 05/12] Mention that we patched community infra. Signed-off-by: Kurt Garloff --- blog/2026-05-10-kernel-root-exploits.md | 26 +++++++++++++++++-------- 1 file changed, 18 insertions(+), 8 deletions(-) diff --git a/blog/2026-05-10-kernel-root-exploits.md b/blog/2026-05-10-kernel-root-exploits.md index 9252c5a696..0802d23c50 100644 --- a/blog/2026-05-10-kernel-root-exploits.md +++ b/blog/2026-05-10-kernel-root-exploits.md @@ -5,7 +5,7 @@ authors: [garloff] tags: [security, linux, cve, copy.fail, dirtyfrag] --- -## Linux root exploits (Local Privilege Escalation) copy.fail and Dirty Frag +## Linux root exploits (Local Privilege Escalation) Unix is designed as a multi-user system. Different users have their own files and processes and can work without interference from others. @@ -32,6 +32,8 @@ We don't normally report about these LPEs. They get fixed by the upstream Linux developers, shipped as stable updates by the maintainers and shipped to the end users via kernel updates from the Linux distributors. +## copy.fail and Dirty Frag + The currently highly visible Linux kernel issues [copy.fail](https://copy.fail/) and [Dirty Frag](https://github.com/V4bel/dirtyfrag) are both LPEs (local root vulnerabilities). 
The reason we report about them is that they both affect @@ -85,12 +87,12 @@ Any system where normal (non-root) users can log in to execute code under their own control is no longer secure: The users can use the publicly available exploits to gain root privileges and get access to whatever the (virtual) machine has access to. This means accessing other users' data as well as secrets -that are stored by the system administrator. +that may be stored by the system administrator. Such systems are less common these days than they were 20 years ago. The reason -is that virtualization has become a commodity, so individual users may use their -own virtual machine rather than having access to a shared (virtual) machine -in many scenarios. +is that virtualization has become a commodity, so in many scenarios, individual +users may use their own virtual machine rather than having access to a shared +(virtual) machine. @@ -176,12 +178,14 @@ The default implementation with SCS Cluster Stacks is vulnerable; the current node images have a kernel that is affected by this weakness. This allows a user to break out of the containers running in the cluster to take over the node -VM and other containers. With Cluster-API and the SCS Cluster Stacks building +VM and other containers. + +With Cluster-API and the SCS Cluster Stacks building on them, creating, updating and removing Kubernetes clusters has become a commodity; it is thus normal to create clusters per development team and not share them. In this scenario, the break out may allow a developer to -take over containers from his team mates which is not a real danger in many -setups.
For cluster setups across teams or worse for setups where several +take over containers from their teammates, which may not constitute a real danger +in many setups. For cluster setups across teams or worse for setups where several clusters that belong to different entities share a control plane, this becomes more serious. @@ -225,6 +229,11 @@ images and then perform the rolling upgrade. If their use scenario puts them at increased risk, they are advised to prevent the module loading in the meantime, as described above. +## SCS community infrastructure + +The SCS community infrastructure was secured on May 8 by disabling the +relevant modules. + ## Thanks The authors would like to thank Taeyang Lee at Xint (who initiated the research on copy.fail) and Hyunwoo Kim (@v4bel, who discovered Dirty Frag). They would also like to thank the upstream Linux kernel maintainers and Linux distributors for their reliable work no handling the issues and getting fixes out. ## Sovereign Cloud Stack Security Contact SCS security contact is [security@scs.community](mailto:security@scs.community), as published on [https://scs.community/.well-known/security.txt](https://scs.community/.well-known/security.txt). ## Version history - Initial Draft, v0.1, 2026-05-08, 17:15 CEST. - kubectl node-shell instructions, v0.2, 2026-05-09, 12:45 CEST. +- Mention successful patching of community infra, v0.3, 2026-05-09, 13:30 CEST. From 7f71bd1a4971f94d878fd43c0f7a37fc4285d9c4 Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Sat, 9 May 2026 13:50:17 +0200 Subject: [PATCH 06/12] Use node v20. Signed-off-by: Kurt Garloff --- .nvmrc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.nvmrc b/.nvmrc index 0828ab7947..9a2a0e219c 100644 --- a/.nvmrc +++ b/.nvmrc @@ -1 +1 @@ -v18 \ No newline at end of file +v20 From 01702bfec565adb4ae261ba62118cd693a9f1d21 Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Sat, 9 May 2026 13:50:17 +0200 Subject: [PATCH 07/12] Use node v20.
Signed-off-by: Kurt Garloff --- .nvmrc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.nvmrc b/.nvmrc index 0828ab7947..9a2a0e219c 100644 --- a/.nvmrc +++ b/.nvmrc @@ -1 +1 @@ -v18 \ No newline at end of file +v20 From b40bea4d4ae3d7125dbc9e63b31cff0dfb4e78c5 Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Sat, 9 May 2026 14:09:35 +0000 Subject: [PATCH 08/12] Fix the "here" links that are complained about. Signed-off-by: Kurt Garloff --- blog/2026-01-22-introducing-docs-blog.md | 4 ++-- community/cloud-resources/cloud-resources.md | 6 +++--- community/contribute/adding-docs-guide.md | 2 +- community/contribute/styleguides/ansible_styleguide.md | 2 +- community/tools/zuul.md | 6 +++--- 5 files changed, 10 insertions(+), 10 deletions(-) diff --git a/blog/2026-01-22-introducing-docs-blog.md b/blog/2026-01-22-introducing-docs-blog.md index 7e21e22019..2df2e36e5e 100644 --- a/blog/2026-01-22-introducing-docs-blog.md +++ b/blog/2026-01-22-introducing-docs-blog.md @@ -41,8 +41,8 @@ it assembles documentation from various places. Unlike the Forum, it does not ne to prioritize neutrality as top priority, but benefits and prefers those projects that contribute useful content to it. -We have decided to use docusaurus' blog feature to publish blog articles -[here](https://docs.scs.community/blog/). We appreciate contributions. +We have decided to use docusaurus' blog feature to publish +[blog articles](https://docs.scs.community/blog/). We appreciate contributions. ## Old blog content diff --git a/community/cloud-resources/cloud-resources.md b/community/cloud-resources/cloud-resources.md index 0d12659ff9..77d37798e3 100644 --- a/community/cloud-resources/cloud-resources.md +++ b/community/cloud-resources/cloud-resources.md @@ -34,7 +34,7 @@ Once the PR has been accepted, [configure your VPN access by following the steps ### SCS Hardware Landscape Usage -More information on how to use the Hardware Landscape can be found [here](hardware-landscape.md). 
+Here is [more information on how to use the Hardware Landscape](hardware-landscape.md). ## SCS2 @ plusserver @@ -44,7 +44,7 @@ To apply for a new project, please create a pull request against this document ( ### SCS2 Usage -A brief guide on how to use the resources provided by plusserver GmbH can be found [here](plusserver-gx-scs.md). +Here is a [brief guide on how to use the resources provided by plusserver GmbH](plusserver-gx-scs.md). ### SCS2 Users @@ -101,7 +101,7 @@ To apply for a new project, please create a pull request against this document ( ### Wavestack Usage -A brief guide on how to use the resources provided by Wavecon GmbH can be found [here](wavestack.md). +Here is [a brief guide on how to use the resources provided by Wavecon GmbH](wavestack.md). ### Wavestack Service Users diff --git a/community/contribute/adding-docs-guide.md b/community/contribute/adding-docs-guide.md index 0dfe141991..7e0c0a4ef1 100644 --- a/community/contribute/adding-docs-guide.md +++ b/community/contribute/adding-docs-guide.md @@ -53,7 +53,7 @@ File a Pull Request within the [docs](https://github.com/SovereignCloudStack/doc Once it is approved and merged, a postinstall script will be triggered within the build process. This initiates downloading, copy and distilling which results in this static generated [documentation](https://docs.scs.community) page – now with your content. -An explanation on how the sync & distill workflow and a guide on how to test it in a local development environment you will find [here](https://github.com/SovereignCloudStack/docs/blob/main/community/contribute/docs-workflow-explanation.md). +Here is an explanation on how the [sync & distill workflow works and a guide on how to test it in a local development environment](https://github.com/SovereignCloudStack/docs/blob/main/community/contribute/docs-workflow-explanation.md). ## 2. 
Operational documentation diff --git a/community/contribute/styleguides/ansible_styleguide.md b/community/contribute/styleguides/ansible_styleguide.md index 060be3851e..74bb4e4d46 100644 --- a/community/contribute/styleguides/ansible_styleguide.md +++ b/community/contribute/styleguides/ansible_styleguide.md @@ -15,7 +15,7 @@ disable the package_latest rule. ## Key Order -To check the key order we use our own rule. This can be found [here](https://github.com/osism/zuul-jobs/tree/main/roles/ansible-lint/files). +To check the key order we use [our own rule](https://github.com/osism/zuul-jobs/tree/main/roles/ansible-lint/files). ### Positioning and use of the become directive diff --git a/community/tools/zuul.md b/community/tools/zuul.md index f2abba96c5..93d289af88 100644 --- a/community/tools/zuul.md +++ b/community/tools/zuul.md @@ -10,14 +10,14 @@ use Zuul as our main pipeline solution. - Make Zuul aware of your repository in this [repo](https://github.com/SovereignCloudStack/zuul_deployment) - Create a file _.zuul.yaml_ - - An example can be found [here](https://github.com/SovereignCloudStack/zuul-sandbox/blob/main/.zuul.yaml) + - Here is [an example](https://github.com/SovereignCloudStack/zuul-sandbox/blob/main/.zuul.yaml) - You can have a job section containing _self-defined_ jobs which you need to write on your own - You have to have a project section containing - the default-branch name - the merge-mode which should be used to auto-merge - the jobs to run in each pipeline (gh_check, gh_gate, gh_post, gh_tag) - - these pipelines are triggered by events which can be looked up [here](https://github.com/SovereignCloudStack/zuul_config/blob/main/zuul.d/gh_pipelines.yaml) - - some default jobs can be found [here](https://opendev.org/zuul/zuul-jobs/src/branch/master/playbooks) + - these [pipelines](https://github.com/SovereignCloudStack/zuul_config/blob/main/zuul.d/gh_pipelines.yaml) are triggered by events + - here are [some default 
jobs](https://opendev.org/zuul/zuul-jobs/src/branch/master/playbooks) - If you have _self-defined_ jobs, you need to create a folder _.playbooks_ - this folder contains ansible playbooks which will be triggered From 1b87d6166a7c9158aec7084b64df056a88a3ec53 Mon Sep 17 00:00:00 2001 From: Kurt Garloff Date: Sat, 9 May 2026 14:43:31 +0000 Subject: [PATCH 09/12] Fix links. Signed-off-by: Kurt Garloff --- blog/2026-01-22-introducing-docs-blog.md | 2 +- community/contribute/adding-docs-guide.md | 2 +- community/contribute/styleguides/ansible_styleguide.md | 2 +- community/tools/zuul.md | 8 +++++--- 4 files changed, 8 insertions(+), 6 deletions(-) diff --git a/blog/2026-01-22-introducing-docs-blog.md b/blog/2026-01-22-introducing-docs-blog.md index 2df2e36e5e..ede9ae77dc 100644 --- a/blog/2026-01-22-introducing-docs-blog.md +++ b/blog/2026-01-22-introducing-docs-blog.md @@ -17,7 +17,7 @@ it was somewhat hard to distill the various aspects and goals of SCS though. With the end of the funded project, we split the activities into different organizations with distinct goals: -1. The [Forum SCS Standards](https://sovereigncloudstack.org/en/about-scs/network/) is +1. The [Forum SCS Standards](https://sovereigncloudstack.org/en/about-scs/) is responsible for governing the standardization process. While it pulls significant input from the various software projects that belong to the SCS ecosystem, it is neutral towards them beyond the preference for standards compliance. This reflects that there can and diff --git a/community/contribute/adding-docs-guide.md b/community/contribute/adding-docs-guide.md index 7e0c0a4ef1..93c6f69332 100644 --- a/community/contribute/adding-docs-guide.md +++ b/community/contribute/adding-docs-guide.md @@ -10,7 +10,7 @@ Determine the type of your documentation and click to continue. 2. [Operational documentation](#2-operational-documentation) 3. 
[Community documentation](#3-community-documentation) -If unsure don't hestitate to ask us at [Matrix](https://github.com/SovereignCloudStack/docs/blob/main/community/communication/matrix.md) +If unsure don't hesitate to ask us at [Matrix](https://docs.scs.community/community/tools/matrix) ## 1. Technical Documentation diff --git a/community/contribute/styleguides/ansible_styleguide.md b/community/contribute/styleguides/ansible_styleguide.md index 74bb4e4d46..9730ea8385 100644 --- a/community/contribute/styleguides/ansible_styleguide.md +++ b/community/contribute/styleguides/ansible_styleguide.md @@ -15,7 +15,7 @@ disable the package_latest rule. ## Key Order -To check the key order we use [our own rule](https://github.com/osism/zuul-jobs/tree/main/roles/ansible-lint/files). +To check the key order we use our [own rule](https://github.com/osism/zuul-jobs/tree/main/roles/ansible-lint/). ### Positioning and use of the become directive diff --git a/community/tools/zuul.md b/community/tools/zuul.md index 93d289af88..db7e2adfe5 100644 --- a/community/tools/zuul.md +++ b/community/tools/zuul.md @@ -8,15 +8,17 @@ use Zuul as our main pipeline solution. ### How to make a repo use Zuul -- Make Zuul aware of your repository in this [repo](https://github.com/SovereignCloudStack/zuul_deployment)
+
+- Make Zuul aware of your repository in this [repo](https://github.com/SovereignCloudStack/zuul-scs-jobs)
 - Create a file _.zuul.yaml_
-  - Here is [an example](https://github.com/SovereignCloudStack/zuul-sandbox/blob/main/.zuul.yaml)
+  - Here is [an example](https://github.com/SovereignCloudStack/zuul-config/)
 - You can have a job section containing _self-defined_ jobs which you need to write on your own
 - You have to have a project section containing
   - the default-branch name
   - the merge-mode which should be used to auto-merge
   - the jobs to run in each pipeline (gh_check, gh_gate, gh_post, gh_tag)
-  - these [pipelines](https://github.com/SovereignCloudStack/zuul_config/blob/main/zuul.d/gh_pipelines.yaml) are triggered by events
+  - these [pipelines](https://github.com/SovereignCloudStack/zuul-config/blob/main/zuul.d/) are triggered by events
 - there are [some default jobs](https://opendev.org/zuul/zuul-jobs/src/branch/master/playbooks)
 - If you have _self-defined_ jobs, you need to create a folder _.playbooks_
   - this folder contains ansible playbooks which will be triggered

From 8224caf313dd9e5523a543702a13957315bbc9e1 Mon Sep 17 00:00:00 2001
From: Kurt Garloff
Date: Sat, 9 May 2026 14:49:14 +0000
Subject: [PATCH 10/12] One more link.

Signed-off-by: Kurt Garloff
---
 community/contribute/adding-docs-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/community/contribute/adding-docs-guide.md b/community/contribute/adding-docs-guide.md
index 93c6f69332..92cfac9025 100644
--- a/community/contribute/adding-docs-guide.md
+++ b/community/contribute/adding-docs-guide.md
@@ -68,6 +68,6 @@ File a Pull Request within the [docs](https://github.com/SovereignCloudStack/doc
 
 ## 3. Community documentation
 
-Your doc files contain knowledge regarding our community? Choose the right directory. If unsure don't hestitate to ask us at [Matrix](https://github.com/SovereignCloudStack/docs/blob/main/community/communication/matrix.md).
+Your doc files contain knowledge regarding our community? Choose the right directory. If unsure don't hesitate to ask us at [Matrix](https://docs.scs.community/community/tools/matrix).
 
 File a Pull Request within the `docs` repository and add your markdown files to the fitting directory.

From 991241cac41ea0f2910a1f7101144820b35156e2 Mon Sep 17 00:00:00 2001
From: Kurt Garloff
Date: Sat, 9 May 2026 19:45:33 +0200
Subject: [PATCH 11/12] Update blog/2026-05-10-kernel-root-exploits.md

Co-authored-by: Felix Kronlage-Dammers
Signed-off-by: Kurt Garloff
---
 blog/2026-05-10-kernel-root-exploits.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/blog/2026-05-10-kernel-root-exploits.md b/blog/2026-05-10-kernel-root-exploits.md
index 0802d23c50..867bcd73fa 100644
--- a/blog/2026-05-10-kernel-root-exploits.md
+++ b/blog/2026-05-10-kernel-root-exploits.md
@@ -245,7 +245,7 @@ getting fixes out.
 
 ## Sovereign Cloud Stack Security Contact
 
 SCS security contact is [security@scs.community](mailto:security@scs.community), as published on
-[https://scs.community/.well-known/security.txt](https://scs.community/.well-known/security.txt).
+[https://sovereigncloudstack.org/.well-known/security.txt](https://sovereigncloudstack.org/.well-known/security.txt).
 
 ## Version history

From 0798988fc9cc4b1cbf740adec30dc1aabc0929fc Mon Sep 17 00:00:00 2001
From: Kurt Garloff
Date: Sat, 9 May 2026 17:49:37 +0000
Subject: [PATCH 12/12] Release. And fix info on the disclosure process.

Signed-off-by: Kurt Garloff
---
 blog/2026-05-10-kernel-root-exploits.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/blog/2026-05-10-kernel-root-exploits.md b/blog/2026-05-10-kernel-root-exploits.md
index 867bcd73fa..28716e09ae 100644
--- a/blog/2026-05-10-kernel-root-exploits.md
+++ b/blog/2026-05-10-kernel-root-exploits.md
@@ -124,7 +124,8 @@ The fixes for Dirty Frag are still in development as of May 8.
 The first fixes have been merged upstream and released in 7.0.5, 6.18.28, 6.12.87,
 6.6.138, 6.1.172, 5.15.206 and 5.10.255 but there is
 [more to come for rxrpc](https://lwn.net/ml/all/2026050859-ahead-anchovy-05e2@gregkh/).
-The responsible disclosure process for Dirty Frag was unfortunately broken,
+The responsible disclosure process for Dirty Frag unfortunately failed due to the
+[patches being spotted](https://www.openwall.com/lists/oss-security/2026/05/07/12),
 so the upstream maintainers and the distributors this time did not have time
 to carefully prepare and test fixes ahead of the publication of the issue.
 So we have to expect that it will take a few days until all Linux distributor
@@ -252,3 +253,4 @@ SCS security contact is [security@scs.community](mailto:security@scs.community),
 - Initial Draft, v0.1, 2026-05-08, 17:15 CEST.
 - kubectl node-shell instructions, v0.2, 2026-05-09, 12:45 CEST.
 - Mention successful patching of community infra, v0.3, 2026-05-09, 13:30 CEST.
+- Correct facts on the failure of the responsible disclosure. Release as v1.0, 2026-05-09, 20:00 CEST.