
GPU offloading preconditioner#4953

Draft
dsroberts wants to merge 15 commits into firedrakeproject:main from dsroberts:dsroberts/offload-pc

Conversation

@dsroberts
Contributor

Description

First pass at firedrake-configure for GPU-enabled PETSc builds, as well as a corresponding GitHub Actions workflow. An optional --gpu-arch flag has been added, which adds all the necessary configuration to go from a fresh Ubuntu installation to a working Firedrake build while (mostly) following the existing build instructions. OffloadPC itself is more or less unchanged from @Olender's work for now, but with a few extra checks for whether GPU offload is available. I expect it will take a few iterations to get this going under CI, so once that's up and running reliably we can look at further development of the OffloadPC functionality and testing.

I've tried to put this together in a way that can be expanded upon fairly easily. We're interested in ROCm/HIP as well as CUDA, so there are a couple of dictionaries that just have a cuda key for now and will be expanded over time. I also thought that OffloadPC should be a no-op with a warning if there are any issues around GPUs, rather than crashing out. I think this is the right choice for workflow portability and heterogeneous systems, but it would be just as easy to raise exceptions there.

First pass at firedrake-configure for GPU-enabled PETSc builds, as well
as a GitHub Actions test. OffloadPC is more or less unchanged from
@Olender's work, but with a few extra checks for whether GPU offload is
available. No dedicated GPU tests yet. CUDA only so far.
Have offload preconditioner warn instead of crash if a GPU could not be
initialised for whatever reason. Remove --with-cuda-arch flag from PETSc
config in firedrake-configure.
Comment on lines +86 to +87
y_cu = PETSc.Vec() # begin
y_cu.createCUDAWithArrays(y)
Contributor

Suggested change
y_cu = PETSc.Vec() # begin
y_cu.createCUDAWithArrays(y)
y_cu = PETSc.Vec().createCUDAWithArrays(y)

Comment on lines +88 to +90
x_cu = PETSc.Vec()
# Passing a vec into another vec doesnt work because original is locked
x_cu.createCUDAWithArrays(x.array_r)
Contributor

Suggested change
x_cu = PETSc.Vec()
# Passing a vec into another vec doesnt work because original is locked
x_cu.createCUDAWithArrays(x.array_r)
# Passing a vec into another vec doesnt work because original is locked
x_cu = PETSc.Vec().createCUDAWithArrays(x.array_r)

with PETSc.Log.Event("Event: solve"):
self.pc.apply(x_cu, y_cu)
# Calling data to synchronize vector
tmp = y_cu.array_r # noqa: F841
Contributor

Suggested change
tmp = y_cu.array_r # noqa: F841
y_cu.array_r


def initialize(self, pc):
# Check if our PETSc installation is GPU enabled
super().initialize(pc)
Contributor

Should we be calling AssembledPC.initialize()? It seems to me that this will trigger reassembly and we are not really making use of it.

Contributor Author

I'm working my way through understanding how PETSc manages GPU devices, and it does appear that when I drop super().initialize(pc) I get:

[0] MatConvert() at /g/data/fp50/admin/gpu-testing/apps/petsc/3.24.4/src/mat/interface/matrix.c:4390
[0] Object is in wrong state
[0] Not for unassembled matrix

So the matrix does have to be assembled, but the previous PC might have performed assembly, so it wouldn't be necessary in that case. I think this might be an argument in favour of having device offload as an optional step in AssembledPC?
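The assemble-before-convert requirement could be made explicit rather than relying on the previous PC; a rough sketch (the helper name is hypothetical, and the mat argument is assumed to expose petsc4py-style isAssembled/assemble/convert methods):

```python
def ensure_assembled_then_convert(mat, target_type="aijcusparse"):
    """Convert a matrix to a device type, assembling first if needed.

    MatConvert fails with "Not for unassembled matrix" otherwise, so only
    trigger assembly when the previous PC has not already performed it.
    """
    if not mat.isAssembled():
        mat.assemble()
    return mat.convert(mat_type=target_type)
```

This avoids a redundant reassembly when an earlier preconditioner in the chain has already assembled the operator.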

return mat_type


class OffloadPC(AssembledPC):
Contributor

Should the name be more precise and refer to "GPU"?

P_cu.setNearNullSpace(P.getNearNullSpace())

# Update preconditioner with GPU matrix
self.pc.setOperators(A, P_cu)
Contributor

@pbrubeck Mar 10, 2026

P lives in the GPU but A still lives in the CPU. When pc_type = ksp, the more sensible thing would be to have both A and P in the GPU, don't you agree?

Contributor

@pbrubeck Mar 10, 2026

Is this because A cannot live in the GPU if it is matfree? Do we benefit much from A being matfree if P is not matfree?

Contributor Author

@dsroberts Mar 17, 2026

I genuinely don't know. On the surface it makes sense, but are there cases in which A or P can't be offloaded? Do we need to do an if not isinstance(A, ImplicitMatrix): check before attempting to offload?

Contributor

We certainly want PETSc options to give the user finer control over what to offload.

If the original A and P point to the same Mat instance then it is reasonable to offload them both (as the same instance). This could be the default.
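The shared-instance default suggested here is simple to express; a sketch (offload_operators is a hypothetical helper, and convert stands in for whatever matrix-offload routine ends up being used):

```python
def offload_operators(A, P, convert):
    """Offload P; if A and P are the same Mat instance, reuse the
    converted object for both rather than offloading twice."""
    P_dev = convert(P)
    A_dev = P_dev if A is P else A  # by default, a distinct A stays on the host
    return A_dev, P_dev
```

PETSc options could then override this default, e.g. to force offloading a distinct A as well.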

nested_parameters = {
"pc_type": "ksp",
"ksp": {
"ksp_type": ksp_type,
Contributor

ksp_max_it should be capped, to ensure that we are not taking an insane number of iterations.

L = inner(grad(u), grad(v)) * dx
R = inner(v, f) * dx

# Dirichlet boundary on all sides to 0
Contributor

To catch potential issues with BCs, it'd be good to prescribe non-homogeneous BCs

@pbrubeck
Contributor

Thanks for this, it looks really promising. I have some suggestions about the interface.

It seems to me that the current implementation might be abusing AssembledPC, but I believe it'd be possible to add code to AssembledPC to convert the matrix and vectors back and forth if assembled_mat_type: "aijcusparse" is set. Hopefully, having a single class makes the logic clearer and prevents undesired reassembly of the operators.

Once AssembledPC supports aijcusparse, we can implement a separate class as a shortcut, so users do not need to set the assembled_mat_type option.
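If AssembledPC grew such an option, the user-facing solver parameters might look like this (the assembled_mat_type key is the hypothetical knob discussed above, not an existing option):

```python
# Hypothetical solver options: ask AssembledPC to assemble the
# preconditioner operator directly into a CUDA sparse matrix.
parameters = {
    "mat_type": "matfree",
    "ksp_type": "cg",
    "pc_type": "python",
    "pc_python_type": "firedrake.AssembledPC",
    "assembled_mat_type": "aijcusparse",
}
```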

Comment on lines +17 to +19
for device, mat_type in device_mat_type_map.items():
if device in petsctools.get_external_packages():
break
Contributor

This feels like it should be a utility function. It's very opaque.
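A named helper along these lines would make the intent clearer; a sketch, with the available-packages lookup passed in rather than calling petsctools.get_external_packages() directly (the function name is hypothetical):

```python
def device_mat_type(available_packages, device_mat_type_map):
    """Return the matrix type for the first GPU backend PETSc was built
    with, or None if no entry in the map matches."""
    for device, mat_type in device_mat_type_map.items():
        if device in available_packages:
            return mat_type
    return None
```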

run: |
curl -fsSL $( python3 ./firedrake-repo/scripts/firedrake-configure --gpu-arch cuda --show-extra-repo-key ) | apt-key add -
python3 ./firedrake-repo/scripts/firedrake-configure --gpu-arch cuda --show-extra-repo-file-contents > \
/etc/apt/sources.list.d/$( python3 ./firedrake-repo/scripts/firedrake-configure --gpu-arch cuda --show-extra-repo-file-name )
Contributor

It's kinda horrid that we have to invoke firedrake-configure 3 times. Is there a nicer way to do this?

Contributor Author

Agreed, it's disgusting. I was following the procedure Nvidia uses to build their Docker containers. I'll see if I can adapt the procedure they document for end users, and hope it's similar enough to AMD's procedure for HIP.

CUDA: (
"cuda.list",
f"deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/{CUDA_ARCH_MAP.get(platform.machine(), platform.machine())} /",
f"https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/{CUDA_ARCH_MAP.get(platform.machine(), platform.machine())}/3bf863cc.pub",
Contributor

This includes the magic string '3bf863cc'. Can there be a comment saying how we can update this?

Member

I am very uncomfortable with this. Unless I have misunderstood what is going on this is potentially introducing a security risk for both us and our users.

Should we really be telling our users (many of whom will blindly follow the installation guide) to add random authentication keys to their package managers? And even if we are happy to do that, should we be doing it through a deprecated utility that will be removed after Ubuntu 24.04 (according to the apt-key man page)?

Even if we can make sure that this specific repo at this specific time is safe, are we confident we can reliably keep the links up to date? And if we allow one repository, it's only a matter of time before someone adds another. Can we keep up to date with that too?

AFAIK we currently only make users install packages that are already available through their package manager (so the security responsibility is predominantly with them), or through PETSc, which is a much bigger library with more developer and maintainer resources.

I think we need to discuss this at the meeting and work out a) what size of risk it really is, and b) if it is a risk, the best way around this if offloading really does need libraries that can't be installed via package manager or PETSc.

Contributor Author

As above, will rework this part.

Member

I think we shouldn't be silently adding software sources behind users' backs. This seems bad. Potentially we should just fail and tell users to go do it themselves.

Contributor Author

firedrake-configure doesn't really 'do' anything here. Like the rest of the firedrake-configure steps, it just prints a string to give to another command. A user still has to run curl and then dpkg to install it. Once this is in, I figured it would be added as an optional step to the installation instructions. If a user skips that step and then attempts to install system packages with --gpu-arch cuda, that would fail, as apt would not be able to find any packages from the Nvidia repository.

(LINUX_APT_AARCH64, ARCH_COMPLEX): LINUX_APT_PACKAGES,
(MACOS_HOMEBREW_ARM64, ARCH_DEFAULT): MACOS_HOMEBREW_PACKAGES,
(MACOS_HOMEBREW_ARM64, ARCH_COMPLEX): MACOS_HOMEBREW_PACKAGES,
(LINUX_APT_X86_64, ARCH_DEFAULT, NO_GPU): LINUX_APT_PACKAGES,
Contributor

I think I'd prefer it if packages were purely additive. So add libsuperlu-dev etc here instead of filtering them out above.

PetscSpecsDeltaDictType = dict[GPUArch, PetscSpecsDictType]

# Suitesparse and SuperLU-DIST must be built from source to enable GPU features
PETSC_EXTERNAL_PACKAGE_SPECS_DELTA_GPU: PetscSpecsDeltaDictType = {
Contributor

Same comment about an additive approach. I want firedrake-configure to be as dumb as possible.

Comment on lines +518 to +525
# Use a different mirror to fetch apt packages from to get around
# temporary outage.
# (https://askubuntu.com/questions/1549622/problem-with-archive-ubuntu-com-most-of-the-servers-are-not-responding)
# The mirror was chosen from https://launchpad.net/ubuntu/+archivemirrors.
- name: Configure apt
run: |
sed -i 's|http://archive.ubuntu.com/ubuntu|http://www.mirrorservice.org/sites/archive.ubuntu.com/ubuntu/|g' /etc/apt/sources.list.d/ubuntu.sources
apt-get update
Member

@connorjward can't we get rid of this now?

Contributor

I expect so

@dsroberts marked this pull request as draft March 10, 2026 22:56
@dsroberts
Contributor Author

I should have marked this as a draft from the start; apologies for missing that. @pbrubeck thanks for the detailed feedback. I like your suggested approach of having AssembledPC perform the copy to device memory if you've asked for it. In my mind, that's more intuitive than having a separate 'preconditioner'. If we could do it in a way that adds a param to AssembledPC, we could then have the user decide what to do if a GPU is unavailable, rather than guessing as we are now. E.g.

    parameters = {
        "ksp_type": "preonly",
        "pc_type": "python",
        "pc_python_type": "firedrake.AssembledPC",
        ...
        "assembled_pc_offload": "try"
    }

Where offload can take the values never (i.e. do not offload, even if available - default setting), try (attempt to offload; continue but warn if not possible) or always (attempt to offload and raise an exception if it fails). I'd let the implementation pick the device matrix type, as I want to expand this to AMD devices as well (mat_type=aijhipsparse) - we have hardware available to test this. Anyway, that all depends on getting the build and test procedure working. There is still much to do there.
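The three proposed values could be dispatched roughly like this (a sketch of the proposal only; neither the option name nor these semantics exist in Firedrake yet):

```python
import warnings


def should_offload(option, gpu_available):
    """Decide whether to offload under the proposed never/try/always option."""
    if option == "never":
        return False
    if gpu_available:
        return True
    if option == "always":
        raise RuntimeError("GPU offload requested but no device is available")
    # 'try': fall back to the CPU with a warning
    warnings.warn("GPU offload unavailable; continuing on the CPU")
    return False
```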

Addresses review comments on firedrake-configure, core.yml and
test_poisson_offloading_pc.py. Updates to firedrake-check and skipnogpu
marker still to come.
@dsroberts added the gpu (For special runner) label Mar 11, 2026
Move device matrix type selection to utils.py. Introduce skipnogpu test
marker.
@dsroberts
Contributor Author

The issue in the CI seems to indicate that the host that the GPU tests attempted to run on did not have Nvidia drivers installed. Is it possible there are runners with the gpu tag that don't have GPUs, or drivers are missing or something?

@connorjward
Contributor

The issue in the CI seems to indicate that the host that the GPU tests attempted to run on did not have Nvidia drivers installed. Is it possible there are runners with the gpu tag that don't have GPUs, or drivers are missing or something?

I've passed this on so it should get investigated and fixed shortly.

@Olender
Contributor

Olender commented Mar 12, 2026

The issue in the CI seems to indicate that the host that the GPU tests attempted to run on did not have Nvidia drivers installed. Is it possible there are runners with the gpu tag that don't have GPUs, or drivers are missing or something?

Have you tried setting up PETSc inside the Nvidia-provided CUDA Docker image locally and running the tests, instead of on your local PC installation, just to check? I attempted this before (at the time it was PETSc installed inside a clean cuda:12.9 Ubuntu Docker image from Nvidia, though I can check my Dockerfile to confirm if needed), and I couldn't get PETSc to work properly in that setup.

When passing the --download-mpi flag to PETSc and then checking ompi_info, I saw the following MPI extensions: affinity, cuda, ftmpi, rocm, shortfloat. It seemed odd that both rocm and cuda appeared, especially since this was inside the CUDA container with no other prior installations.

Could you check whether you’re seeing the same extensions on your side?

Note: this was on an older PETSc version

@dsroberts
Contributor Author

Thanks for the info @Olender. My testing setup is a little more complicated by necessity, as I don't have access to a system that I can both run Docker on and that has Nvidia hardware. I'm testing the build set-up from the base Ubuntu image in Docker, as in the longer term we're trying to augment the current installation instructions to include GPU builds. For that reason, I'm cross-compiling, so I download the full CUDA software stack into the container (stuck at 12.9 until this PETSc issue is resolved) and build PETSc, though I can't run make check due to lack of hardware. In the meantime, for actual code development and testing I'm using a bare-metal build on an HPC system we have access to.
In modifying firedrake-configure, I'm hoping that we can stick with apt-provided OpenMPI, as this minimises the differences between the GPU and non-GPU installation steps. It won't be 'GPU-aware', though I can't imagine many people installing Firedrake locally need GPU-direct RDMA. If that doesn't work, I'd probably want to bring in a pre-configured OpenMPI via HPC-X rather than have PETSc build it. That would be a last resort if neither of the previous options works. On the HPC system, the sysadmins build OpenMPI with around 60 flags passed to ./configure - I trust that it is configured exactly as it needs to be for that system, and there is no way that a PETSc-built OpenMPI will match it in terms of stability and performance.

@dsroberts force-pushed the dsroberts/offload-pc branch from 013d6f0 to 4e8ca5b on March 13, 2026 00:09
Contributor

@connorjward left a comment

I think this is getting better. The CI error now appears to be a more mundane missing apt-get update.

@dsroberts force-pushed the dsroberts/offload-pc branch from 47c4cdc to f2af668 on March 15, 2026 23:29
@dsroberts force-pushed the dsroberts/offload-pc branch from f2cee66 to 3f3a96a on March 16, 2026 00:25
@dsroberts force-pushed the dsroberts/offload-pc branch 2 times, most recently from 33bf483 to 04d12d9 on March 16, 2026 01:57
@dsroberts
Contributor Author

I think this is getting better. The CI error now appears to be a more mundane missing apt-get update.

Yes, it's looking good now. After a few iterations we have a working build. Now I can work on integrating a GPU test into firedrake-check.

@dsroberts force-pushed the dsroberts/offload-pc branch from 69c58e7 to decca0a on March 16, 2026 06:58
@dsroberts force-pushed the dsroberts/offload-pc branch from decca0a to 9ea7a8a on March 16, 2026 07:35
Contributor

@connorjward left a comment

Looking pretty good. Please let us know when you think this is ready for a detailed review.

mat_type = device_matrix_type()
if mat_type is None:
if pc_comm_rank == 0:
warnings.warn(
Contributor

For Python warnings there is a difference between warnings.warn and logger.warning. The former indicates a genuine problem and the latter indicates a likely problem. I wonder if the first warning here should be a warnings.warn and the second a logger.warning.

Contributor Author

Thanks, I wasn't aware of the distinction. Ironic given that if you call logging.captureWarnings(True), the function used by warnings.warn to show the message is monkey-patched to one that calls logging.warning. Will update accordingly.
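The captureWarnings monkey-patching mentioned here is easy to observe; a minimal demonstration using only the standard library:

```python
import logging
import warnings

# Route warnings.warn() through the logging system: the message is
# emitted on the 'py.warnings' logger instead of going straight to stderr.
logging.captureWarnings(True)

captured = []


class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())


logging.getLogger("py.warnings").addHandler(ListHandler())
warnings.warn("GPU offload unavailable")

# The formatted warning text now arrives as a log record.
assert any("GPU offload unavailable" in message for message in captured)
```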

def test_poisson_offload(ksp_type, pc_type):

# Different tests for poisson: cg and pctype sor, --ksp_type=cg --pc_type=gamg
print(f"Using ksp_type = {ksp_type}, and pc_type = {pc_type}.", flush=True)
Contributor

Shouldn't leave print statements in tests (but fine for debugging)



- name: Add Nvidia CUDA deb repositories
run: |
deburl=$( python3 ./firedrake-repo/scripts/firedrake-configure --show-extra-repo-pkg-url --gpu-arch cuda )
Contributor

This process definitely seems better.

run: |
. venv/bin/activate
export PETSC_OPTIONS="${PETSC_OPTIONS} -log_view_gpu_time -log_view"
python3 ./firedrake-repo/tests/firedrake/offload/test_poisson_offloading_pc.py
Contributor

What is this doing?

Contributor Author

This is a final sanity check on my end; I wanted to confirm that the GPU was being used as expected. The final column of the profiling output (https://github.com/firedrakeproject/firedrake/actions/runs/23132801814/job/67189609416) shows the fraction of work done on the GPU, and I wanted to make sure that the low-level vector and matrix operations were actually happening on the GPU, which they are. Besides, this change is in a 'DROP BEFORE MERGE' commit; it will be going away.

@@ -0,0 +1,3 @@
self-hosted-runner:
labels:
- gpu
Contributor

What does this do?

Contributor Author

That's to address this linting error: https://github.com/firedrakeproject/firedrake/actions/runs/22885137596/job/66433010084. I will add a comment to the file to that effect.

@dsroberts
Contributor Author

Looking pretty good. Please let us know when you think this is ready for a detailed review.

Thanks @connorjward. Before I dive into the implementation, I'd like some feedback on @pbrubeck's suggestion of performing the offload step as part of AssembledPC rather than having it as its own step. See my earlier comment for my thoughts on that idea. Depending on the outcome of that feedback, the PR could end up looking quite different than it does now.

@pbrubeck
Contributor

Where offload can take the values never (i.e. do not offload, even if available - default setting), try (attempt to offload; continue but warn if not possible) or always (attempt to offload and raise an exception if it fails). I'd let the implementation pick the device matrix type, as I want to expand this to AMD devices as well (mat_type=aijhipsparse) - we have hardware available to test this. Anyway, that all depends on getting the build and test procedure working. There is still much to do there.

The try option doesn't seem reasonable to me. As Yoda once said, "do or do not, there is no try". Most PETSc options are quite rigid/explicit, and this sort of behavior is not found anywhere else. If the requested solver is not available, the user should be notified by an appropriate error message. Most users will ignore warnings, and if they request a GPU solver that is not available they should get a loud error message.

I'm starting to think that OffloadPC should not be subclassing or extending AssembledPC. I originally suggested subclassing because I saw a lot of code repetition. But now I realized that AssembledPC always reassembles the Jacobian, whereas OffloadPC only wants to convert the mat_type (simply transferring the matrix entries without recalculating them). So I am more convinced that OffloadPC should be its own class, one that would simply convert the Mat objects and stuff them into an internal PC object.

Regarding the automatic selection of the mat_type based on the device, what if a device supports more than one? Which one should be the default?

@dsroberts
Contributor Author

@pbrubeck thanks for the feedback. I've been experimenting with implementations over the last couple of days and I think I'm close to the same conclusion. If offloading is integrated into AssembledPC, how do we handle cases that don't involve AssembledPC? I'm thinking a separate, distinct, copy-in copy-out 'PC' is the most sensible way to go for now. I did have some success with offloading in AssembledPC (really just assembling to an aijcusparse matrix), but I had issues with the copy-back for the fieldsplit preconditioner.

The reason I suggested a try option is that, once this is all merged, we will only be installing GPU-enabled Firedrake on our HPC systems from now on. A GPU-enabled PETSc build should be fully functional on a CPU-only host. The try option was proposed as a way to make solver configurations portable between the CPU-only nodes and GPU nodes of heterogeneous systems. Without it, people would need to maintain two separate sets of solver settings, one for CPU and one for GPU. I guess your proposal would be more consistent with PETSc; maybe I'll just patch our Firedrake installs to handle that.

As for multiple different offload types being supported by the same device: both HIP and CUDA are vendor-specific, so AMD devices only support HIP and Nvidia devices only support CUDA. That said, I believe it is possible to build PETSc with support for multiple devices. Perhaps the offload_mat_type function should initialise a device and query what it gets back to determine which matrix type to use.
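That query-then-map step might look like the following sketch (illustrative only: aijcusparse/aijhipsparse are PETSc's CUDA/HIP sparse matrix types, and device_type stands in for whatever the initialised PETSc device reports):

```python
# Map a reported device backend to a matrix type, falling back to the
# host 'aij' type when no GPU backend is reported.
DEVICE_MAT_TYPES = {"cuda": "aijcusparse", "hip": "aijhipsparse"}


def offload_mat_type(device_type):
    return DEVICE_MAT_TYPES.get(device_type, "aij")
```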

@dsroberts
Contributor Author

An update on where I'm at with this. I have a reworked version subclassing PCBase that passes all of its tests, but as soon as I try to apply it to the G-ADOPT smoke test it falls apart; I'm in the process of trying to work out what the actual issue is there. Separate to that, there are a lot of unnecessary and expensive malloc/free calls happening in the current implementation - I need to work out if I can have PETSc leave the CUDA host/device buffers allocated between apply invocations. Early profiling shows this would give a 10-20% speed-up.
I'm wondering if it's worth splitting this PR into two: the first would just cover the modifications to firedrake-configure and the GitHub workflows (which are ready for review), and the second would contain the offloading preconditioner and test cases. The reason I suggest this is that the longer the firedrake-configure and GitHub Actions changes wait on the offloading preconditioner, the more likely it is that there will be conflicts to deal with, whereas a new preconditioner is more or less stand-alone. The catch is that the GPU testing would be limited to PETSc make check (which does include CUDA tests) and CPU-only firedrake-check tests; the GPU offloading tests would come as part of the second PR. Let me know if you'd like me to proceed with this, or continue with a single PR encompassing all of the work.

@connorjward
Contributor

I think it's fine if you want to break this apart.
