PCPatch: support exterior facet integrals #4878

Open

miguelcoolchips wants to merge 11 commits into firedrakeproject:main from Corintis:patch_pc_external_integrals

Conversation


@miguelcoolchips miguelcoolchips commented Feb 10, 2026

This PR was generated with AI assistance. It depends on this PETSc MR (merged): https://gitlab.com/petsc/petsc/-/merge_requests/9024

Summary

PatchPC currently supports cell integrals (dx) and interior facet integrals (dS), but raises NotImplementedError when a form contains exterior facet integrals (ds). This PR extends PatchPC to handle exterior_facet integral types, enabling problems that involve boundary integrals -- such as DG formulations with boundary flux terms or Nitsche weak boundary conditions -- to be solved with patch-based preconditioners.

Motivation

Many finite element formulations naturally produce exterior facet integrals. For example:

  • DG methods with boundary penalty/flux terms (ds)
  • Nitsche's method for weakly imposed Dirichlet boundary conditions, which replaces DirichletBC objects with consistency, symmetry, and penalty terms integrated over ds
  • Mixed formulations combining dx, dS, and ds integrals

Previously, any such form passed to PatchPC would fail with:

NotImplementedError: Only for cell or interior facet integrals

This blocked the use of PatchPC (including Vanka relaxation in multigrid) for an important class of problems.

Changes

firedrake/preconditioners/patch.py

  • matrix_funptr and residual_funptr: Extended to accept "exterior_facet" as a valid integral type. Exterior facet kernels are accumulated into a new ext_facet_kernels list and returned alongside cell and interior facet kernels. When the integral type is "exterior_facet", the exterior facet local numbering dat (mesh.exterior_facets.local_facet_dat) is appended to the kernel arguments.
  • make_c_arguments: Extended the require_facet_number logic to handle "exterior_facet" integrals, using mesh.exterior_facets.local_facet_dat._kernel_args_ and mesh.exterior_facets.point2facetnumber.
  • PatchBase.initialize: Cell, interior facet, and exterior facet kernel setup is now conditional on the respective kernel lists being non-empty, rather than assuming a cell kernel always exists. For exterior facet kernels, the method builds the C wrapper, compiles the callback function, constructs the ctypes struct (with point2facet from mesh.exterior_facets.point2facetnumber), and registers the callback via set_patch_jacobian(..., exterior_facets=True) and set_patch_residual(..., exterior_facets=True).
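The kernel-dispatch logic described above can be sketched in plain Python. This is an illustration only: the names KernelInfo and split_kernels are hypothetical stand-ins, not the actual Firedrake internals, and the real code also threads through kernel arguments such as local_facet_dat.

```python
from collections import namedtuple

# Hypothetical stand-in for the compiled-kernel metadata Firedrake carries.
KernelInfo = namedtuple("KernelInfo", ["integral_type"])


def split_kernels(kernels):
    """Route kernels into per-integral-type lists.

    Previously only "cell" and "interior_facet" were accepted; this PR
    adds an "exterior_facet" bucket, returned alongside the other two.
    """
    cell_kernels, int_facet_kernels, ext_facet_kernels = [], [], []
    for kinfo in kernels:
        if kinfo.integral_type == "cell":
            cell_kernels.append(kinfo)
        elif kinfo.integral_type == "interior_facet":
            int_facet_kernels.append(kinfo)
        elif kinfo.integral_type == "exterior_facet":
            ext_facet_kernels.append(kinfo)
        else:
            raise NotImplementedError(
                "Only cell, interior facet, or exterior facet integrals")
    return cell_kernels, int_facet_kernels, ext_facet_kernels
```

The point is that a form mixing dx, dS, and ds now populates all three lists instead of raising on the ds kernels.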

firedrake/cython/patchimpl.pyx

  • Added exterior_facets=False parameter to set_patch_residual and set_patch_jacobian.
  • Added elif exterior_facets: branches that call the new PETSc API functions PCPatchSetComputeOperatorExteriorFacets and PCPatchSetComputeFunctionExteriorFacets.

firedrake/cython/petschdr.pxi

  • Declared the two new PETSc C functions: PCPatchSetComputeOperatorExteriorFacets and PCPatchSetComputeFunctionExteriorFacets.

tests/firedrake/regression/test_patch_pc.py

  • Added test_patch_pc_exterior_facets, parametrized over "dx_and_ds" and "dx_dS_and_ds" integral combinations. Each case solves a DG(1) problem on a UnitSquareMesh with both a direct LU solver and PatchPC (star patches, vertex-centered), then asserts the solutions match to within atol=1e-8.
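For reference, the vertex-centred star-patch options such a test would pass to the solver look roughly like the following. The exact values are an assumption modelled on the PatchPC options shown later in this thread; the committed test may differ.

```python
# Sketch of vertex-centred star-patch PatchPC solver options
# (illustrative; exact values in the committed test may differ).
patch_star_params = {
    "mat_type": "matfree",
    "ksp_type": "gmres",
    "pc_type": "python",
    "pc_python_type": "firedrake.PatchPC",
    "patch": {
        "pc_patch_construct_type": "star",  # star patches
        "pc_patch_construct_dim": 0,        # centred on vertices
        "pc_patch_sub_mat_type": "seqdense",
        "sub_ksp_type": "preonly",
        "sub_pc_type": "lu",
        "pc_patch_save_operators": True,
    },
}
```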

Dependencies

This PR requires a corresponding PETSc change that adds:

  • PCPatchSetComputeOperatorExteriorFacets and PCPatchSetComputeFunctionExteriorFacets API functions
  • extFacetsToPatchCell index set in the PC_PATCH struct for mapping exterior facets to their owning cell within each patch
  • Exterior facet contributions to the patch operator/residual assembly and matrix preallocation logic

Test plan

  • test_patch_pc_exterior_facets[dx_and_ds] -- DG problem with dx + ds integrals, PatchPC matches direct solver
  • test_patch_pc_exterior_facets[dx_dS_and_ds] -- DG problem with dx + dS + ds integrals, PatchPC matches direct solver
  • Existing test_jacobi_sor_equivalence tests continue to pass (cell-only and interior facet paths unchanged)
  • Stokes driven-cavity with Nitsche BCs and Vanka PatchPC relaxation in multigrid converges correctly

The following snippet shows the feature working on a Stokes problem with Nitsche boundary conditions:

"""
Stokes driven-cavity solved with Nitsche's method for Dirichlet BCs
and PatchPC Vanka relaxation in a multigrid solver.

This example is based on the Firedrake Stokes Vanka demo:
  https://www.firedrakeproject.org/demos/stokes_vanka_patches.py.html

The key difference is that ALL Dirichlet boundary conditions are imposed
weakly via the symmetric Nitsche method, rather than using DirichletBC.
This introduces exterior facet integrals (ds) into the bilinear form,
exercising PatchPC's support for exterior facet assembly.

Problem: lid-driven cavity on the unit square.
  - u = (1, 0) on the top boundary (id 4)
  - u = (0, 0) on the remaining boundaries (ids 1, 2, 3)
  - Taylor-Hood elements (P2 velocity / P1 pressure)

Solver: geometric multigrid with Vanka patch relaxation (PatchPC).
"""

from firedrake import *


def run_solve(mesh, params):
    """Solve the Nitsche-Stokes problem and return the iteration count."""
    degree = 2
    V = VectorFunctionSpace(mesh, "CG", degree)
    Q = FunctionSpace(mesh, "CG", 1)
    Z = V * Q

    u, p = TrialFunctions(Z)
    v, q = TestFunctions(Z)

    n = FacetNormal(mesh)
    h = CellSize(mesh)
    beta = Constant(20.0 * degree**2)

    # --- Volume terms (standard Stokes) ---
    a = inner(grad(u), grad(v)) * dx - inner(p, div(v)) * dx - inner(div(u), q) * dx

    # --- Nitsche boundary terms on ds (all boundaries) ---
    a += (
        -inner(dot(grad(u), n), v) * ds  # consistency
        - inner(u, dot(grad(v), n)) * ds  # symmetry
        + inner(p * n, v) * ds  # pressure consistency
        + inner(q * n, u) * ds  # pressure symmetry
        + (beta / h) * inner(u, v) * ds  # penalty
    )

    # --- RHS: only non-zero on boundary 4 (lid) where g = (1, 0) ---
    g = Constant((1.0, 0.0))
    L = (
        -inner(g, dot(grad(v), n)) * ds(4)  # symmetry
        + inner(q * n, g) * ds(4)  # pressure symmetry
        + (beta / h) * inner(g, v) * ds(4)  # penalty
    )

    up = Function(Z)

    nsp = MixedVectorSpaceBasis(Z, [Z.sub(0), VectorSpaceBasis(constant=True, comm=Z.comm)])

    # No DirichletBC objects -- BCs are entirely in the form
    problem = LinearVariationalProblem(a, L, up)
    solver = LinearVariationalSolver(problem, solver_parameters=params, nullspace=nsp)
    solver.solve()

    return solver.snes.getLinearSolveIterations()


def mg_params(relax, mat_type="aij"):
    """Build multigrid parameters with given relaxation options."""
    return {
        "mat_type": mat_type,
        "ksp_type": "gmres",
        "ksp_max_it": 50,
        "pc_type": "mg",
        "mg_levels": {
            "ksp_type": "chebyshev",
            "ksp_max_it": 2,
            **relax,
        },
        "mg_coarse": {
            "mat_type": "aij",
            "ksp_type": "preonly",
            "pc_type": "cholesky",
            "pc_factor_shift_type": "nonzero",
        },
    }


# --- PatchPC Vanka relaxation (matfree, exercises ds assembly) ---
# Note: with Nitsche BCs (no strong DirichletBC), some Vanka patches may be
# singular (no velocity pinning on the patch boundary), so we use LU with
# a nonzero shift rather than the dense inverse.
patch_relax = mg_params(
    {
        "pc_type": "python",
        "pc_python_type": "firedrake.PatchPC",
        "patch": {
            "pc_patch_construct_type": "vanka",
            "pc_patch_construct_dim": 0,
            "pc_patch_exclude_subspaces": 1,
            "pc_patch_sub_mat_type": "seqaij",
            "sub_ksp_type": "preonly",
            "sub_pc_type": "lu",
            "sub_pc_factor_shift_type": "nonzero",
            "pc_patch_save_operators": True,
        },
    },
    mat_type="matfree",
)

# --- ASM Vanka relaxation (assembled matrix, for comparison) ---
asm_relax = mg_params(
    {
        "pc_type": "python",
        "pc_python_type": "firedrake.ASMVankaPC",
        "pc_vanka_construct_dim": 0,
        "pc_vanka_exclude_subspaces": 1,
        "pc_vanka_backend": "tinyasm",
    }
)


if __name__ == "__main__":
    dparams = {"overlap_type": (DistributedMeshOverlapType.VERTEX, 2)}
    base = UnitSquareMesh(4, 4, distribution_parameters=dparams)
    mh = MeshHierarchy(base, 4)

    names = {
        "Patch Vanka (matfree, Nitsche ds)": patch_relax,
        "ASM Vanka (assembled, Nitsche ds)": asm_relax,
    }

    for name, params in names.items():
        print(f"\n{name}")
        print(f"{'Level':<8}{'Iterations':<12}")
        for lvl, msh in enumerate(mh[1:], start=1):
            its = run_solve(msh, params)
            print(f"{lvl:<8}{its:<12}")
    print("\nAll solves completed successfully.")

Output

Patch Vanka (matfree, Nitsche ds)
Level   Iterations
1       7
2       9
3       11

ASM Vanka (assembled, Nitsche ds)
Level   Iterations
1       9
2       10
3       10

@pbrubeck pbrubeck changed the title from "external integrals" to "PCPatch: support exterior facet integrals" on Feb 10, 2026
@JHopeCollins
Member

A significant proportion of the changes in this PR are just reformatting. This makes it quite difficult to identify and review the actual changes.
Please can you remove the changes which are purely reformatting? The previous version was passing our linting so this shouldn't cause any problems with the CI.

@miguelcoolchips miguelcoolchips marked this pull request as draft February 19, 2026 16:00
@miguelcoolchips
Author

> A significant proportion of the changes in this PR are just reformatting. This makes it quite difficult to identify and review the actual changes. Please can you remove the changes which are purely reformatting? The previous version was passing our linting so this shouldn't cause any problems with the CI.

You are right. Do you know if Firedrake has formatting rules? I could not see such a thing in the pyproject.toml file

PS: I changed this PR to draft until the PETSc MR has been merged

@JHopeCollins
Member

JHopeCollins commented Feb 19, 2026

> You are right. Do you know if Firedrake has formatting rules? I could not see such a thing in the pyproject.toml file

Thanks! If you run make lint from the top level firedrake directory then it will pick up on our lint rules.

@pbrubeck
Contributor

> You are right. Do you know if Firedrake has formatting rules? I could not see such a thing in the pyproject.toml file

The formatting/linting is detailed in the contributing guide.

https://www.firedrakeproject.org/contribute.html#pre-submission-checklist

@miguelcoolchips miguelcoolchips marked this pull request as ready for review March 18, 2026 17:02
@miguelcoolchips
Author

> You are right. Do you know if Firedrake has formatting rules? I could not see such a thing in the pyproject.toml file

> The formatting/linting is detailed in the contributing guide.
> https://www.firedrakeproject.org/contribute.html#pre-submission-checklist

This PR has been formatted

# Solve with a direct solver for reference
u_direct = Function(V)
problem = LinearVariationalProblem(a, L, u_direct)
direct = LinearVariationalSolver(problem, solver_parameters={"ksp_type": "preonly", "pc_type": "lu"})
Contributor

Could this test take ASMStarPC solver parameters and assert that both solvers take the same number of iterations?

miguelcoolchips and others added 3 commits March 19, 2026 14:23
Co-authored-by: Pablo Brubeck <brubeck@protonmail.com>
Co-authored-by: Pablo Brubeck <brubeck@protonmail.com>
…rectly

PCPatch passes all facets "exterior to the patch" (including patch-boundary
interior facets) to the exterior facet callback. The previous code mapped
these via point2facetnumber without bounds-checking, causing:
  - Wrong operator contributions from patch-boundary interior facets
    (mapped to garbage indices or -1 treated as valid)
  - Out-of-bounds access (SIGSEGV) in parallel where interior facet plex
    point indices exceed the size of point2facetnumber

Fixes:
1. Build an extended point2facetnumber array of size fEnd (covering all
   facet plex points), initialised to -1, then fill in valid exterior facet
   indices. This prevents OOB access in parallel.

2. In the C wrappers (ComputeResidual/ComputeJacobian), filter both the
   facet index array (whichPoints) and the corresponding DOF rows (dofArray)
   together, keeping only entries that map to genuine global exterior facets
   (point2facetnumber >= 0). Previously only whichPoints was filtered,
   misaligning it with dofArray.

Update test_patch_pc_exterior_facets to compare iteration counts against
ASMStarPC (as suggested by reviewer @pbrubeck), split into two functions:
- test_patch_pc_exterior_facets_dx_ds: parallel([1,3]), tests ds only
- test_patch_pc_exterior_facets_dx_dS_ds: serial only (parallel interior
  facet counts diverge between ASMStarPC/PatchPC, a pre-existing limitation)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
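The joint filtering step described in point 2 can be sketched with NumPy. Array names mirror those in the commit message, but this is an illustration of the idea, not the actual C wrapper code.

```python
import numpy as np


def filter_exterior_facets(which_points, dof_array, point2facetnumber):
    """Keep only entries that map to genuine global exterior facets.

    which_points : plex point indices passed by PCPatch; these may include
        patch-boundary interior facets, which map to -1.
    dof_array    : DOF rows, one row per entry in which_points.

    Both arrays are filtered with the same mask so they stay aligned,
    fixing the misalignment caused by filtering which_points alone.
    """
    facets = point2facetnumber[which_points]
    keep = facets >= 0  # genuine exterior facets only
    return facets[keep], dof_array[keep]
```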
miguelcoolchips and others added 2 commits March 19, 2026 15:49
Build point2facetnumber with size fEnd (covering all facet plex points on
the rank) instead of max(facets)+1. This ensures that when PCPatch passes
patch-boundary interior facets to the exterior facet callback (as they are
"exterior to the patch"), they map to -1 rather than causing an
out-of-bounds access in parallel.

This makes the exterior facet handling in PatchPC identical to the interior
facet case, removing the need for the previously introduced ad-hoc extended
array in patch.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
getDepthStratum(1) gives edges in 2D but cells in 1D, so its fEnd is
too small for 1D meshes where facets are vertices at depth 0. Use
dm.getChart() instead to get pEnd, which covers all entity types and
ensures any valid plex point index is within the array bounds.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
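The array construction described in these two commits can be sketched in NumPy. In the real code p_end comes from dm.getChart(); here it is just a parameter, and the function name is illustrative.

```python
import numpy as np


def build_point2facetnumber(exterior_facet_points, p_end):
    """Map every plex point index below p_end to an exterior facet
    number, or -1 if the point is not a global exterior facet.

    Sizing the array by the chart end (p_end) rather than
    max(facets) + 1 guarantees that any valid plex point index is in
    bounds, so patch-boundary interior facets handed to the exterior
    facet callback map to -1 instead of reading out of bounds.
    """
    p2f = np.full(p_end, -1, dtype=np.int64)
    p2f[exterior_facet_points] = np.arange(len(exterior_facet_points))
    return p2f
```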
Contributor

@connorjward connorjward left a comment


Thanks. I think I am now happy with the implementation. @pbrubeck are the tests reasonable?
