
White noise generators and AR covariance operators #4716

Merged
JHopeCollins merged 35 commits into main from JHopeCollins/moar-covariance-operators on Feb 5, 2026

Conversation

@JHopeCollins
Member

@JHopeCollins JHopeCollins commented Nov 10, 2025

Depends on #4773 and firedrakeproject/fiat#201.

White and correlated noise generation from #3799, plus extra methods to calculate the action and inverse of autoregressive covariance operators and to use them for a weighted norm.

White noise generation

See Croci et al., 2018.

White noise samples are calculated as $M^{1/2}z$, where $z\sim\mathcal{N}(0, I)$ and $M$ is the mass matrix.
$M^{1/2}$ can be constructed by assembling the Cholesky factor of the element-wise mass matrices.
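As a minimal numpy sketch of this sampling (not Firedrake's assembly; the tridiagonal P1 mass-matrix stencil for a uniform 1D mesh is an assumption for illustration), any factor $L$ with $LL^{T} = M$ serves as $M^{1/2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative P1 mass matrix on a uniform 1D mesh: (h/6) * [1, 4, 1] stencil.
n, h = 50, 1.0 / 50
M = (h / 6) * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))

# Cholesky factor L with L @ L.T == M plays the role of M^{1/2}.
L = np.linalg.cholesky(M)

z = rng.standard_normal(n)   # z ~ N(0, I)
b = L @ z                    # white noise sample with covariance M

# Sanity check: the empirical covariance of many samples approaches M.
samples = L @ rng.standard_normal((n, 20000))
cov = samples @ samples.T / 20000
assert np.allclose(cov, M, atol=5e-3)
```

Any square root of $M$ works here; the element-wise Cholesky construction below produces one that is sparse and assembles locally.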

There are two implementations of this:

  1. Hijack the PyOP2 and loopy assembly kernels to return the Cholesky factor of the element mass matrix rather than the mass matrix itself. This is the implementation from JDBetteridge/randomfields, rebased in #3799.
  2. Build a discontinuous superspace and use PETSc to calculate the global Cholesky factor of its mass matrix, which is block diagonal over the elements. This uses the L2Cholesky class from the mesh-independent optimisation trick in #4575. There is also a specialisation of this method for VertexOnlyMesh.
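The property underlying implementation 2 can be illustrated with scipy: the Cholesky factor of a block-diagonal matrix is just the block-diagonal assembly of the per-block factors, so each block can be factored independently. The random SPD blocks below are stand-ins for element mass matrices, not real ones:

```python
import numpy as np
from scipy.linalg import block_diag, cholesky

rng = np.random.default_rng(1)

# Stand-in per-element mass matrices: random SPD blocks for illustration.
blocks = []
for _ in range(4):
    A = rng.standard_normal((3, 3))
    blocks.append(A @ A.T + 3 * np.eye(3))

M = block_diag(*blocks)             # global (block-diagonal) mass matrix
L_global = cholesky(M, lower=True)  # factor of the whole matrix at once
L_blocks = block_diag(*[cholesky(B, lower=True) for B in blocks])

# The global factor equals the block-diagonal assembly of the
# element-wise factors, so each process can factor its own blocks
# with no parallel Cholesky required.
assert np.allclose(L_global, L_blocks)
```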

Autoregressive covariance operator

See Mirouze & Weaver, 2010.

A covariance operator $B$ whose kernel is an $m$-th order autoregressive function can be calculated using $m$ backward Euler steps of a diffusion operator, where the diffusion coefficient is specified by the correlation lengthscale.

If $M$ is the mass matrix, $K$ is the matrix for a single Backward Euler step, and $\lambda$ is a normalisation factor, then the m-th order covariance operator with variance $\sigma^2$ is:

$$B: V^{*} \to V = \sigma\lambda((K^{-1}M)^{m}M^{-1})\lambda\sigma$$

This formulation leads to an efficient implementation for $B^{-1}$ (using the action of $K$) and for $B^{1/2}$ by taking only m/2 steps of the diffusion operator. This can be used to calculate weighted norms $||x||_{B^{-1}}$ and sample from $\mathcal{N}(0,B)$.
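A numpy sketch of these two actions, with hypothetical helpers `apply_B` and `apply_Binv` and illustrative 1D P1 mass/stiffness matrices (parameter values assumed; none of this is Firedrake's implementation):

```python
import numpy as np

n, h = 20, 1.0 / 21          # interior P1 nodes on a uniform unit interval
sigma, lam, m = 1.3, 0.7, 4  # variance scale, normalisation, AR order (assumed)
ell = 0.05                   # correlation lengthscale -> diffusion coefficient

# Illustrative 1D P1 mass and stiffness matrices.
M = (h / 6) * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
A = (1 / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
K = M + ell**2 * A           # one backward Euler step of the diffusion operator
c = sigma * lam


def apply_B(x):
    # B x = sigma*lam * (K^{-1} M)^m M^{-1} * lam*sigma * x
    y = np.linalg.solve(M, x)
    for _ in range(m):
        y = np.linalg.solve(K, M @ y)
    return c**2 * y


def apply_Binv(x):
    # B^{-1} x = c^{-2} M (M^{-1} K)^m x: only actions of K, no solves with K.
    y = x.copy()
    for _ in range(m):
        y = np.linalg.solve(M, K @ y)
    return (M @ y) / c**2


rng = np.random.default_rng(2)
x = rng.standard_normal(n)
assert np.allclose(apply_Binv(apply_B(x)), x)  # B^{-1} B = I
norm_sq = x @ apply_Binv(x)                    # ||x||^2_{B^{-1}}
assert norm_sq > 0.0                           # B, hence B^{-1}, is SPD
```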

The CovarianceOperatorBase class provides an abstract interface for other covariance operators, which must implement:

  1. sample to return $w\sim\mathcal{N}(0,B)$
  2. apply_action(x) to apply $Bx$
  3. apply_inverse(x) to apply $B^{-1}x$
  4. norm(x) to return the squared weighted norm $||x||_{B^{-1}}^{2} = x^{T}B^{-1}x$
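In Python, such an interface might be sketched with abc. The class and method names follow the list above, but the signatures and the default norm are assumptions, not Firedrake's actual API:

```python
import numpy as np
from abc import ABC, abstractmethod


class CovarianceOperatorBase(ABC):
    """Sketch of the abstract interface; signatures are illustrative."""

    @abstractmethod
    def sample(self):
        """Return w ~ N(0, B)."""

    @abstractmethod
    def apply_action(self, x):
        """Return the action B @ x."""

    @abstractmethod
    def apply_inverse(self, x):
        """Return the action B^{-1} @ x."""

    def norm(self, x):
        # Squared weighted norm x^T B^{-1} x, built from the inverse action.
        return float(x @ self.apply_inverse(x))


class IdentityCovariance(CovarianceOperatorBase):
    """Trivial concrete operator with B = I, for illustration only."""

    def __init__(self, n, seed=0):
        self.n = n
        self.rng = np.random.default_rng(seed)

    def sample(self):
        return self.rng.standard_normal(self.n)

    def apply_action(self, x):
        return x.copy()

    def apply_inverse(self, x):
        return x.copy()


B = IdentityCovariance(5)
assert B.norm(np.ones(5)) == 5.0
```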

PETSc python contexts

If you use the weighted norm $||x||_{B^{-1}}$ in the functional for an adjoint calculation, then $B$ is often a good preconditioner for the second-order adjoint (Hessian) system. This PR provides a python Mat context and a corresponding python PC context for children of CovarianceOperatorBase.

Co-authored-by: Jack Betteridge <j.betteridge@imperial.ac.uk>
@jrmaddison
Contributor

L2Cholesky in #4575 might be useful here, e.g. for parallel

@JHopeCollins
Member Author

L2Cholesky in #4575 might be useful here, e.g. for parallel

Of course, you can just grab the local part of the matrix because it's block diagonal, so you don't actually need a parallel Cholesky factor.

I've added that PR to the Firedrake meeting agenda this afternoon.

@JHopeCollins JHopeCollins marked this pull request as ready for review December 9, 2025 11:55
dham
dham previously requested changes Dec 9, 2025
@JHopeCollins JHopeCollins requested a review from pbrubeck February 2, 2026 09:04
@JHopeCollins
Member Author

@pbrubeck the only outstanding comments were to add an MMS test for the diffusion forms, which I've done here: https://github.com/firedrakeproject/firedrake/pull/4716/changes#diff-2327a64dc67ea8e9dd667c73fc315e5ca301b26fa9fcdaa08de4c6e94200fdb4R364

@JHopeCollins JHopeCollins dismissed dham’s stale review February 2, 2026 09:05

Changes implemented

Contributor

@pbrubeck pbrubeck left a comment


I'm not familiar with the maths, but this looks good in general.

@JHopeCollins JHopeCollins enabled auto-merge (squash) February 5, 2026 12:53
@JHopeCollins JHopeCollins disabled auto-merge February 5, 2026 14:22
@JHopeCollins
Member Author

The only failures are unrelated timeouts.

@JHopeCollins JHopeCollins merged commit c15c92d into main Feb 5, 2026
6 of 7 checks passed
@JHopeCollins JHopeCollins deleted the JHopeCollins/moar-covariance-operators branch February 5, 2026 14:23
j-bowhay pushed a commit to j-bowhay/firedrake that referenced this pull request Feb 24, 2026
…4716)

* White noise generators and AR covariance operators

---------

Co-authored-by: Jack Betteridge <j.betteridge@imperial.ac.uk>
connorjward pushed a commit that referenced this pull request Mar 3, 2026
* White noise generators and AR covariance operators

---------

Co-authored-by: Jack Betteridge <j.betteridge@imperial.ac.uk>
