Add FEniCSx solver for partitioned heat conduction tutorial #687

Open
NiklasVin wants to merge 11 commits into precice:develop from NiklasVin:partitioned-heat-fenicsx

Conversation

@NiklasVin
Collaborator

@NiklasVin NiklasVin commented Dec 9, 2025

Checklist:

  • I added a summary of any user-facing changes (compared to the last release) in the changelog-entries/<PRnumber>.md.
  • I will remember to squash-and-merge, providing a useful summary of the changes of this PR.

@NiklasVin NiklasVin self-assigned this Dec 9, 2025
@NiklasVin NiklasVin changed the title from "Add partitioned heat conduction tutorial for FEniCSx" to "Add FEniCSx solver for partitioned heat conduction tutorial" Dec 9, 2025
@NiklasVin NiklasVin marked this pull request as ready for review January 24, 2026 20:00
@IshaanDesai IshaanDesai self-requested a review January 26, 2026 09:27
@IshaanDesai
Member

This PR is to be merged only after fenicsxprecice is released and available via pip.

NiklasVin added a commit to precice/fenicsx-adapter that referenced this pull request Mar 4, 2026
Move the tutorials to https://github.com/precice/tutorials. See precice/tutorials#687, precice/tutorials#688 and precice/tutorials#714

---------

Co-authored-by: Ishaan Desai <ishaandesai@gmail.com>
NiklasVin added a commit to NiklasVin/tutorials that referenced this pull request Mar 9, 2026
Member

@IshaanDesai IshaanDesai left a comment


Looks good in general. My suggestions are mainly cosmetic. I ran the tutorial and it runs, but at the end the error is higher than the tolerance.

```
L2-error: 3.56e+08
Error_max: 3.00e+10
Traceback (most recent call last):
  File "/home/desaiin/tutorials/partitioned-heat-conduction/dirichlet-fenicsx/../solver-fenicsx/heat.py", line 304, in <module>
    error, error_pointwise = compute_errors(u_n, u_ref, total_error_tol=error_tol)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/desaiin/tutorials/partitioned-heat-conduction/solver-fenicsx/errorcomputation.py", line 19, in compute_errors
    assert (error_L2 < total_error_tol)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
```

Shall we lower the tolerance?

@NiklasVin
Collaborator Author

NiklasVin commented Mar 15, 2026

> Looks good in general. My suggestions are mainly cosmetic. I ran the tutorial and it runs, but at the end the error is higher than the tolerance.
>
> ```
> L2-error: 3.56e+08
> Error_max: 3.00e+10
> Traceback (most recent call last):
>   File "/home/desaiin/tutorials/partitioned-heat-conduction/dirichlet-fenicsx/../solver-fenicsx/heat.py", line 304, in <module>
>     error, error_pointwise = compute_errors(u_n, u_ref, total_error_tol=error_tol)
>                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>   File "/home/desaiin/tutorials/partitioned-heat-conduction/solver-fenicsx/errorcomputation.py", line 19, in compute_errors
>     assert (error_L2 < total_error_tol)
>             ^^^^^^^^^^^^^^^^^^^^^^^^^^
> AssertionError
> ```
>
> Shall we lower the tolerance?

How did you run the tutorial?
On my machine, I get 10^-12 and 10^-14 as the L2 errors for the Dirichlet and Neumann participants.
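For context, the assertion that fails is the tolerance check in `errorcomputation.py`. A minimal sketch of such a check is below; only the names `compute_errors`, `error_L2`, `error_pointwise`, and `total_error_tol` come from the traceback, and the discrete L2 norm via NumPy is an assumption here (the actual script presumably evaluates the error norm with FEniCSx on the function space):

```python
import numpy as np

def compute_errors(u_approx, u_ref, total_error_tol=1e-6):
    """Sketch of a tolerance check like the one in errorcomputation.py.

    The NumPy-based discrete L2 norm is an assumption; the real
    implementation likely integrates the error with FEniCSx.
    """
    diff = np.asarray(u_approx, dtype=float) - np.asarray(u_ref, dtype=float)
    error_L2 = float(np.sqrt(np.mean(diff ** 2)))  # discrete L2 error
    error_pointwise = np.abs(diff)                 # pointwise absolute error
    # This is the assertion that trips when the computed error (3.56e+08
    # in the report above) exceeds the configured tolerance.
    assert error_L2 < total_error_tol, (
        f"L2-error {error_L2:.2e} exceeds tolerance {total_error_tol:.2e}"
    )
    return error_L2, error_pointwise
```

With identical inputs the check passes with a zero error; errors of order 10^8 versus 10^-12 on different machines point to a setup difference rather than a tolerance that is merely a little too strict.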

@IshaanDesai
Member

> How did you run the tutorial? On my machine, I get 10^-12 and 10^-14 as L2 error for Dirichlet and Neumann participants

I ran the tutorial by running ./run.sh in the folders dirichlet-fenicsx/ and neumann-fenicsx/. Despite @NiklasVin having matching versions of the dependencies, the difference in the error computation persists. Perhaps we can ask one more person to try out the tutorial. @MakisH could you please try out the FEniCSx-FEniCSx case for this tutorial?
