
Commit 9110b09

Author: Sebastien Loisel
Update documentation for HPCBackend API
- Fix docs/Project.toml UUID to match renamed package
- Update all code examples to use the backend parameter (see the sketch after this list):
  - BACKEND_CPU_MPI for CPU with MPI
  - backend_metal_mpi(comm) for Metal GPU
  - backend_cuda_mpi(comm) for CUDA GPU
- Update type parameter documentation from AV/AM to B<:HPCBackend
- Add Backend Types section to API reference
- Fix import order: MPI.Init() before using HPCLinearAlgebra
- Update Julia version requirement to 1.11
1 parent: 73ede94
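A minimal usage sketch of the setup this commit describes, assembled from the names in the commit message (`BACKEND_CPU_MPI`, `backend_metal_mpi(comm)`, `backend_cuda_mpi(comm)`, and the `MPI.Init()`-first import order). The surrounding scaffolding is illustrative, not taken from the package docs:

```julia
# Sketch of the backend setup per this commit; requires Julia 1.11.
using MPI
MPI.Init()                  # per the commit: initialize MPI first...
using HPCLinearAlgebra      # ...then load the package

backend = BACKEND_CPU_MPI   # pre-constructed CPU + MPI backend

# GPU backends are built from an MPI communicator via factory functions:
# backend = backend_metal_mpi(MPI.COMM_WORLD)   # Metal GPU
# backend = backend_cuda_mpi(MPI.COMM_WORLD)    # CUDA GPU
```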

7 files changed: 253 additions & 135 deletions


docs/Project.toml

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
-HPCLinearAlgebra = "5bdd2be4-ae34-42ef-8b36-f4c85d48f377"
+HPCLinearAlgebra = "537374f1-5608-4525-82fb-641dce542540"
 MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"

 [compat]
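Why this one-line change matters: Pkg identifies packages by UUID, so the entry in docs/Project.toml must match the `uuid` field of the renamed package's own Project.toml or the docs environment fails to resolve it. A sketch of the corresponding root-level entry; the name and UUID come from this diff, any other fields are omitted:

```toml
name = "HPCLinearAlgebra"
uuid = "537374f1-5608-4525-82fb-641dce542540"
```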

docs/src/api.md

Lines changed: 30 additions & 13 deletions

@@ -76,25 +76,42 @@ clear_mumps_analysis_cache!
 io0
 ```
 
+## Backend Types
+
+```@docs
+HPCBackend
+DeviceCPU
+DeviceMetal
+DeviceCUDA
+CommSerial
+CommMPI
+SolverMUMPS
+BACKEND_CPU_MPI
+BACKEND_CPU_SERIAL
+backend_metal_mpi
+backend_cuda_mpi
+to_backend
+```
+
 ## Type Mappings
 
-### Native to MPI Conversions
+### Native to Distributed Conversions
 
-| Native Type | MPI Type | Description |
-|-------------|----------|-------------|
-| `Vector{T}` | `HPCVector{T,AV}` | Distributed vector |
-| `Matrix{T}` | `HPCMatrix{T,AM}` | Distributed dense matrix |
-| `SparseMatrixCSC{T,Ti}` | `HPCSparseMatrix{T,Ti,AV}` | Distributed sparse matrix |
+| Native Type | Distributed Type | Description |
+|-------------|------------------|-------------|
+| `Vector{T}` | `HPCVector{T,B}` | Distributed vector |
+| `Matrix{T}` | `HPCMatrix{T,B}` | Distributed dense matrix |
+| `SparseMatrixCSC{T,Ti}` | `HPCSparseMatrix{T,Ti,B}` | Distributed sparse matrix |
 
-The `AV` and `AM` type parameters specify the underlying storage (`Vector{T}`/`Matrix{T}` for CPU, `MtlVector{T}`/`MtlMatrix{T}` for Metal GPU).
+The `B<:HPCBackend` type parameter specifies the backend configuration (device, communication, solver). Use pre-constructed backends like `BACKEND_CPU_MPI` or factory functions like `backend_cuda_mpi(comm)`.
 
-### MPI to Native Conversions
+### Distributed to Native Conversions
 
-| MPI Type | Native Type | Function |
-|----------|-------------|----------|
-| `HPCVector{T,AV}` | `Vector{T}` | `Vector(v)` |
-| `HPCMatrix{T,AM}` | `Matrix{T}` | `Matrix(A)` |
-| `HPCSparseMatrix{T,Ti,AV}` | `SparseMatrixCSC{T,Ti}` | `SparseMatrixCSC(A)` |
+| Distributed Type | Native Type | Function |
+|------------------|-------------|----------|
+| `HPCVector{T,B}` | `Vector{T}` | `Vector(v)` |
+| `HPCMatrix{T,B}` | `Matrix{T}` | `Matrix(A)` |
+| `HPCSparseMatrix{T,Ti,B}` | `SparseMatrixCSC{T,Ti}` | `SparseMatrixCSC(A)` |
 
 ## Supported Operations
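A round-trip sketch for the conversion tables in the diff above. The distributed-to-native direction (`Vector(v)`, `Matrix(A)`, `SparseMatrixCSC(A)`) is documented in the tables; the native-to-distributed direction uses `to_backend`, whose call signature is an assumption based on the exported name in the new `@docs` list, not a documented API:

```julia
# Round-trip conversion sketch; to_backend's argument order is assumed.
using MPI
MPI.Init()
using HPCLinearAlgebra
using SparseArrays

x  = rand(100)                          # native Vector{Float64}
xd = to_backend(x, BACKEND_CPU_MPI)     # hypothetical: HPCVector{Float64,B}
y  = Vector(xd)                         # documented: back to native Vector{Float64}

A  = sprand(100, 100, 0.01)             # native SparseMatrixCSC{Float64,Int}
Ad = to_backend(A, BACKEND_CPU_MPI)     # hypothetical: HPCSparseMatrix
A2 = SparseMatrixCSC(Ad)                # documented: back to native sparse matrix
```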
