Commit eac40d6

docs: add Windows HIP build instructions and troubleshooting

Signed-off-by: Khushi <khushisingh82072@gmail.com>
Parent: 365877d

1 file changed: docs/contribute/source/plugin/wasi_nn.md (22 additions, 0 deletions)
@@ -327,6 +327,13 @@ Then developers can build by following the steps.
 & "C:\Program files\CMake\bin\cmake.exe" -Bbuild -GNinja -DCMAKE_BUILD_TYPE=Release -DWASMEDGE_PLUGIN_WASI_NN_BACKEND=ggml -DWASMEDGE_USE_LLVM=OFF .
 & "<the ninja-build folder>\ninja.exe" -C build
 ```
+- If you want to enable HIP (AMD GPU):
+
+```console
+# HIP ENABLE:
+& "C:\Program files\CMake\bin\cmake.exe" -Bbuild -GNinja -DCMAKE_BUILD_TYPE=Release -DWASMEDGE_PLUGIN_WASI_NN_BACKEND=ggml -DWASMEDGE_PLUGIN_WASI_NN_GGML_LLAMA_HIP=ON -DWASMEDGE_USE_LLVM=OFF .
+& "<the ninja-build folder>\ninja.exe" -C build
+```
 
 #### Execute the WASI-NN plugin with the llama example on Windows
 
@@ -344,6 +351,21 @@ Then developers can build by following the steps.
 wget https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF/blob/main/Meta-Llama-3-8B-Instruct.Q5_K_M.gguf
 wasmedge --dir .:. --env llama3=true --env n_gpu_layers=100 --nn-preload default:GGML:AUTO:Meta-Llama-3-8B-Instruct.Q5_K_M.gguf wasmedge-ggml-llama.wasm default
 ```
+#### Troubleshooting: AMD Radeon Integrated Graphics (Windows)
+
+If you are building the GGML backend on Windows with an integrated AMD GPU (e.g., Radeon 780M / gfx1103) and encounter `rocBLAS` or initialization errors, you may need the following workarounds:
+
+1. **Set the Architecture Override:**
+   The RDNA 3 integrated graphics require an override to match the available ROCm kernels.
+   ```powershell
+   $env:HSA_OVERRIDE_GFX_VERSION = "11.0.0"
+   ```
+
+2. **Manual Library Linking (ROCm 6.x):** The ROCm installer on Windows may not create the necessary symlinks for `gfx1103`. If you see errors related to missing `TensileLibrary` files:
+
+   * Navigate to your ROCm installation folder (e.g., `C:\Program Files\AMD\ROCm\6.x\bin\rocblas\library`).
+   * Copy the `gfx1100` files and rename them to `gfx1103`.
+   * Example: Copy `TensileLibrary_lazy_gfx1100.dat` -> `TensileLibrary_lazy_gfx1103.dat`
 
 ### Appendix for llama.cpp backend
 
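The copy-and-rename workaround in troubleshooting step 2 can be sketched as a small shell loop. This is a sketch only: the `ROCBLAS_LIB` variable and the temp-dir fallback are assumptions introduced here so the snippet can run anywhere; on a real machine, point `ROCBLAS_LIB` at your actual ROCm `rocblas\library` folder instead.

```shell
# Sketch of the gfx1100 -> gfx1103 copy/rename workaround described above.
# ROCBLAS_LIB is an assumption; on a real install set it to something like
#   /c/Program Files/AMD/ROCm/6.x/bin/rocblas/library   (Git Bash path form),
# otherwise the sketch stages a demo directory so it can run anywhere.
LIB="${ROCBLAS_LIB:-$(mktemp -d)}"

# Stand-in for a real Tensile file so the sketch is self-contained; on a real
# ROCm install the gfx1100 files already exist and this line is unnecessary.
touch "$LIB/TensileLibrary_lazy_gfx1100.dat"

# Copy every gfx1100 library file to a gfx1103-named twin.
for f in "$LIB"/*gfx1100*; do
  cp "$f" "${f/gfx1100/gfx1103}"
done

ls "$LIB"
```

On an actual ROCm 6.x installation this needs an elevated prompt, since `C:\Program Files` is write-protected; the renamed copies let rocBLAS locate kernel files for the otherwise-unsupported `gfx1103` target.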