> [llama.cpp](https://github.com/ggml-org/llama.cpp) (GGUF) — C/C++, runs on Mac (Metal) and Linux/Windows (CUDA). The README claims CPU support, but I can't find any info on a CPU-only build for this fork. Maybe CPU-only isn't on the roadmap?