Run LLM/VLM models natively in ComfyUI based on llama.cpp
[📃中文版]
```
cd ComfyUI/custom_nodes
git clone https://github.com/lihaoyun6/ComfyUI-llama-cpp.git
python -m pip install -r ComfyUI-llama-cpp/requirements.txt
```
Place your model files in the `ComfyUI/models/LLM` folder. If you need a VLM model to process image input, don't forget to download the matching `mmproj` weights.
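For example, a VLM setup needs both the main GGUF weights and the matching `mmproj` file in the same folder. A minimal sketch (the model file names below are illustrative, not files shipped by this project):

```shell
# Create the model folder this node pack reads from (run from the directory containing ComfyUI/).
mkdir -p ComfyUI/models/LLM

# Both files go in that one folder, e.g. (illustrative names):
#   ComfyUI/models/LLM/qwen2-vl-2b-instruct-q4_k_m.gguf     <- main LLM/VLM weights
#   ComfyUI/models/LLM/mmproj-qwen2-vl-2b-f16.gguf          <- vision projector (mmproj)
```

After restarting ComfyUI, the models placed there should appear in the node's model selector.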
- llama-cpp-python @JamePeng
- ComfyUI-llama-cpp @kijai
- ComfyUI @comfyanonymous
