
ComfyUI-llama-cpp

Run LLM/VLM models natively in ComfyUI, powered by llama.cpp
[📃 Chinese version]

Changelog

2025-11-03

  • Initial release with support for Qwen3-VL

Preview

Installation

Install the node:

cd ComfyUI/custom_nodes
git clone https://github.com/lihaoyun6/ComfyUI-llama-cpp.git
python -m pip install -r ComfyUI-llama-cpp/requirements.txt

Download models:

  • Place your model files in the ComfyUI/models/LLM folder.

If you want to use a VLM to process image input, also download the matching mmproj (multimodal projector) weights and place them alongside the model.
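To sanity-check the model folder layout described above, the small sketch below lists the GGUF files in a directory and separates the main model weights from any mmproj projector files. The function name and the "mmproj" filename convention are assumptions based on common GGUF naming, not part of this node's API:

```python
from pathlib import Path

def find_models(models_dir):
    """Return (weights, mmproj) GGUF files found in models_dir.

    Files whose name contains "mmproj" are treated as multimodal
    projector weights; everything else is treated as main model weights.
    """
    files = sorted(Path(models_dir).glob("*.gguf"))
    mmproj = [f for f in files if "mmproj" in f.name.lower()]
    weights = [f for f in files if f not in mmproj]
    return weights, mmproj
```

For example, `find_models("ComfyUI/models/LLM")` should list your downloaded model in `weights` and, for a VLM, its projector in `mmproj`.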

Credits