Error loading a model converted by rkllama/converter/converter.py #99

@HonestQiao

Description
  • PC: x86_64, Ubuntu 24.04
export RKNN_LLM_DIR=~/Projects/rk3576/rknn-llm
export MODELSCOPE_DIR=~/Projects/rk3576/Model-Scope/
export RKLLM_MODEL_DIR=~/Projects/rk3576/models
export RKLLAMA_DIR=~/Projects/rk3576/rkllama

cd ~/Projects
git clone https://github.com/NotPunchnox/rkllama.git
cd rkllama

model_group=deepseek-ai
model_name="DeepSeek-R1-Distill-Qwen-1.5B"

modelscope download --model $model_group/$model_name --local_dir $MODELSCOPE_DIR/$model_group/$model_name

cd $RKLLAMA_DIR/converter
pip install -r requirements.txt
python converter.py $MODELSCOPE_DIR/$model_group/$model_name \
    --output-dir $RKLLAMA_DIR/models/ \
    --max-context-len 4096 \
    --dtype "float16" \
    --device "cpu"

ls -lh $RKLLAMA_DIR/models/$model_name
* output:
-rw-rw-r-- 1 honestqiao 848M Dec 19 23:39 DeepSeek-R1-Distill-Qwen-1.5B.rkllm
-rw-rw-r-- 1 honestqiao  338 Dec 19 23:39 metadata.json
-rw-rw-r-- 1 honestqiao  207 Dec 19 23:39 Modelfile
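
metadata.json is tiny (338 bytes) and records whatever converter.py emitted next to the model, so pretty-printing it before copying anything is a cheap way to confirm the conversion settings landed as intended. Its exact fields depend on converter.py, so this just formats whatever is there:

# pretty-print the converter's metadata on the PC
python3 -m json.tool $RKLLAMA_DIR/models/$model_name/metadata.json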

Then scp the files from the PC to the RK3576.
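Something along these lines (the board address is taken from the server log below; the user and destination path are assumptions based on the paths in the config dump, adjust as needed):

# copy the whole converted model directory to the board
scp -r $RKLLAMA_DIR/models/$model_name \
    honestqiao@192.168.1.197:~/Projects/rk3576/rkllama/models/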

  • RK3576, Armbian 25.11.2
export RKNN_LLM_DIR=~/Projects/rk3576/rknn-llm
export MODELSCOPE_DIR=~/Projects/rk3576/Model-Scope/
export RKLLM_MODEL_DIR=~/Projects/rk3576/models
export RKLLAMA_DIR=~/Projects/rk3576/rkllama

cd ~/Projects
git clone https://github.com/NotPunchnox/rkllama.git
cd rkllama
pip install .

model_group=deepseek-ai
model_name="DeepSeek-R1-Distill-Qwen-1.5B"

rkllama_server --debug --models $(pwd)/models/
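
With the server up, a quick sanity check that it can see the converted model directory; the /models route shows up in the access log below, so on the board (or from the PC via the LAN address) something like:

# list the models the server has discovered
curl http://localhost:8080/models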
  • PC:
rkllama_client load DeepSeek-R1-Distill-Qwen-1.5B/DeepSeek-R1-Distill-Qwen-1.5B
* PC output:
Error loading model: 400 - Model directory 'DeepSeek-R1-Distill-Qwen-1.5B/DeepSeek-R1-Distill-Qwen-1.5B' not found.
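The nested name looks like the immediate cause of this first 400: converter.py wrote the model into models/DeepSeek-R1-Distill-Qwen-1.5B/, so presumably a retry with just the directory name is what produced the 23:45:49 load attempt in the server log below:

rkllama_client load DeepSeek-R1-Distill-Qwen-1.5B

That attempt gets past the directory lookup but still fails inside the RKLLM runtime ("invalid rkllm model!").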
* RK3576 output:
2025-12-19 23:45:27,704 - rkllama.worker - INFO - Models Monitor running.
Debug mode enabled
2025-12-19 23:45:27,747 - rkllama.config - INFO - Current RKLLAMA Configuration:
2025-12-19 23:45:27,748 - rkllama.config - INFO - [server]
2025-12-19 23:45:27,748 - rkllama.config - INFO -   port = 8080
2025-12-19 23:45:27,749 - rkllama.config - INFO -   host = 0.0.0.0
2025-12-19 23:45:27,749 - rkllama.config - INFO -   debug = True
2025-12-19 23:45:27,749 - rkllama.config - INFO - [paths]
2025-12-19 23:45:27,750 - rkllama.config - INFO -   models = /home/honestqiao/Projects/rk3576/rkllama/models/
2025-12-19 23:45:27,750 - rkllama.config - INFO -   logs = logs
2025-12-19 23:45:27,751 - rkllama.config - INFO -   data = data
2025-12-19 23:45:27,751 - rkllama.config - INFO -   src = src
2025-12-19 23:45:27,751 - rkllama.config - INFO -   lib = lib
2025-12-19 23:45:27,752 - rkllama.config - INFO -   temp = temp
2025-12-19 23:45:27,752 - rkllama.config - INFO - [model]
2025-12-19 23:45:27,752 - rkllama.config - INFO -   default =
2025-12-19 23:45:27,753 - rkllama.config - INFO -   default_temperature = 0.5
2025-12-19 23:45:27,753 - rkllama.config - INFO -   default_enable_thinking = False
2025-12-19 23:45:27,753 - rkllama.config - INFO -   default_num_ctx = 16384
2025-12-19 23:45:27,754 - rkllama.config - INFO -   default_max_new_tokens = 16384
2025-12-19 23:45:27,754 - rkllama.config - INFO -   default_top_k = 7
2025-12-19 23:45:27,754 - rkllama.config - INFO -   default_top_p = 0.5
2025-12-19 23:45:27,754 - rkllama.config - INFO -   default_repeat_penalty = 1.1
2025-12-19 23:45:27,755 - rkllama.config - INFO -   default_frequency_penalty = 0.0
2025-12-19 23:45:27,755 - rkllama.config - INFO -   default_presence_penalty = 0.0
2025-12-19 23:45:27,755 - rkllama.config - INFO -   default_mirostat = 0
2025-12-19 23:45:27,755 - rkllama.config - INFO -   default_mirostat_tau = 3
2025-12-19 23:45:27,756 - rkllama.config - INFO -   default_mirostat_eta = 0.1
2025-12-19 23:45:27,756 - rkllama.config - INFO -   max_minutes_loaded_in_memory = 30
2025-12-19 23:45:27,756 - rkllama.config - INFO -   max_number_models_loaded_in_memory = 10
2025-12-19 23:45:27,757 - rkllama.config - INFO - [platform]
2025-12-19 23:45:27,757 - rkllama.config - INFO -   processor = rk3588
Start the API at http://localhost:8080
 * Serving Flask app 'rkllama.server.server'
 * Debug mode: on
2025-12-19 23:45:27,776 - werkzeug - INFO - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://192.168.1.197:8080
2025-12-19 23:45:27,777 - werkzeug - INFO - Press CTRL+C to quit
2025-12-19 23:45:27,780 - werkzeug - INFO -  * Restarting with stat
2025-12-19 23:45:40,950 - rkllama.worker - INFO - Models Monitor running.
2025-12-19 23:45:40,992 - rkllama.config - DEBUG - Generated shell configuration: /home/honestqiao/miniforge3/envs/rkllama/lib/python3.11/site-packages/rkllama/config/config/config.env
2025-12-19 23:45:40,994 - rkllama.config - DEBUG - Generated shell configuration: /home/honestqiao/miniforge3/envs/rkllama/lib/python3.11/site-packages/rkllama/config/config/config.env
Debug mode enabled
2025-12-19 23:45:40,996 - rkllama.config - INFO - Current RKLLAMA Configuration:
2025-12-19 23:45:40,996 - rkllama.config - INFO - [server]
2025-12-19 23:45:40,997 - rkllama.config - INFO -   port = 8080
2025-12-19 23:45:40,997 - rkllama.config - INFO -   host = 0.0.0.0
2025-12-19 23:45:40,998 - rkllama.config - INFO -   debug = True
2025-12-19 23:45:40,998 - rkllama.config - INFO - [paths]
2025-12-19 23:45:40,998 - rkllama.config - INFO -   models = /home/honestqiao/Projects/rk3576/rkllama/models/
2025-12-19 23:45:40,999 - rkllama.config - INFO -   logs = logs
2025-12-19 23:45:40,999 - rkllama.config - INFO -   data = data
2025-12-19 23:45:40,999 - rkllama.config - INFO -   src = src
2025-12-19 23:45:40,999 - rkllama.config - INFO -   lib = lib
2025-12-19 23:45:41,000 - rkllama.config - INFO -   temp = temp
2025-12-19 23:45:41,000 - rkllama.config - INFO - [model]
2025-12-19 23:45:41,000 - rkllama.config - INFO -   default =
2025-12-19 23:45:41,000 - rkllama.config - INFO -   default_temperature = 0.5
2025-12-19 23:45:41,001 - rkllama.config - INFO -   default_enable_thinking = False
2025-12-19 23:45:41,001 - rkllama.config - INFO -   default_num_ctx = 16384
2025-12-19 23:45:41,001 - rkllama.config - INFO -   default_max_new_tokens = 16384
2025-12-19 23:45:41,002 - rkllama.config - INFO -   default_top_k = 7
2025-12-19 23:45:41,002 - rkllama.config - INFO -   default_top_p = 0.5
2025-12-19 23:45:41,002 - rkllama.config - INFO -   default_repeat_penalty = 1.1
2025-12-19 23:45:41,002 - rkllama.config - INFO -   default_frequency_penalty = 0.0
2025-12-19 23:45:41,002 - rkllama.config - INFO -   default_presence_penalty = 0.0
2025-12-19 23:45:41,003 - rkllama.config - INFO -   default_mirostat = 0
2025-12-19 23:45:41,003 - rkllama.config - INFO -   default_mirostat_tau = 3
2025-12-19 23:45:41,003 - rkllama.config - INFO -   default_mirostat_eta = 0.1
2025-12-19 23:45:41,003 - rkllama.config - INFO -   max_minutes_loaded_in_memory = 30
2025-12-19 23:45:41,003 - rkllama.config - INFO -   max_number_models_loaded_in_memory = 10
2025-12-19 23:45:41,004 - rkllama.config - INFO - [platform]
2025-12-19 23:45:41,004 - rkllama.config - INFO -   processor = rk3588
Start the API at http://localhost:8080
2025-12-19 23:45:41,019 - werkzeug - WARNING -  * Debugger is active!
2025-12-19 23:45:41,021 - werkzeug - INFO -  * Debugger PIN: 871-707-978
2025-12-19 23:45:43,078 - werkzeug - INFO - 127.0.0.1 - - [19/Dec/2025 23:45:43] "GET / HTTP/1.1" 200 -
2025-12-19 23:45:43,092 - werkzeug - INFO - 127.0.0.1 - - [19/Dec/2025 23:45:43] "GET /models HTTP/1.1" 200 -
2025-12-19 23:45:48,992 - werkzeug - INFO - 127.0.0.1 - - [19/Dec/2025 23:45:48] "GET / HTTP/1.1" 200 -
FROM: DeepSeek-R1-Distill-Qwen-1.5B.rkllm
HuggingFace Path: /home/honestqiao/Projects/rk3576/Model-Scope//deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
2025-12-19 23:45:49,052 - rkllama.rkllm - DEBUG - Initializing RKLLM model from /home/honestqiao/Projects/rk3576/rkllama/models/DeepSeek-R1-Distill-Qwen-1.5B/DeepSeek-R1-Distill-Qwen-1.5B.rkllm with options: {'temperature': '0.7', 'num_ctx': '16384', 'max_new_tokens': '16384', 'top_k': '7', 'top_p': '0.5', 'repeat_penalty': '1.1', 'frequency_penalty': '0.0', 'presence_penalty': '0.0', 'mirostat': '0', 'mirostat_tau': '3', 'mirostat_eta': '0.1', 'from': '"DeepSeek-R1-Distill-Qwen-1.5B.rkllm"', 'huggingface_path': '"/home/honestqiao/Projects/rk3576/Model-Scope//deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"', 'system': '"You are a helpful AI assistant."'}
I rkllm: rkllm-runtime version: 1.2.3, rknpu driver version: 0.9.8, platform: RK3576
I rkllm: loading rkllm model from /home/honestqiao/Projects/rk3576/rkllama/models/DeepSeek-R1-Distill-Qwen-1.5B/DeepSeek-R1-Distill-Qwen-1.5B.rkllm
E rkllm: invalid rkllm model!
2025-12-19 23:45:49,067 - rkllama.worker - ERROR - Failed creating the worker for model 'DeepSeek-R1-Distill-Qwen-1.5B': Failed to initialize RKLLM model: -1
2025-12-19 23:45:49,081 - werkzeug - INFO - 127.0.0.1 - - [19/Dec/2025 23:45:49] "POST /load_model HTTP/1.1" 400 -
2025-12-19 23:45:58,038 - werkzeug - INFO - 127.0.0.1 - - [19/Dec/2025 23:45:58] "GET / HTTP/1.1" 200 -
2025-12-19 23:45:58,049 - werkzeug - INFO - 127.0.0.1 - - [19/Dec/2025 23:45:58] "POST /load_model HTTP/1.1" 400 -
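
Two details in this log may be worth checking. First, the [platform] section reports processor = rk3588 while the runtime detects RK3576 (rkllm-runtime 1.2.3, rknpu driver 0.9.8); if the converter targeted the wrong SoC, that could plausibly produce "invalid rkllm model!". Second, a truncated scp transfer would fail the same way. A minimal sketch for ruling both out, using only standard tools and the paths above:

# on the PC: checksum of the file that was converted
md5sum $RKLLAMA_DIR/models/$model_name/$model_name.rkllm

# on the RK3576: compare the checksum, then confirm the actual SoC
md5sum ~/Projects/rk3576/rkllama/models/DeepSeek-R1-Distill-Qwen-1.5B/DeepSeek-R1-Distill-Qwen-1.5B.rkllm
tr '\0' '\n' < /proc/device-tree/compatible

If the checksums match, the mismatch between the configured rk3588 and the detected RK3576 is the next thing to look at on the conversion side.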
