Summary
Split from #332 — add ChatGLM causal LM model support only.
Tasks
- Add ChatGLM model config adapter (`csrc/models/chatglm/`)
- Extend `rank_worker.cpp` condition to include `"chatglm"`
- Register `"chatglm"` in `config_factory.cpp` classic_models list
- Register `"chatglm"` in `python/infinilm/auto_config.py`
- Add ChatGLM weight remapping (key rename + QKV/FFN split) in `modeling_utils.py`
- Update `examples/jiuge.py` for ChatGLM tokenizer special token handling
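For the weight-remapping task, a rough sketch of the QKV/FFN split is below. This is an illustrative assumption, not the repo's actual `modeling_utils.py` API: the helper names (`split_fused_qkv`, `split_fused_ffn`) are hypothetical, and it assumes ChatGLM's usual fused layouts (Q rows, then K, then V in the attention projection; gate and up halves stacked in the MLP projection).

```python
import numpy as np

def split_fused_qkv(fused, num_heads, num_kv_heads, head_dim):
    """Split a fused QKV projection weight into separate Q, K, V.

    Assumes the ChatGLM-style grouped-query layout: `fused` has shape
    [(num_heads + 2 * num_kv_heads) * head_dim, hidden], with Q rows
    first, then K, then V. (Hypothetical helper for illustration.)
    """
    q_rows = num_heads * head_dim
    kv_rows = num_kv_heads * head_dim
    q = fused[:q_rows]
    k = fused[q_rows:q_rows + kv_rows]
    v = fused[q_rows + kv_rows:]
    return q, k, v

def split_fused_ffn(fused):
    """Split a fused gate/up MLP weight into its two stacked halves.

    Assumes the gate half occupies the first half of the rows.
    (Hypothetical helper for illustration.)
    """
    half = fused.shape[0] // 2
    return fused[:half], fused[half:]
```

The key-rename step would then map each source checkpoint key to the target name and route fused tensors through these splitters before writing the remapped state dict.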
Depends on: #349 (GLM4)
Parent issue: #332