feat: support chunked prefill and prefill CUDA graph (#371)
Simon12345777 wants to merge 2 commits.

wooway777 reviewed on May 12, 2026:
```diff
@@ -1,12 +1,13 @@
 #pragma once

+#include "chunk_prefill_compiler.hpp"
```
Summary
- C++ side: introduce the chunked-prefill compilation path. Add `csrc/engine/compiler/chunk_prefill_compiler.{hpp,cpp}`; `GeneralCompiler` assembles a `ChunkPrefillCompiler` and matches it before decode in `get_compiled()`; a new `enable_chunk_prefill_graph` switch is threaded through `InferEngine` → `RankWorker` → `GeneralCompiler`.
- pybind11 and the Python `InferEngine` wrapper layer (`csrc/pybind11/engine/engine.hpp`, `python/infinilm/infer_engine.py`) pass `enable_chunk_prefill_graph` through to the underlying engine.
- Script-layer port: move the chunked-prefill scheduling logic of `infer_task.py` and `launch_server.py` from `InfiniGraph/InfiniLM/scripts` (`setup_chunked_prefill` / `advance_prefill_chunk`, `--chunk-size`, the priority-based `worker_loop`) as-is into `hds/InfiniLM/scripts/`, keeping hds's existing `KVCache.init` signature for compatibility with the `create_kv_cache()` of `jiuge*` / `deepseek` / `qwen3vl`.
- Python server-side chunked-prefill scheduling (wired to `enable_chunk_prefill_graph`):
  - `llm/request.py`: add `chunk_size`, `chunk_prefill_offset`, and `is_chunking()` / `chunk_is_last()` to `InferenceRequest`.
  - `llm/scheduler.py`: add a `chunking_queue`; `schedule()` uses three priority levels: running (decode) > chunking (continuation chunks) > waiting (new requests). When a long prompt enters chunking it is returned as a single request with batch=1, matching the C++ side's precompiled `(batch_size, chunk_size)` graph signature.
  - `processors/basic_llm_processor.py`: the paged branch slices `input_ids` / `position_ids` / `slot_mapping` by `chunk_prefill_offset` / `chunk_size` and sets the correct `past_kv_lengths` / `total_kv_lengths`.
  - `llm/llm.py::_update_requests`: recognize intermediate chunk steps: do not consume the sampled token, do not trigger `reset_req_blocks`, only advance the offset and requeue to chunking; the last chunk takes the normal path and clears the chunk state.
- Config plumbing: `EngineConfig` / `LLM` / `LLMEngine` / `AsyncLLMEngine` / `InferenceServer` gain `chunk_size`; `BaseConfig` gains `--chunk-size` (default 512) and `--enable-chunk-prefill-graph`.
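A minimal Python sketch of the request chunk state and the three-level scheduling described above. The names `InferenceRequest`, `is_chunking`, `chunk_is_last`, and the queue priorities follow this PR's description; everything else is a simplified assumption, not the actual implementation:

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class InferenceRequest:
    # Sketch of the chunk state added to llm/request.py (fields simplified).
    prompt_len: int
    chunk_size: int = 0            # 0 = chunked prefill disabled
    chunk_prefill_offset: int = 0  # prompt tokens already prefilled

    def is_chunking(self) -> bool:
        # Still has prompt tokens left to prefill chunk by chunk.
        return self.chunk_size > 0 and self.chunk_prefill_offset < self.prompt_len

    def chunk_is_last(self) -> bool:
        # The next chunk finishes the prompt.
        return self.chunk_prefill_offset + self.chunk_size >= self.prompt_len


class Scheduler:
    """Three priority levels: running (decode) > chunking > waiting."""

    def __init__(self) -> None:
        self.running = deque()   # requests in the decode phase
        self.chunking = deque()  # requests mid-way through chunked prefill
        self.waiting = deque()   # new requests

    def schedule(self):
        if self.running:
            return list(self.running)        # decode the whole running batch
        if self.chunking:
            # batch=1 so the precompiled (batch_size, chunk_size) graph is hit
            return [self.chunking.popleft()]
        if self.waiting:
            return [self.waiting.popleft()]  # new request (first chunk if long)
        return []
```

Scheduling continuation chunks one at a time is what keeps the runtime shape fixed at `(1, chunk_size)`, so the same precompiled graph can serve every chunk of every long prompt.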
Motivation
The InfiniGraph branch already has the chunk-prefill graph-compilation path in csrc plus the matching chunked scheduling in `infer_task.py` / `launch_server.py`, while the hds branch only has traditional prefill. This PR moves the whole chain to hds: the lower layers gain `ChunkPrefillCompiler` and the `enable_chunk_prefill_graph` switch, and the new `inference_server.py` path implements chunked-prefill scheduling natively, so turning the switch on immediately hits the precompiled graphs. This:

- reduces transient GPU memory pressure on the way to the first token (long prompts no longer have to enter `forward` in one shot);
- reuses the C++ side's precompiled `(batch_size, chunk_size)` graphs, consistent with the paged KV cache.
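The per-chunk tensor slicing and the intermediate-chunk update step summarized above can be sketched as follows. Plain lists stand in for tensors, and `slice_chunk` / `advance_after_step` are hypothetical helper names, not the actual code in `basic_llm_processor.py` or `_update_requests`:

```python
def slice_chunk(input_ids, position_ids, slot_mapping, offset, chunk_size):
    """Sketch of the paged branch: take one chunk of the prompt and
    compute the KV lengths the attention kernel needs."""
    end = min(offset + chunk_size, len(input_ids))
    return {
        "input_ids": input_ids[offset:end],
        "position_ids": position_ids[offset:end],
        "slot_mapping": slot_mapping[offset:end],
        "past_kv_lengths": offset,  # tokens prefilled by earlier chunks
        "total_kv_lengths": end,    # tokens prefilled after this chunk
    }


def advance_after_step(req, prompt_len):
    """Sketch of the update step: intermediate chunks only advance the
    offset and requeue; the last chunk takes the normal path."""
    if req["chunk_prefill_offset"] + req["chunk_size"] < prompt_len:
        # Intermediate chunk: don't consume the sampled token,
        # don't release blocks, just advance and requeue.
        req["chunk_prefill_offset"] += req["chunk_size"]
        return "requeue_chunking"
    # Last chunk: clear the chunk state and continue normally.
    req["chunk_prefill_offset"] = 0
    req["chunk_size"] = 0
    return "normal"
```

Because `past_kv_lengths` starts at the chunk offset, each chunk attends to all previously written KV entries, which is what makes prefilling in pieces equivalent to one full prefill.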
Closes #
Type of Change
- [x] feat: new feature / new model
- [ ] fix: bug fix
- [ ] perf: performance improvement (no behavioral change)
- [ ] refactor: code restructuring without behavior change
- [ ] test: adding or fixing tests only
- [ ] docs: documentation only
- [ ] build/ci: build system or CI configuration
- [ ] chore: tooling, formatting, or other non-code changes

Test Results of Involved Models on Supported Platforms (Please attach screenshots)
Benchmark / Performance Impact
Notes for Reviewers
Checklist
Title, Branch, and Commits
- [ ] PR title uses a Conventional Commits type (e.g. `feat(nvidia): …`, `fix(cuda/gemm): …`).
- [ ] Branch is named `<type>/xxx-yyyy-zzzz`, where `<type>` matches the PR title's Conventional Commits type and words are joined with hyphens (see CONTRIBUTING.md §Branches).
- [ ] Commits follow CONTRIBUTING.md §Pull Requests.
- [ ] The branch is rebased cleanly on top of the current `main`.
- [ ] No `fixup!` / `squash!` / WIP commits remain.

Scope and Design
- [ ] … (CONTRIBUTING.md §Code/General).
- [ ] No debug `printf` / `std::cout` / `print(...)` left behind, and no `TODO` without an owner and issue link.

General Code Hygiene (applies to all languages)
- [ ] … (CONTRIBUTING.md §Code/General).
- [ ] … (CONTRIBUTING.md §Code/General).
- [ ] … the `seqlens_k` tensor (CONTRIBUTING.md §Code/General).
- [ ] … (CONTRIBUTING.md §Code/General).
- [ ] … (CONTRIBUTING.md §Code/General; §Python).

C++ Specific (if C++ files changed)
- [ ] … (CONTRIBUTING.md §C++).
- [ ] … (CONTRIBUTING.md §C++).
- [ ] No raw `new` / `delete`; RAII / smart pointers / existing allocators are used.
- [ ] Formatted with `scripts/format.py`.
- [ ] … `csrc/models/llama_legacy/`.

Python Specific (if Python files changed)
- [ ] … (CONTRIBUTING.md §Python).
- [ ] … (CONTRIBUTING.md §Python).
- [ ] Formatted with `scripts/format.py`.
- [ ] … `python/infinilm/auto_config.py`.

Testing
- [ ] … (`examples/test_infer.py`), or specify the reason for skipping.
- [ ] … (`examples/bench.py`), or specify the reason for skipping.
- [ ] … (`test/bench/test_benchmark.py`), or specify the reason for skipping.
- [ ] … (`python/infinilm/server/inference_server.py` + `scripts/test_perf.py`), or specify the reason for skipping.

Build, CI, and Tooling
Documentation
- [ ] `README.md`, `CONTRIBUTING.md`, or inline docs updated when behavior, build flags, or developer workflow changed.
- [ ] Breaking changes are marked with `!` or a `BREAKING CHANGE:` footer.

Security and Safety