# refactor: unify linear/quantization architecture and remove deprecate… #366
Status: **Open**. qinyiqun wants to merge 1 commit into `main` from `refactor/unify-linear-quantization`.
## Changes from all commits
**paged_compiler.cpp**

```diff
@@ -1,20 +1,7 @@
 #include "paged_compiler.hpp"
 #include "../../global_state/global_state.hpp"
 #include "../../utils.hpp"
 
-namespace {
-// Todo: replace with Tensor::zeros when it is available
-inline void set_zeros(infinicore::Tensor &tensor) {
-    std::vector<uint8_t> zeros(tensor->nbytes(), 0);
-    infinicore::context::memcpyH2D(tensor->data(), zeros.data(), tensor->nbytes(), false);
-}
-
-inline void set_minus_one(infinicore::Tensor &tensor) {
-    // For int32 tensors, 0xFF bytes correspond to -1 in two's complement.
-    std::vector<uint8_t> minus_one(tensor->nbytes(), 0xFF);
-    infinicore::context::memcpyH2D(tensor->data(), minus_one.data(), tensor->nbytes(), false);
-}
-} // namespace
 namespace infinilm::engine {
 PagedCompiler::PagedCompiler(const std::shared_ptr<InfinilmModel> &model, RankBarrier *barrier)
     : GraphCompiler(model, barrier) {
@@ -61,7 +48,6 @@ void PagedCompiler::compile() {
     const size_t block_per_req = nblocks;
     input.block_tables = block_tables_holder_->as_strided({b, block_per_req}, {(ptrdiff_t)block_per_req, 1});
     input.slot_mapping = infinicore::Tensor::empty({b}, infinicore::DataType::I64, infinicore::context::getDevice());
-    set_zeros(input.slot_mapping.value());
```
> **Collaborator:** Why was this line deleted? Is it no longer necessary to reset `input.slot_mapping`?
```diff
 
     // Attention reads attn_metadata from thread-local forward context.
     infinilm::global_state::get_forward_context().attn_metadata = {
```
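For context on the removed helpers: they worked by byte-filling device memory, since all-0x00 bytes read back as integer 0 and all-0xFF bytes read back as -1 in two's complement (as the removed comment notes for int32; the same holds for I64). A minimal, self-contained standard-C++ check of that byte-fill trick, with no infinicore dependency:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

int main() {
    // All-0x00 bytes reinterpret as integer 0.
    int64_t zero_val;
    unsigned char zeros[sizeof zero_val] = {}; // every byte 0x00
    std::memcpy(&zero_val, zeros, sizeof zero_val);
    assert(zero_val == 0);

    // All-0xFF bytes reinterpret as -1 in two's complement.
    int32_t minus_one;
    unsigned char ones[sizeof minus_one];
    std::memset(ones, 0xFF, sizeof ones); // every byte 0xFF
    std::memcpy(&minus_one, ones, sizeof minus_one);
    assert(minus_one == -1);
    return 0;
}
```

The removed `Todo` suggested switching to `Tensor::zeros` once it is available, which would replace the byte-fill for the zero case.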
**rank_worker.cpp**

```diff
@@ -10,46 +10,6 @@
 
 namespace infinilm::engine {
 
-/**
- * @deprecated This function is deprecated and will be REMOVED in the next major release (v0.2.0).
- *
- * ⚠️ DEVELOPMENT POLICY:
- * - NO new development or feature additions permitted on this interface
- * - Only critical bug fixes (security/stability) allowed until removal
- * - All new code MUST migrate to the polymorphic overload below
- *
- * Replacement: Use the polymorphic overload of this same function name with updated signature
- * Reason: Legacy signature lacks support for dynamic quantization modes.
- * Removal target: v0.2.0 (Q2 2026)
- */
-RankWorker::RankWorker(const InfinilmModel::Config &model_config,
-                       const distributed::RankInfo &rank_info,
-                       const cache::CacheConfig *cache_config,
-                       RankBarrier *barrier,
-                       bool enable_graph_compiling,
-                       backends::AttentionBackend attention_backend)
-    : legacy_model_config_(model_config),
-      rank_info_(rank_info),
-      attention_backend_(attention_backend),
-      enable_graph_compiling_(enable_graph_compiling),
-      job_cmd_(Command::INIT),
-      has_job_(false),
-      job_done_(false),
-      should_exit_(false),
-      init_done_(false),
-      rng_(std::random_device{}()),
-      barrier_(barrier) {
-    if (cache_config != nullptr) {
-        pending_cache_config_ = cache_config->unique_copy();
-    }
-    // start the thread
-    thread_ = std::thread(&RankWorker::thread_loop, this);
-
-    // Wait until the worker thread finishes initialization (model created)
-    std::unique_lock<std::mutex> lk(mutex_);
-    cv_.wait(lk, [&] { return init_done_; });
-}
-
 RankWorker::RankWorker(
     std::shared_ptr<infinilm::global_state::InfinilmConfig> infinilm_config,
     const distributed::RankInfo &rank_info,
@@ -269,15 +229,6 @@ void RankWorker::thread_loop() {
     infinilm::global_state::initialize_infinilm_config(infinilm_config_);
 
-    // Create model using factory (may be expensive)
-    if (model_config_ == nullptr) {
-        // model_ = InfinilmModelFactory::createModel(
-        //     legacy_model_config_,
-        //     rank_info_,
-        //     pending_cache_config_ != nullptr ? pending_cache_config_.get() : nullptr,
-        //     attention_backend_);
-        throw std::runtime_error("RankWorker::thread_loop(): the way of creating models using LlamaConfig is no longer supported !!!");
-    }
 
     const std::string &model_type = model_config_->get<std::string>("model_type");
     const auto &model_map = models::get_causal_lm_model_map();
     auto it = model_map.find(model_type);
@@ -287,16 +238,7 @@
             rank_info_.device,
             pending_cache_config_ != nullptr ? pending_cache_config_.get() : nullptr);
     } else {
-        std::vector<std::string> classic_models = {"llama", "qwen2", "minicpm", "fm9g", "fm9g7b"};
```
> **Collaborator:** Don't remove this classic_models code for now. If the lama_legacy folder is to be deleted, that should be done in a separate PR.
```diff
-        if ((std::find(classic_models.begin(), classic_models.end(), model_type) != classic_models.end())) {
-            model_ = InfinilmModelFactory::createModel(
-                model_config_,
-                rank_info_,
-                pending_cache_config_ != nullptr ? pending_cache_config_.get() : nullptr,
-                attention_backend_);
-        } else {
-            throw std::runtime_error("RankWorker::thread_loop(): Unsupported model config type: " + model_type);
-        }
+        throw std::runtime_error("RankWorker::thread_loop(): Unsupported model config type: " + model_type);
     }
 
     if (!model_) {
```
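With the legacy `classic_models` branch gone, model construction flows only through the causal-LM model map keyed by `model_type`. Below is a minimal sketch of that registry-dispatch pattern; the types and map contents are illustrative stand-ins, not the actual infinilm API:

```cpp
#include <functional>
#include <memory>
#include <stdexcept>
#include <string>
#include <unordered_map>

// Illustrative stand-ins for the real infinilm model types.
struct Model { virtual ~Model() = default; };
struct LlamaModel : Model {};

using ModelFactory = std::function<std::unique_ptr<Model>()>;

// Analogue of models::get_causal_lm_model_map(): model_type -> factory.
const std::unordered_map<std::string, ModelFactory> &get_model_map() {
    static const std::unordered_map<std::string, ModelFactory> map = {
        {"llama", [] { return std::make_unique<LlamaModel>(); }},
    };
    return map;
}

std::unique_ptr<Model> create_model(const std::string &model_type) {
    const auto &map = get_model_map();
    auto it = map.find(model_type);
    if (it == map.end()) {
        // Mirrors the single error path left in RankWorker::thread_loop().
        throw std::runtime_error("Unsupported model config type: " + model_type);
    }
    return it->second();
}

int main() {
    auto model = create_model("llama"); // dispatches through the registry
    try {
        create_model("unknown");        // falls into the unified error path
    } catch (const std::runtime_error &) { /* expected */ }
}
```

The design benefit of this unification: adding a model only requires registering a factory in the map, rather than editing branch logic inside the worker.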
**Review discussion on KV-cache allocation**

> Does the KV cache need to be zeroed? Is that a requirement of some particular platform?

> Domestic chips can have dirty memory: malloc'd memory that is not set to zero still contains the previous data.

> There are four functions in kv_cache.cpp that allocate the KV cache, so all four of them will need this added.
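A minimal sketch of the zero-on-allocation fix being requested, assuming the allocation sites use `Tensor::empty` as in the diff above. The helper mirrors the `set_zeros` this PR removed, and the function name, shape, and dtype below are hypothetical placeholders, not the real kv_cache.cpp code:

```cpp
// Sketch only, assuming the infinicore headers used elsewhere in this PR.
#include <cstdint>
#include <vector>

namespace {
// Same helper this PR removed from paged_compiler.cpp: fill device
// memory with zero bytes so a dirty allocation never leaks stale data.
inline void set_zeros(infinicore::Tensor &tensor) {
    std::vector<uint8_t> zeros(tensor->nbytes(), 0);
    infinicore::context::memcpyH2D(tensor->data(), zeros.data(), tensor->nbytes(), false);
}
} // namespace

// Hypothetical allocation site; each of the four kv_cache.cpp functions
// would follow this pattern after its Tensor::empty call.
infinicore::Tensor alloc_zeroed_cache() {
    auto cache = infinicore::Tensor::empty({64, 16},                // placeholder shape
                                           infinicore::DataType::I64, // placeholder dtype
                                           infinicore::context::getDevice());
    set_zeros(cache); // zero immediately, before any kernel reads the cache
    return cache;
}
```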