Schematize config loading and quantizer config entries #1405
shengliangxu wants to merge 13 commits into main from …
Conversation
📝 Walkthrough
This PR modernizes the quantization configuration system by converting TypedDict schemas to Pydantic models, implementing type-safe config loading with schema-backed overloads, and broadening type checks from `dict` to `Mapping`/`MutableMapping` protocols.
Changes: Quantization Configuration Schema Modernization
🎯 4 (Complex) | ⏱️ ~75 minutes
Important: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.
❌ Failed checks (1 error)
✅ Passed checks (5 passed)
Codecov Report
❌ Patch coverage is … Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1405      +/-   ##
==========================================
- Coverage   77.27%   77.23%   -0.04%
==========================================
  Files         478      478
  Lines       51404    51529     +125
==========================================
+ Hits        39723    39799      +76
- Misses      11681    11730      +49
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Force-pushed from 066c2c8 to 8e8153a (Compare)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
modelopt/torch/quantization/config.py (1)
1093-1114: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Preserve list-valued `cfg` for legacy `nn.*`-scoped entries.
This branch unconditionally does `dict(sub_cfg)` for non-typed sub-configs, which blows up on sequential configs like `{"nn.Linear": {"*weight_quantizer": [{...}, {...}]}}`. The non-scoped legacy path still accepts list-valued cfgs, so this regresses class-scoped legacy configs only.
Suggested fix
```diff
 for q_path, sub_cfg in value.items():
     if isinstance(sub_cfg, QuantizerAttributeConfig):
         enable = None
         cfg = sub_cfg
-    else:
+    elif isinstance(sub_cfg, Mapping):
         sub_cfg = dict(sub_cfg)
         enable = sub_cfg.pop("enable", None)
         cfg = sub_cfg or None
+    else:
+        enable = None
+        cfg = sub_cfg
     entry: dict[str, Any] = {
         "parent_class": key,
         "quantizer_name": q_path,
         "cfg": cfg,
     }
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@modelopt/torch/quantization/config.py` around lines 1093 - 1114, The nn.* branch currently does unguarded dict(sub_cfg) which breaks when a sub_cfg is a list (legacy sequential configs); in the loop inside the key.startswith("nn.") branch (variables: q_path, sub_cfg, entries, entry, parent_class, quantizer_name, cfg, enable, QuantizerAttributeConfig) change the logic so you only coerce sub_cfg to dict when it's a Mapping (e.g., isinstance(sub_cfg, Mapping)); for non-Mapping values (not a QuantizerAttributeConfig and not a Mapping) preserve sub_cfg as-is (so list-valued cfgs remain lists), then extract enable only from Mapping-based dicts and set cfg accordingly before building and appending each entry returned by this branch.

modelopt/torch/quantization/nn/modules/tensor_quantizer.py (1)
1431-1437: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Validate attribute-list length before applying sequential configs.
At line 1435, `zip(attributes, self)` silently drops extras and leaves trailing quantizers unchanged when lengths differ. This should fail fast to prevent partial/implicit config application.
Proposed fix
```diff
 if not isinstance(attributes, (list, tuple)):
     assert isinstance(attributes, Mapping), "attributes must be a list or a mapping."
     attributes = [attributes] * len(self)
+elif len(attributes) != len(self):
+    raise ValueError(
+        f"Expected {len(self)} attribute configs, but got {len(attributes)}."
+    )
 for attribute, quantizer in zip(attributes, self):
     quantizer.set_from_attribute_config(attribute)
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@modelopt/torch/quantization/nn/modules/tensor_quantizer.py` around lines 1431 - 1437, When applying a sequence of attribute configs in tensor_quantizer.py, validate that a provided attributes list/tuple has the same length as the quantizer sequence (self) before zipping; if lengths differ raise a clear exception (e.g., ValueError) stating the expected and actual lengths so the call to set_from_attribute_config cannot silently skip trailing quantizers. Specifically, in the method containing the current attributes handling and the loop that calls set_from_attribute_config, add a length check for isinstance(attributes, (list, tuple)) and raise on mismatch.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@examples/llm_ptq/example_utils.py`:
- Around line 252-255: The phi4mm path is reverting typed QuantizeConfig entries
to plain dicts by appending raw dicts to quant_cfg_obj["quant_cfg"]; instead
append properly constructed QuantizerCfgEntry objects so the config remains
fully typed. Locate the code that mutates quant_cfg_obj (the block adding
entries for "*speech*", "*audio*", "*image*", "*vision*") and replace those dict
appends with creation of QuantizerCfgEntry instances (matching the type used
elsewhere) and append those instances; ensure build_quant_cfg() returns a
QuantizeConfig with quant_cfg containing only QuantizerCfgEntry objects on the
phi4mm path.
In `@examples/vllm_serve/vllm_ptq_utils.py`:
- Around line 122-123: The generator expression scanning kv_quant_cfg calls
e.get("quantizer_name") without ensuring e is a mapping; modify the predicate
used where kv_quant_cfg is scanned (the generator/filter that looks for
"quantizer_name" == "*[kv]_bmm_quantizer") to first check the entry type (e.g.,
isinstance(e, dict) or isinstance(e, collections.abc.Mapping) or hasattr(e,
"get")) before calling .get, so malformed/non-mapping entries are skipped and do
not raise an AttributeError.
In `@modelopt/torch/opt/config.py`:
- Around line 120-133: Modify __delitem__ to adhere to MutableMapping semantics
by converting AttributeError into KeyError when resolving the mapping key and
when refusing to unset a field with no default: wrap the call to
get_field_name_from_key(key) so any AttributeError raised is caught and
re-raised as KeyError(key), and replace the final raise AttributeError(...) that
checks for PydanticUndefined with raise KeyError(key) (including a descriptive
message if desired); keep the existing logic for handling model_extra,
model_fields_set, and using field_info.get_default(call_default_factory=True)
and PydanticUndefined.
In `@modelopt/torch/quantization/config.py`:
- Around line 552-574: The validator method validate_quantizer_cfg_entry
currently treats explicit None for the 'cfg' or 'enable' fields as if they were
omitted, allowing entries like {"quantizer_name":"*","enable":None} or
{"quantizer_name":"*","cfg":None} to behave as enabled; change the logic to
reject explicit nulls: if 'cfg' in values and values['cfg'] is None raise
ValueError("cfg must be omitted or a valid mapping/list, not null"), and if
'enable' in values and values['enable'] is None raise ValueError("enable must be
a boolean when provided, not null"); preserve existing behavior for omitted keys
and keep calling cls._validate_enabled_cfg(cfg) only when enable is truthy and
cfg is not None; apply the same null-rejection checks to the analogous validator
block referenced at lines 1157-1197.
In `@modelopt/torch/quantization/conversion.py`:
- Line 252: The code forces legacy dict-shaped quant_cfg through list() before
calling normalize_quant_cfg_list, which converts {"*": {"enable": False}} into
["*"] and breaks backward-compatible handling in normalize_quant_cfg_list and
downstream set_quantizer_by_cfg / set_quantizer_by_cfg_context; fix by removing
the list(...) wrapper and pass quant_cfg directly into normalize_quant_cfg_list
(and apply the same change for the other occurrence around line 499) so the
function can accept both legacy flat dicts and list forms (a short demonstration of this failure mode follows this list).
In `@modelopt/torch/quantization/utils/core_utils.py`:
- Around line 935-939: The code is appending kv_cache_quant_cfg entries directly
into updated_quant_cfg["quant_cfg"], risking shared mutable state; modify the
concatenation in core_utils.py so you deep-copy each entry from
kv_cache_quant_cfg (or perform a deepcopy of the sequence) before extending
updated_quant_cfg["quant_cfg"] (i.e., when creating inner and assigning
updated_quant_cfg["quant_cfg"] = inner + ...), ensuring entries like
QuantizerCfgEntry instances from kv_cache_quant_cfg are duplicated rather than
referenced.
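
To make the `conversion.py` item above concrete, a plain-Python demonstration of why `list()` over a legacy flat dict loses the config payloads:

```python
legacy_quant_cfg = {"*": {"enable": False}}

# list() over a dict keeps only its keys, so the payload disappears:
print(list(legacy_quant_cfg))          # ['*']
# A normalizer needs the key/value pairs instead:
print(list(legacy_quant_cfg.items()))  # [('*', {'enable': False})]
```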
---
Outside diff comments:
In `@modelopt/torch/quantization/config.py`:
- Around line 1093-1114: The nn.* branch currently does unguarded dict(sub_cfg)
which breaks when a sub_cfg is a list (legacy sequential configs); in the loop
inside the key.startswith("nn.") branch (variables: q_path, sub_cfg, entries,
entry, parent_class, quantizer_name, cfg, enable, QuantizerAttributeConfig)
change the logic so you only coerce sub_cfg to dict when it's a Mapping (e.g.,
isinstance(sub_cfg, Mapping)); for non-Mapping values (not a
QuantizerAttributeConfig and not a Mapping) preserve sub_cfg as-is (so
list-valued cfgs remain lists), then extract enable only from Mapping-based
dicts and set cfg accordingly before building and appending each entry returned
by this branch.
In `@modelopt/torch/quantization/nn/modules/tensor_quantizer.py`:
- Around line 1431-1437: When applying a sequence of attribute configs in
tensor_quantizer.py, validate that a provided attributes list/tuple has the same
length as the quantizer sequence (self) before zipping; if lengths differ raise
a clear exception (e.g., ValueError) stating the expected and actual lengths so
the call to set_from_attribute_config cannot silently skip trailing quantizers.
Specifically, in the method containing the current attributes handling and the
loop that calls set_from_attribute_config, add a length check for
isinstance(attributes, (list, tuple)) and raise on mismatch.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: e15d92dc-470c-4dd5-a1bf-b8509a4179d2
📒 Files selected for processing (25)
- docs/source/guides/_quant_cfg.rst
- examples/diffusers/quantization/config.py
- examples/diffusers/quantization/quantize.py
- examples/llm_autodeploy/run_auto_quantize.py
- examples/llm_ptq/cast_mxfp4_to_nvfp4.py
- examples/llm_ptq/example_utils.py
- examples/llm_ptq/hf_ptq.py
- examples/llm_ptq/multinode_ptq.py
- examples/vllm_serve/vllm_ptq_utils.py
- modelopt/onnx/llm_export_utils/quantization_utils.py
- modelopt/recipe/config.py
- modelopt/recipe/loader.py
- modelopt/torch/opt/config.py
- modelopt/torch/opt/config_loader.py
- modelopt/torch/quantization/algorithms.py
- modelopt/torch/quantization/backends/fp8_per_tensor_gemm.py
- modelopt/torch/quantization/backends/nvfp4_gemm.py
- modelopt/torch/quantization/config.py
- modelopt/torch/quantization/conversion.py
- modelopt/torch/quantization/mode.py
- modelopt/torch/quantization/model_quant.py
- modelopt/torch/quantization/nn/modules/tensor_quantizer.py
- modelopt/torch/quantization/utils/core_utils.py
- tests/unit/recipe/test_loader.py
- tests/unit/torch/quantization/test_config_validation.py
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
modelopt/torch/opt/config.py (1)
60-125: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Fix `MutableMapping` protocol violation: `__getitem__` must raise `KeyError`, not `AttributeError`.
After `ModeloptBaseConfig` inherits `MutableMapping[str, Any]` at line 60, missing-key lookups propagate `AttributeError` from `get_field_name_from_key()`. This violates the mapping contract, breaking inherited methods like `pop()` which expect `KeyError`. Proof: when `__getitem__` raises `AttributeError`, `MutableMapping.pop(key, default)` fails to use the default value.
Currently, `__delitem__` and `get()` already catch `AttributeError` and convert to `KeyError`; this inconsistency exposes the protocol violation.
Proposed minimal fix
```diff
diff --git a/modelopt/torch/opt/config.py b/modelopt/torch/opt/config.py
@@ -99,7 +99,7 @@ class ModeloptBaseConfig(BaseModel, MutableMapping[str, Any]):
             if field_info.alias == key:
                 return name
-        raise AttributeError(f"Key {key} not found in the config.")
+        raise KeyError(key)

     def __contains__(self, key: str) -> bool:
@@ -107,7 +107,7 @@ class ModeloptBaseConfig(BaseModel, MutableMapping[str, Any]):
             self.get_field_name_from_key(key)
             return True
-        except AttributeError:
+        except KeyError:
             return False

     def __delitem__(self, key: str) -> None:
@@ -121,7 +121,7 @@ class ModeloptBaseConfig(BaseModel, MutableMapping[str, Any]):
         try:
             field_name = self.get_field_name_from_key(key)
-        except AttributeError as e:
+        except KeyError as e:
             raise KeyError(key) from e

     def get(self, key: str, default: Any = None) -> Any:
@@ -129,7 +129,7 @@ class ModeloptBaseConfig(BaseModel, MutableMapping[str, Any]):
         try:
             return self[key]
-        except AttributeError:
+        except KeyError:
             return default
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@modelopt/torch/opt/config.py` around lines 60 - 125, The mapping methods violate MutableMapping because get_field_name_from_key() can raise AttributeError; wrap calls to get_field_name_from_key() in __getitem__ and __setitem__ (and anywhere else that should follow mapping semantics) to catch AttributeError and re-raise KeyError(key) so missing-key lookups conform to the MutableMapping contract; locate the fixes in the ModeloptBaseConfig methods __getitem__, __setitem__ (and mirror the pattern used in __delitem__) to ensure consistency.
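
As a standalone illustration of the contract violation described above (generic Python, not ModelOpt code): a `MutableMapping` whose `__getitem__` raises `AttributeError` breaks inherited helpers such as `pop(key, default)`, which only catch `KeyError`:

```python
from collections.abc import MutableMapping


class BadConfig(MutableMapping):
    """Minimal mapping whose __getitem__ raises the wrong exception type."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        if key not in self._data:
            raise AttributeError(f"Key {key} not found.")  # should be KeyError
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)


cfg = BadConfig()
try:
    # Under the mapping contract this must return the default ...
    print(cfg.pop("missing", "default"))
except AttributeError as exc:
    # ... but MutableMapping.pop only catches KeyError, so the wrong
    # exception type leaks out instead of the default being returned.
    print(f"pop() leaked: {exc!r}")
```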
🧹 Nitpick comments (1)
examples/llm_ptq/example_utils.py (1)
849-855: ⚡ Quick win
Align `quant_cfg` type hints with the new Mapping-compatible behavior.
`needs_checkpoint_path_update` now supports any `Mapping`, but the signature still advertises `dict`. Consider updating this signature (and `resolve_checkpoint_dir`) to `Mapping[str, Any]`/`MutableMapping[str, Any]` so static typing matches runtime behavior.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/llm_ptq/example_utils.py` around lines 849 - 855, The type hints should reflect that quant_cfg can be any Mapping; update the signatures of needs_checkpoint_path_update and resolve_checkpoint_dir to accept Mapping[str, Any] (or MutableMapping[str, Any] if the function mutates the dict) instead of dict, and import the required typing symbols (Mapping, MutableMapping, Any) at the top of the module; keep the runtime logic unchanged but ensure the annotations match the Mapping-compatible behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Outside diff comments:
In `@modelopt/torch/opt/config.py`:
- Around line 60-125: The mapping methods violate MutableMapping because
get_field_name_from_key() can raise AttributeError; wrap calls to
get_field_name_from_key() in __getitem__ and __setitem__ (and anywhere else that
should follow mapping semantics) to catch AttributeError and re-raise
KeyError(key) so missing-key lookups conform to the MutableMapping contract;
locate the fixes in the ModeloptBaseConfig methods __getitem__, __setitem__ (and
mirror the pattern used in __delitem__) to ensure consistency.
---
Nitpick comments:
In `@examples/llm_ptq/example_utils.py`:
- Around line 849-855: The type hints should reflect that quant_cfg can be any
Mapping; update the signatures of needs_checkpoint_path_update and
resolve_checkpoint_dir to accept Mapping[str, Any] (or MutableMapping[str, Any]
if the function mutates the dict) instead of dict, and import the required
typing symbols (Mapping, MutableMapping, Any) at the top of the module; keep the
runtime logic unchanged but ensure the annotations match the Mapping-compatible
behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: fe529dcf-649f-47fa-910a-b1be49110dc6
📒 Files selected for processing (9)
- examples/llm_ptq/example_utils.py
- examples/vllm_serve/vllm_ptq_utils.py
- modelopt/torch/opt/config.py
- modelopt/torch/quantization/config.py
- modelopt/torch/quantization/conversion.py
- modelopt/torch/quantization/nn/modules/tensor_quantizer.py
- modelopt/torch/quantization/utils/core_utils.py
- tests/unit/torch/quantization/test_config_validation.py
- tests/unit/torch/quantization/test_quantize_cpu.py
🚧 Files skipped from review as they are similar to previous changes (2)
- modelopt/torch/quantization/conversion.py
- modelopt/torch/quantization/config.py
Have load_config return Pydantic-normalized values when schema_type or modelopt-schema is present, including typed recipe metadata and quantization config entries. Update recipe loading, docs, and unit tests for typed config objects and normalized quant_cfg handling. Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Convert QuantizerCfgEntry into a ModeloptBaseConfig-backed Pydantic model with validation while preserving dict-style access for callers. Normalize schema-loaded quant_cfg snippets through model_dump, simplify quantizer cfg handling, and cover both dict and QuantizeConfig need_calibration inputs. Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Update normalize_quant_cfg_list to accept dict entries, typed entries, and legacy dict formats while returning QuantizerCfgEntry objects. Preserve already parsed entries, handle implicit enable values in consumers, and cover mixed typed/dict inputs in tests. Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Make ModeloptBaseConfig a MutableMapping and use Mapping/MutableMapping protocol checks for typed quantizer config entries and attributes. Convert predefined quantization recipes to QuantizeConfig objects while preserving dict-style callers and compatibility paths. Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Cover normalization after mutating raw dict quantizer entries and schema-backed ModeloptBaseConfig entries. Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Force-pushed from f9bc176 to b0fadd1 (Compare)
Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Force-pushed from b0fadd1 to 0917ab8 (Compare)
Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
What does this PR do?
Type of change: new feature
This PR schematizes ModelOpt config loading and quantization config entries while keeping the existing YAML/dict config surface backward compatible.
- Make `ModeloptBaseConfig` implement `MutableMapping`, including `__delitem__` unset semantics for `model_dump(exclude_unset=True)`. This lets `ModeloptBaseConfig` and a plain dict be used interchangeably; `ModeloptBaseConfig` is the formal interface, and dict support remains for backward compatibility.
- Replace the `QuantizerCfgEntry` `TypedDict` with a `ModeloptBaseConfig`-backed Pydantic model.
- Have `load_config` return Pydantic-normalized values when a `schema_type` or `modelopt-schema` annotation is present, with overloads documenting the typed return behavior.
- Back the recipe config with a `ModeloptBaseConfig` Pydantic model so recipe configs share the same schema/field behavior as other ModelOpt configs.
- Convert predefined quantization recipes to `QuantizeConfig` objects via `QuantizeConfig.model_validate(...)`.
- Use `Mapping`/`MutableMapping` protocol checks for typed config entries and attributes while preserving dict-style access used by existing code.
- Broaden `need_calibration` argument types.
Usage
Existing YAML and dict-based quantization configs continue to work. Callers that want schema-backed objects can now request them directly:
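A minimal sketch of the intended usage, assuming `load_config` lives in `modelopt.torch.opt.config_loader` and accepts a `schema_type` keyword (both unverified assumptions based on this PR's summary):

```python
# Hypothetical usage sketch -- names below are assumptions, not verified API.
from modelopt.torch.opt.config_loader import load_config
from modelopt.torch.quantization.config import QuantizeConfig

# Untyped loading keeps returning a plain dict, as before.
raw_cfg = load_config("quant_recipe.yaml")

# With a schema (or a `modelopt-schema` annotation inside the YAML),
# load_config returns a Pydantic-normalized QuantizeConfig instead.
typed_cfg = load_config("quant_recipe.yaml", schema_type=QuantizeConfig)
assert isinstance(typed_cfg, QuantizeConfig)
```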
Predefined Python configs are now schema-backed but still support mapping-style access:
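Again a sketch rather than verified code: the entry fields follow the `QuantizerCfgEntry` shape quoted in the review comments above, while the exact `QuantizeConfig` field set (`quant_cfg`, `algorithm`) is an assumption:

```python
from modelopt.torch.quantization.config import QuantizeConfig

# Build a schema-backed config; QuantizerCfgEntry-shaped entries are
# validated by Pydantic instead of staying plain dicts.
cfg = QuantizeConfig.model_validate(
    {
        "quant_cfg": [
            {"quantizer_name": "*weight_quantizer", "cfg": {"num_bits": 8}},
            {"quantizer_name": "*input_quantizer", "enable": False},
        ],
        "algorithm": "max",
    }
)

# ModeloptBaseConfig now implements MutableMapping, so existing
# dict-style call sites keep working on the typed object:
print(cfg["algorithm"])
for entry in cfg["quant_cfg"]:
    print(entry["quantizer_name"])  # typed entries also support mapping access
```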
Testing
- `python -m pytest tests/unit/recipe/test_loader.py`
- `python -m pytest tests/unit/torch/quantization/test_config_validation.py`
- `python -m pytest tests/unit/torch/quantization/test_autoquant.py`
- `python -m pytest tests/unit/torch/utils/test_serialization.py`
- `ruff check` on the touched Python files
- `ruff format --check` on the touched Python files
- `pre-commit run mypy --files ...` on the touched Python files
- `git diff --check`
Before your PR is "Ready for review"
- Make sure you read and follow Contributor guidelines and your commits are signed (`git commit -s -S`).
- Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded `trust_remote_code=True`, `torch.load(..., weights_only=False)`, `pickle`, etc.).
- Legacy `quant_cfg` forms are still accepted and normalized; typed configs retain mapping-style access for existing call sites.
- `CONTRIBUTING.md`: N/A
Additional Information
No related issue. This PR intentionally does not change checked-in YAML configs; it updates the loader/schema layer and Python-side quantization config handling.
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests