
Commit 41f38d6

committed
update docstring
Signed-off-by: Jennifer Chen <jennifchen@nvidia.com>
1 parent aaf7c81 commit 41f38d6

1 file changed

Lines changed: 2 additions & 2 deletions

File tree

modelopt/torch/quantization/model_calib.py

```diff
@@ -106,7 +106,7 @@ def max_calibrate(
         model: Model to be calibrated.
         forward_loop: A callable which takes the model as argument and
             forwards calibration data through the model.
-        distributed_sync: Whether to sync amax across distributed processes.
+        distributed_sync: Whether to sync input_quantizer amax across distributed processes.
 
     See :class:`MaxCalibConfig <modelopt.torch.quantization.config.MaxCalibConfig>` for
         details on the remaining arguments.
@@ -118,7 +118,7 @@ def max_calibrate(
         forward_loop(model)
     finish_stats_collection(model)
 
-    # Sync amax across local experts within each rank (for SequentialMLP)
+    # Sync input_quantizer amax across local experts within each rank (for SequentialMLP)
     for name, module in model.named_modules():
         if hasattr(module, "layer_sync_moe_local_experts_amax"):
             module.layer_sync_moe_local_experts_amax()
```
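The docstring change clarifies that the amax being synchronized belongs to each expert's input_quantizer. As a rough illustration of what a `layer_sync_moe_local_experts_amax`-style hook does, here is a minimal, self-contained sketch (all `Fake*` class names are hypothetical stand-ins, not the modelopt implementation): each local expert in a SequentialMLP-style MoE layer holds its own calibrated amax, and the sync replaces them with the maximum so every expert on the rank quantizes activations with one shared scale.

```python
# Hypothetical stand-ins -- not the modelopt classes -- illustrating the
# idea of unifying input_quantizer amax across a layer's local experts.

class FakeInputQuantizer:
    """Holds the running amax statistic collected during calibration."""
    def __init__(self, amax: float):
        self.amax = float(amax)

class FakeExpert:
    def __init__(self, amax: float):
        self.input_quantizer = FakeInputQuantizer(amax)

class FakeSequentialMLP:
    """MoE layer owning a list of local experts on this rank."""
    def __init__(self, amaxes):
        self.local_experts = [FakeExpert(a) for a in amaxes]

    def layer_sync_moe_local_experts_amax(self):
        # Take the max over all experts' input amax and write it back,
        # so every local expert uses the same activation scale.
        unified = max(e.input_quantizer.amax for e in self.local_experts)
        for e in self.local_experts:
            e.input_quantizer.amax = unified

layer = FakeSequentialMLP([1.0, 3.0, 2.0])
layer.layer_sync_moe_local_experts_amax()
print([e.input_quantizer.amax for e in layer.local_experts])
# → [3.0, 3.0, 3.0]
```

Using the max (rather than, say, the mean) is the conservative choice for max calibration: the shared range still covers the largest activation any expert observed, so no expert's inputs clip after the sync.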
