Conversation
Force-pushed from b261d14 to 79ea2e7
    # get_symmetric_quantization_config,
    # XNNPACKQuantizer,
    # )
    # but unsure if we can depend on ET here, so I'll use torchao testing for now
This is just a unit test, so it should be totally fine to depend on executorch.
Sounds good, I'll change it to ET. How do I add executorch as a dependency for the CI tests?
I expected it to already be a dependency, but it looks like executorch is only a dependency when the frontend is set to EXECUTORCH:
Line 231 in cf439be
Otherwise it's not included as a dependency.
So I guess we really don't want to use executorch here if we can at all avoid it. Sorry for the confusion.
Oh OK, I can change it back for now. Ideally we wouldn't import from torchao testing either, though. The longer-term plan is to remove CoreMLQuantizer here, right? I heard @metascroy is going to duplicate the quantizer in executorch first.
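If executorch does come back as a test dependency later, one conventional way to keep the test from breaking on CI images that don't install it is a pytest import guard; a minimal sketch for illustration, not what this PR does:

    import pytest

    # Skip every test in this module when executorch is not installed
    # (it is only a CI dependency when the frontend is set to EXECUTORCH).
    executorch = pytest.importorskip("executorch")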
    check if any of the nodes is annotated; return True if any node
    is annotated, otherwise return False
    """
    annotated = False
I would probably do:

    for node in nodes:
        if annotated or (
            "quantization_annotation" in node.meta
            and node.meta["quantization_annotation"]._annotated
        ):
            return True
    return False
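For context, a minimal sketch of the whole helper written in that early-return style; the function name, signature, and use of node.meta.get are assumptions for illustration, not the PR's actual code:

    from typing import List

    import torch.fx

    def _is_any_annotated(nodes: List[torch.fx.Node]) -> bool:
        """Return True if any node carries an active quantization annotation."""
        for node in nodes:
            annotation = node.meta.get("quantization_annotation")
            if annotation is not None and annotation._annotated:
                return True
        return False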
I've kicked off a CI run just to see where we're at with that. cc @YifanShenSZ
Force-pushed from e19a0f9 to 85972a4
    from torchao.quantization.pt2e.quantizer import (
        QuantizationSpec as _TorchQuantizationSpec,
    )
@jerryzh168 I think the import

    from coremltools.optimize.torch.quantization.quantization_config import ObserverType as _ObserverType

needs adjusting as well? _ObserverType will construct the observers from torch.ao.
This seems OK for now; we plan to deprecate these things in torchao.pt2e as well.
Edit:
Oh, you're right. I just realized that these older observers won't work with the torchao pt2e quant flow now.
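To make the incompatibility concrete, here is a hedged sketch: the torchao pt2e flow expects a QuantizationSpec whose observer constructor comes from torchao, while coremltools' ObserverType resolves to the older torch.ao observer classes. Class and field names follow the torchao pt2e API; the coremltools side is paraphrased rather than reproduced:

    import torch
    from torchao.quantization.pt2e import MinMaxObserver
    from torchao.quantization.pt2e.quantizer import QuantizationSpec

    # A spec whose observer constructor comes from torchao is what the
    # torchao prepare_pt2e/convert_pt2e passes expect...
    spec = QuantizationSpec(
        dtype=torch.int8,
        observer_or_fake_quant_ctr=MinMaxObserver,
        quant_min=-128,
        quant_max=127,
        qscheme=torch.per_tensor_symmetric,
    )
    # ...whereas a spec built from torch.ao observer classes (what
    # coremltools' ObserverType produces today) won't work with the
    # torchao pt2e quant flow, as noted above.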
Force-pushed from 85972a4 to aa34a55
    @pytest.mark.parametrize("is_qat", [True, False])
    @pytest.mark.skipif(not _HAS_TORCH_EXPORT_API or _TORCH_VERSION < _EXPECTED_TORCH_VERSION,
                        reason="This test requires PyTorch Export APIs and PyTorch >= 2.2.0.")
    @pytest.mark.skip(reason="Skipping for now due to migration of pt2e quant API to torchao, please use CoreMLQuantizer from ExecuTorch: https://github.com/pytorch/executorch/pull/16473")
I can't make all the tests work because of the incompatible observer/fake_quant. Should I just abandon this PR, since we'll be using CoreMLQuantizer from ET (pytorch/executorch#16473)?

We are using CoreMLQuantizer from ExecuTorch now, so this doesn't seem to be needed anymore; we can put up a PR to remove the quantizer here in the future.
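For anyone following along, the eventual replacement would look roughly like importing the quantizer from ExecuTorch instead of coremltools; the module path below follows pytorch/executorch#16473 and should be treated as an assumption:

    # Hypothetical call site after the migration (path assumed from the PR):
    from executorch.backends.apple.coreml.quantizer import CoreMLQuantizer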
Summary:
We just removed pt2e quant from pytorch/pytorch in pytorch/pytorch#169151. This PR updates the pt2e quantization call sites to torchao.
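The call-site change described in the summary is essentially an import move. A hedged before/after sketch (module paths follow the torchao pt2e layout; treat them as assumptions if your versions differ):

    # Before: pt2e quantization APIs imported from torch.ao
    # (removed upstream in pytorch/pytorch#169151):
    # from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
    # from torch.ao.quantization.quantizer import QuantizationSpec

    # After: the same APIs come from torchao:
    from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e
    from torchao.quantization.pt2e.quantizer import QuantizationSpec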