
[ET Device Support] Schema changes: device info on Tensor and buffer-level device array#17533

Open
Gasoonjia wants to merge 4 commits into gh/gasoonjia/122/base from gh/gasoonjia/122/head

Conversation

Gasoonjia (Contributor) commented Feb 18, 2026


This diff adds device placement information to the ExecuTorch schema to represent tensor-level device type information, which is a prerequisite for the upcoming tensor_parser updates.

This is part of the Phase 1 implementation to make ET device types work end-to-end without user-specified device placement.

Design doc: https://docs.google.com/document/d/1lwd9BlohmwkN5EEvRulO_b-XnZBwv1nMb5l2K3jfuwA/edit?tab=t.0#heading=h.o6anuvkix4bu

Differential Revision: [D93635657](https://our.internmc.facebook.com/intern/diff/D93635657/)

[ghstack-poisoned]
pytorch-bot commented Feb 18, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17533

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 1 Cancelled Job

As of commit cf46ff9 with merge base 5db6f22:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Feb 18, 2026
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

device_type: DeviceType = DeviceType.CPU
# Device index for multi-device scenarios (e.g., cuda:0, cuda:1).
# A value of -1 indicates the default device.
device_index: int = -1
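The two fields above can be illustrated with a minimal, self-contained sketch. This is not the actual ExecuTorch schema definition — the `DeviceType` enum values and the `TensorDevicePlacement` wrapper class are hypothetical, shown only to demonstrate how the flat-field layout behaves:

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical enum; the real ExecuTorch schema may define different members/values.
class DeviceType(IntEnum):
    CPU = 0
    CUDA = 1

# Hypothetical container mirroring the two fields added by this PR.
@dataclass
class TensorDevicePlacement:
    # Device kind the tensor's data should live on.
    device_type: DeviceType = DeviceType.CPU
    # Device index for multi-device scenarios (e.g., cuda:0, cuda:1).
    # A value of -1 indicates the default device.
    device_index: int = -1

# Defaults describe "CPU, default device"; explicit values pin a device.
default_placement = TensorDevicePlacement()
cuda0 = TensorDevicePlacement(DeviceType.CUDA, 0)
print(default_placement.device_type.name, default_placement.device_index)
print(cuda0.device_type.name, cuda0.device_index)
```

With flat fields, a default-constructed tensor needs no extra nesting to mean "CPU, default device", which is what the author argues for in the thread below.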
Contributor:
nit: is it worth adding a 'DeviceInfo' dataclass/flatbuffers table if we may expect more device-related data?

Gasoonjia (Contributor, Author):
If OK, I'd like to keep the current structure. It only has two attributes, and no other places will use device_info.
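For context, the alternative raised in the review would group the two fields into a nested structure. The sketch below is hypothetical (the names `DeviceInfo` and `Tensor` are illustrative, not the actual schema); the PR keeps the flat device_type/device_index layout instead:

```python
from dataclasses import dataclass, field
from enum import IntEnum

# Hypothetical enum, as in the sketch above.
class DeviceType(IntEnum):
    CPU = 0
    CUDA = 1

# Grouped form suggested by the reviewer: one table holding all
# device-related data, extensible if more fields are added later.
@dataclass
class DeviceInfo:
    device_type: DeviceType = DeviceType.CPU
    device_index: int = -1

# Hypothetical tensor record embedding the grouped structure.
@dataclass
class Tensor:
    device_info: DeviceInfo = field(default_factory=DeviceInfo)

t = Tensor()
print(t.device_info.device_type.name)  # CPU
```

The trade-off is standard: the nested table is easier to extend without touching the parent, while the flat fields avoid an extra level of indirection when only two attributes exist.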

@JacobSzwejbka (Contributor) left a comment:

Review automatically exported from Phabricator review in Meta.



Labels

CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed), fb-exported, meta-exported


3 participants