
MLIR Backend: Tensor Dialect Support #45

Draft
liamsemeria wants to merge 16 commits into xtc-tools:main from liamsemeria:dev/sliam/mlir-tensor-dialect

Conversation

liamsemeria (Contributor) commented Feb 10, 2026

Motivation

Supporting ops in the tensor dialect enables tracking producer-consumer relationships and broadcasting, which in turn enable operator fusion and element-wise operations, respectively.

Description

The MLIR backend now has an option use_tensor_dialect that causes ops to be generated in the tensor dialect. The tensor-dialect IR is then lowered into memref by a new bufferization pass that runs after the transform pass is applied (its output can be printed with print_bufferization_ir=True).
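As a rough sketch of what the option changes (shapes and function names below are hypothetical, not taken from the xtc test suite), a matmul emitted in the tensor dialect is value-based SSA, and bufferization rewrites it to operate on memref buffers in place:

```mlir
// Hypothetical shapes; illustrative only.
// Tensor form emitted with use_tensor_dialect: ops consume and produce
// SSA tensor values, so producer-consumer chains are explicit in the IR.
func.func @matmul(%A: tensor<4x8xf32>, %B: tensor<8x4xf32>,
                  %C: tensor<4x4xf32>) -> tensor<4x4xf32> {
  %0 = linalg.matmul ins(%A, %B : tensor<4x8xf32>, tensor<8x4xf32>)
                     outs(%C : tensor<4x4xf32>) -> tensor<4x4xf32>
  return %0 : tensor<4x4xf32>
}

// After the bufferization pass, the same op updates a memref buffer
// in place (this is the IR printable with print_bufferization_ir=True).
func.func @matmul_bufferized(%A: memref<4x8xf32>, %B: memref<8x4xf32>,
                             %C: memref<4x4xf32>) {
  linalg.matmul ins(%A, %B : memref<4x8xf32>, memref<8x4xf32>)
                outs(%C : memref<4x4xf32>)
  return
}
```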

Discussion

How the Tensor Dialect Affects the IR:

  • matmul and conv2d:
    Bufferization results in exactly the same lowered MLIR as the memref dialect ops.

  • relu:
    Collapsing the shape to one dimension requires the tensor to be expanded back afterwards (unlike a memref), resulting in an extra memory allocation after bufferization. The tensor-dialect relu is therefore non-collapsing, which is also required for fusion to work properly.

  • pad and unpad:
    The tensor implementation uses a linalg.generic, which is needed for fusion. It has dynamic dims, which requires an update to the extra tools (mlir: updated extra-tools version #70) for the C backend to work properly.
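For reference, upstream MLIR's tensor.pad op looks like the following (hypothetical shapes; per the discussion above, the PR implements pad/unpad as a linalg.generic instead, since linalg.generic ops can participate in fusion):

```mlir
// Hypothetical example: zero-pad a 4x4 tensor by one element on each
// side, yielding a 6x6 tensor. The region supplies the padding value.
%zero = arith.constant 0.0 : f32
%padded = tensor.pad %t low[1, 1] high[1, 1] {
^bb0(%i: index, %j: index):
  tensor.yield %zero : f32
} : tensor<4x4xf32> to tensor<6x6xf32>
```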

liamsemeria force-pushed the dev/sliam/mlir-tensor-dialect branch from aefd354 to cbb5303 on February 10, 2026 11:36
liamsemeria force-pushed the dev/sliam/mlir-tensor-dialect branch from 9e3ab2d to 855d715 on February 11, 2026 14:03
liamsemeria force-pushed the dev/sliam/mlir-tensor-dialect branch from 855d715 to 58ffe40 on February 11, 2026 14:08
qaco (Contributor) commented Feb 26, 2026

It is really cool! Congrats!

When you say that you are waiting for an xDSL release, is it because you know that the feature will be added?

liamsemeria (Contributor, Author) replied:

> It is really cool! Congrats!
>
> When you say that you are waiting for an xDSL release, is it because you know that the feature will be added?

Thanks! Yeah, I contributed tensor.pad upstream, but it's probably easiest to just wait for the next release to add it to xtc.
