
[WebGPU] Add gating logic for subgroup shuffle primitives #18823

Open
ksgr5566 wants to merge 4 commits into apache:main from ksgr5566:webgpu-subgroup-test

Conversation

@ksgr5566

Summary

This adds gating logic on top of #17699 to support optional subgroup shuffle
primitives based on a compile-time flag.

Problem

The PR #17699 always generates subgroup shuffle ops when targeting WebGPU.
However, not all WebGPU devices support subgroups. We need a way to:

  • Default to shared memory reductions (universally compatible)
  • Optionally enable subgroup shuffles for devices that support them

Solution

Implement gating via TVM target parameter:

  • Default thread_warp_size=1 disables warp reductions (uses shared memory + barriers)
  • Add target parser UpdateWebGPUAttrs() that sets thread_warp_size=32 when supports_subgroups=true
  • Add --enable-subgroups CLI flag in mlc-llm to surface the option to users

The gating happens at the reduction path selection level (IsWarpReduction() in
lower_thread_allreduce.cc), ensuring subgroup ops are never generated unless explicitly enabled.
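For illustration, a minimal Python sketch (not part of this PR) of how the two modes would be selected at the target level, assuming the attribute and parser names described above; the mlc-llm --enable-subgroups flag would ultimately just set supports_subgroups on the WebGPU target:

import tvm

# Default: supports_subgroups is off and thread_warp_size stays 1, so
# lowering keeps shared-memory reductions with barriers.
baseline = tvm.target.Target("webgpu")

# Opt-in: the UpdateWebGPUAttrs parser is expected to raise thread_warp_size
# to 32, letting IsWarpReduction() choose the subgroup shuffle path.
with_subgroups = tvm.target.Target({"kind": "webgpu", "supports_subgroups": True})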

Changes

Testing

Tested with Llama-3.2-1B-q4f16_1. The baseline (no flag) uses shared-memory reductions;
with the flag enabled, the generated kernels use subgroupShuffle* ops.
Both generated WGSL outputs are here: https://gist.github.com/ksgr5566/301664a5dda3e46f44092be4d09b2d4f

Copilot AI review requested due to automatic review settings February 25, 2026 18:41
@gemini-code-assist
Contributor

Summary of Changes

Hello @ksgr5566, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances WebGPU support by introducing a flexible mechanism for utilizing subgroup shuffle primitives. Recognizing that not all WebGPU devices support subgroups, the changes ensure that shared memory reductions are used as a default, universally compatible approach. For devices that do support subgroups, a new target parameter (supports_subgroups) and an associated CLI flag (--enable-subgroups in MLC-LLM) allow for their optional activation, leading to potential performance improvements. This gating logic is applied at the reduction path selection level, preventing the generation of unsupported subgroup operations.

Highlights

  • Conditional Subgroup Shuffles: Implemented gating logic to enable WebGPU subgroup shuffle primitives only when explicitly supported by the target device.
  • Default to Shared Memory: Ensured that shared memory reductions are used by default for universal compatibility across WebGPU devices.
  • Target Parameter and CLI Flag: Introduced a supports_subgroups target parameter and a corresponding thread_warp_size adjustment, along with a --enable-subgroups CLI flag in MLC-LLM, to allow users to opt in to subgroup optimizations.


Changelog
  • src/target/source/codegen_webgpu.cc
    • Enabled the subgroups WGSL extension in the generated header.
    • Initialized the enable_subgroups_ flag based on target attributes.
  • src/target/source/codegen_webgpu.h
    • Added a boolean member enable_subgroups_ to track subgroup support.
  • src/target/source/intrin_rule_webgpu.cc
    • Defined WebGPU-specific intrinsic rules to map generic tvm_warp_shuffle operations to tir.webgpu.subgroup_shuffle built-ins.
  • src/target/target_kind.cc
    • Implemented UpdateWebGPUAttrs to dynamically set thread_warp_size to 32 when supports_subgroups is true.
    • Registered new target attributes for WebGPU, including supports_subgroups and thread_warp_size.
  • src/tir/transforms/lower_thread_allreduce.cc
    • Modified the WarpShuffle function to conditionally cast the delta argument to uint32 for WebGPU.
    • Extended IsWarpReduction to include WebGPU as a target for warp reductions, while disabling the warp shuffle mask for it.
  • web/package-lock.json
    • Updated various JavaScript package dependencies, including @babel components and semver.
  • web/src/webgpu.ts
    • Added logic to detect and request the subgroups feature when initializing a WebGPU adapter.

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces support for subgroup shuffle primitives in WebGPU, gated by a supports_subgroups target flag. The changes are well-structured and follow the logic described. The implementation correctly enables subgroup operations when the feature is available and requested, and falls back to shared memory reductions otherwise.

I've added a couple of suggestions to improve the code. One is to refactor the target checks in lower_thread_allreduce.cc for better maintainability. The other is to make the thread_warp_size setting more flexible in target_kind.cc to allow user overrides. Overall, this is a great addition for WebGPU performance.

Comment on lines 435 to 437
if (subgroups) {
  target.Set("thread_warp_size", int64_t(32));
}
Contributor


Severity: medium

This implementation unconditionally sets thread_warp_size to 32 if supports_subgroups is true. This will overwrite any user-provided value for thread_warp_size. It would be more flexible to only set this value if the user has not already specified one, allowing for overrides on devices that may have different subgroup sizes.

Suggested change:

if (subgroups && !target.count("thread_warp_size")) {
  target.Set("thread_warp_size", int64_t(32));
}

Comment on lines 724 to +729
if ((target_->kind->name != "cuda") && (target_->kind->name != "rocm") &&
(target_->kind->name != "metal")) {
(target_->kind->name != "metal") && (target_->kind->name != "webgpu")) {
return false;
}

need_warp_shuffle_mask_ = target_->kind->name != "metal";
need_warp_shuffle_mask_ = target_->kind->name != "metal" && target_->kind->name != "webgpu";
Contributor


Severity: medium

To improve maintainability, consider using std::unordered_set for checking the target kind. This makes it easier to add or remove supported targets in the future.

Suggested change:

const std::unordered_set<std::string> supported_targets = {"cuda", "rocm", "metal", "webgpu"};
if (!supported_targets.count(target_->kind->name)) {
  return false;
}
const std::unordered_set<std::string> no_mask_targets = {"metal", "webgpu"};
need_warp_shuffle_mask_ = !no_mask_targets.count(target_->kind->name);

Contributor

Copilot AI left a comment


Pull request overview

Adds compile-time gating for WebGPU subgroup shuffle generation by introducing a supports_subgroups target attribute and wiring it through TIR lowering, intrinsic lowering, and WGSL codegen so subgroup ops are only emitted when explicitly enabled.

Changes:

  • Add WebGPU target parsing/attrs to default thread_warp_size=1 and set thread_warp_size=32 when supports_subgroups=true.
  • Extend LowerThreadAllreduce and WebGPU intrinsic rules to allow warp shuffle-based reductions on WebGPU when enabled.
  • Emit the "enable subgroups;" directive in WGSL when supports_subgroups is enabled; request the subgroups device feature in the web runtime when available.

Reviewed changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated 5 comments.

Summary per file:
  • web/src/webgpu.ts: Requests the subgroups feature when the adapter advertises support.
  • web/package-lock.json: Updates locked JS dependencies.
  • src/tir/transforms/lower_thread_allreduce.cc: Enables the warp reduction path for WebGPU and adjusts shuffle lowering behavior.
  • src/target/target_kind.cc: Adds supports_subgroups plus a WebGPU target parser to set thread_warp_size.
  • src/target/source/intrin_rule_webgpu.cc: Lowers tvm_warp_shuffle* to WGSL subgroupShuffle* intrinsics.
  • src/target/source/codegen_webgpu.h: Adds enable_subgroups_ state to the WebGPU codegen.
  • src/target/source/codegen_webgpu.cc: Emits the "enable subgroups;" directive based on target attrs.

Files not reviewed (1):
  • web/package-lock.json: Language not supported.


Comment on lines 37 to 55
    const Op operator()(DataType t, const Op& orig_op) const {
        if (orig_op.same_as(builtin::tvm_warp_shuffle())) {
            return Op::Get("tir.webgpu.subgroup_shuffle");
        } else if (orig_op.same_as(builtin::tvm_warp_shuffle_up())) {
            return Op::Get("tir.webgpu.subgroup_shuffle_up");
        } else {
            ICHECK(orig_op.same_as(builtin::tvm_warp_shuffle_down()));
            return Op::Get("tir.webgpu.subgroup_shuffle_down");
        }
    }
};

template <typename T>
static PrimExpr DispatchWebGPUShuffle(const PrimExpr& e) {
    const CallNode* call = e.as<CallNode>();
    ICHECK(call != nullptr);
    ICHECK_EQ(call->args.size(), 5);  // mask, value, warp_id, width, warp_size
    ffi::Array<PrimExpr> webgpu_args{{call->args[1], call->args[2]}};
    return Call(call->dtype, T()(call->dtype, Downcast<Op>(call->op)), webgpu_args);

Copilot AI Feb 25, 2026


The newly added WebGPU warp shuffle lowering block uses an indentation style that differs from the rest of this file (4-space indentation vs the surrounding 2-space style). Please reformat these lines to match the existing file formatting for consistency.

Suggested change (same code re-indented to the surrounding 2-space style):

  const Op operator()(DataType t, const Op& orig_op) const {
    if (orig_op.same_as(builtin::tvm_warp_shuffle())) {
      return Op::Get("tir.webgpu.subgroup_shuffle");
    } else if (orig_op.same_as(builtin::tvm_warp_shuffle_up())) {
      return Op::Get("tir.webgpu.subgroup_shuffle_up");
    } else {
      ICHECK(orig_op.same_as(builtin::tvm_warp_shuffle_down()));
      return Op::Get("tir.webgpu.subgroup_shuffle_down");
    }
  }
};

template <typename T>
static PrimExpr DispatchWebGPUShuffle(const PrimExpr& e) {
  const CallNode* call = e.as<CallNode>();
  ICHECK(call != nullptr);
  ICHECK_EQ(call->args.size(), 5);  // mask, value, warp_id, width, warp_size
  ffi::Array<PrimExpr> webgpu_args{{call->args[1], call->args[2]}};
  return Call(call->dtype, T()(call->dtype, Downcast<Op>(call->op)), webgpu_args);

Comment on lines 442 to 448
TVM_REGISTER_TARGET_KIND("webgpu", kDLWebGPU)
.add_attr_option<int64_t>("max_num_threads", 256)
.add_attr_option<bool>("supports_subgroups", false)
// thread_warp_size=1: is_subwarp_reduction and is_multiwarp_reduction returns false, so no subgroup ops are emitted.
.add_attr_option<int64_t>("thread_warp_size", 1)
.set_target_parser(UpdateWebGPUAttrs)
.set_default_keys({"webgpu", "gpu"});

Copilot AI Feb 25, 2026


This PR introduces new WebGPU target attributes (supports_subgroups) and a target parser side effect that mutates thread_warp_size. There are existing Python tests covering target parsing and LowerThreadAllreduce behavior, but none validating the new WebGPU defaults/gating. Please add a unit test to assert (1) Target('webgpu') defaults thread_warp_size to 1, and (2) Target({'kind':'webgpu','supports_subgroups': True}) results in thread_warp_size==32 (and ideally that subgroup shuffles are only emitted in the latter case).
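A Python sketch of the kind of test being asked for here (it assumes the attribute names and parser behavior described in this PR, so the exact assertions may need adjusting):

import tvm

def test_webgpu_subgroup_gating_defaults():
    # (1) A plain webgpu target keeps the shared-memory-friendly default.
    assert int(tvm.target.Target("webgpu").attrs["thread_warp_size"]) == 1
    # (2) Opting into subgroups should bump the warp size via the target parser.
    target = tvm.target.Target({"kind": "webgpu", "supports_subgroups": True})
    assert int(target.attrs["thread_warp_size"]) == 32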

Comment on lines +511 to +513
bool cast_offset_to_uint = target_->kind->name == "webgpu";
PrimExpr other = WarpShuffle(builtin::tvm_warp_shuffle_down(), mask_buffer, val, offset,
                             cast_offset_to_uint);

Copilot AI Feb 25, 2026


For WebGPU, WGSL subgroupShuffle* builtins require the lane/delta argument to be u32, but this change only casts the shuffle_down offset. Other call sites (e.g. the tvm_warp_shuffle used for broadcasting the lane-0 result) can still pass an i32 lane id, which will codegen to subgroupShuffle(x, <i32>) and fail WGSL type-checking. Consider ensuring all WebGPU warp shuffle lane/delta values are cast to u32 (either by making WebGPU always cast in WarpShuffle, or by handling the cast in the WebGPU intrinsic lowering).

const CallNode* call = e.as<CallNode>();
ICHECK(call != nullptr);
ICHECK_EQ(call->args.size(), 5); // mask, value, warp_id, width, warp_size
ffi::Array<PrimExpr> webgpu_args{{call->args[1], call->args[2]}};

Copilot AI Feb 25, 2026


DispatchWebGPUShuffle forwards call->args[2] directly as the lane/delta argument, but WGSL subgroupShuffle/subgroupShuffleUp/subgroupShuffleDown expect that parameter to be u32. Since WebGPU thread indices are typically u32 cast to i32 in codegen, the delta/lane often ends up as i32, producing invalid WGSL. Please cast call->args[2] to u32 (or otherwise enforce u32) in this dispatcher so all shuffle variants/call sites are handled consistently.

Suggested change:

PrimExpr lane = Cast(DataType::UInt(32), call->args[2]);
ffi::Array<PrimExpr> webgpu_args{{call->args[1], lane}};


- CodeGenWebGPU::CodeGenWebGPU(Target target) : target_(target) {}
+ CodeGenWebGPU::CodeGenWebGPU(Target target) : target_(target) {
+   enable_subgroups_ = target_->GetAttr<Bool>("supports_subgroups").value_or(Bool(false));

Copilot AI Feb 25, 2026


The "enable subgroups;" directive is currently controlled only by the supports_subgroups target attr, but subgroup shuffle ops can be emitted whenever thread_warp_size is set > 1 (e.g. if a user sets -thread-warp-size=32 directly on the WebGPU target). In that case, the generated WGSL would contain subgroupShuffle* calls without the required "enable subgroups;" directive. To avoid this inconsistent state, consider deriving enable_subgroups_ from thread_warp_size > 1 as well, or emitting a clear error if subgroup ops are encountered while supports_subgroups is false.

Suggested change:

Bool supports_subgroups = target_->GetAttr<Bool>("supports_subgroups").value_or(Bool(false));
Optional<Integer> thread_warp_size = target_->GetAttr<Integer>("thread_warp_size");
bool warp_uses_subgroups =
    thread_warp_size.defined() && thread_warp_size.value()->value > 1;
if (warp_uses_subgroups && !supports_subgroups) {
  LOG(FATAL) << "WebGPU target has thread_warp_size=" << thread_warp_size.value()->value
             << " but does not support subgroups. Either enable the 'supports_subgroups' "
             << "target attribute or set thread_warp_size <= 1.";
}
enable_subgroups_ = supports_subgroups || warp_uses_subgroups;

@ksgr5566 force-pushed the webgpu-subgroup-test branch 2 times, most recently from ea41337 to 2cba8b3 on February 25, 2026 19:02
@ksgr5566 force-pushed the webgpu-subgroup-test branch from 2cba8b3 to f119bbd on February 25, 2026 19:08
// The former may cause dead lock as there is a divergent
// branch with a warp sync call inside.
PrimExpr other = WarpShuffle(builtin::tvm_warp_shuffle_down(), mask_buffer, val, offset);
bool cast_offset_to_uint = target_->kind->name == "webgpu";
Contributor


For changes in src/tir/transforms/lower_thread_allreduce.cc, could you please add a unit test in tests/python/tir-transform/test_tir_transform_lower_thread_all_reduce.py to test the expected behavior?

@MasterJH5574
Contributor

Also there are failing tests. Please take a look and resolve them.
