feat: add upsample_bicubic2d base #512
voltjia wants to merge 1 commit into `feat/torch-codegen`.
upsample_bicubic2d base
wooway777 approved these changes on May 8, 2026.
wooway777 requested changes on May 8, 2026.
wooway777 left a comment:
https://docs.pytorch.org/docs/2.11/generated/torch.nn.Upsample.html
I really don't dare approve this; what is bicubic, anyway?
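To the question above: per the linked PyTorch docs, `bicubic` interpolation weights a 4x4 neighborhood of input samples using Keys' cubic convolution kernel, and PyTorch uses the kernel parameter a = -0.75. The sketch below is a minimal pure-Python illustration of that kernel, not the operator's actual implementation; the function names and the edge-clamping driver are illustrative only.

```python
import math

# Illustrative sketch of bicubic interpolation (Keys' cubic convolution).
# PyTorch's bicubic mode uses this kernel family with a = -0.75; the
# helper names below are hypothetical, for exposition only.

def cubic_weight(x, a=-0.75):
    """Keys (1981) cubic convolution kernel; a = -0.75 matches PyTorch."""
    x = abs(x)
    if x <= 1.0:
        return (a + 2.0) * x ** 3 - (a + 3.0) * x ** 2 + 1.0
    if x < 2.0:
        return a * x ** 3 - 5.0 * a * x ** 2 + 8.0 * a * x - 4.0 * a
    return 0.0

def interp1d(p, t):
    """Interpolate between p[1] and p[2] at fraction t in [0, 1),
    using the 4 neighboring samples p[0]..p[3]."""
    return sum(p[i] * cubic_weight((i - 1) - t) for i in range(4))

def bicubic_sample(img, fy, fx):
    """Sample a 2D grid at fractional (fy, fx), clamping at the edges.
    Bicubic is separable: interpolate 4 rows, then across the results."""
    h, w = len(img), len(img[0])
    iy, ix = math.floor(fy), math.floor(fx)
    ty, tx = fy - iy, fx - ix
    clamp = lambda v, hi: max(0, min(v, hi - 1))
    rows = [
        interp1d([img[clamp(iy + dy, h)][clamp(ix + dx, w)]
                  for dx in range(-1, 3)], tx)
        for dy in range(-1, 3)
    ]
    return interp1d(rows, ty)
```

Because the kernel weights sum to one at every fractional offset, constant inputs are reproduced exactly; the kernel's negative lobes are what give bicubic its characteristic sharpening (and possible overshoot) relative to bilinear.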
Summary
- Adds the `upsample_bicubic2d` base declaration in `src/base/upsample_bicubic2d.h`.
- Torch codegen reuses `src/base/upsample_bicubic2d.h` instead of emitting `generated/base/upsample_bicubic2d.h`.
- A follow-up formatting commit applies the spacing required by `scripts/check_conventions.py`.

Motivation
This PR is part of the `feat/torch-codegen` base-header migration. The generated `UpsampleBicubic2d` base declaration is moved into `src/base` so code generation can reuse a reviewed hand-written header.

N/A: no linked issue.
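The reuse described above amounts to a path-preference rule in the generator. A hypothetical sketch follows; the helper name and repository-root handling are illustrative, and only the `src/base` and `generated/base` locations come from this PR.

```python
from pathlib import Path

def base_header_path(repo_root, op_name):
    """Hypothetical sketch: prefer a reviewed hand-written base header
    in src/base over emitting a fresh one under generated/base."""
    hand_written = Path(repo_root) / "src" / "base" / f"{op_name}.h"
    if hand_written.exists():
        # Reuse the reviewed header instead of regenerating it.
        return hand_written
    # Fall back to the codegen output location.
    return Path(repo_root) / "generated" / "base" / f"{op_name}.h"
```

Under a rule like this, landing `src/base/upsample_bicubic2d.h` is enough to stop the generator from emitting `generated/base/upsample_bicubic2d.h`.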
Type of Change
- `feat` - new feature / new operator / new platform
- `fix` - bug fix
- `perf` - performance improvement (no behavioral change)
- `refactor` - code restructuring without behavior change
- `test` - adding or fixing tests only
- `docs` - documentation only
- `build`/`ci` - build system or CI configuration
- `chore` - tooling, formatting, or other non-code changes
- Breaking change (`!` in the Conventional Commits prefix or a `BREAKING CHANGE:` footer)

Platforms Affected
- CPU (`WITH_CPU`)
- NVIDIA (`WITH_NVIDIA`)
- Iluvatar (`WITH_ILUVATAR`)
- MetaX (`WITH_METAX`)
- Cambricon (`WITH_CAMBRICON`)
- Moore (`WITH_MOORE`)
- Ascend (`WITH_ASCEND`)
- Torch (`WITH_TORCH`)

Test Results on Supported Platforms
`pytest` was not run on any supported platform: this PR targets `feat/torch-codegen`, not `master`, and is a base-header PR; no runtime implementation is added.

Full `pytest` output (optional): N/A.
Benchmark / Performance Impact
N/A. This PR only adds a base operator declaration for torch codegen reuse and does not add a runtime implementation.
Notes for Reviewers
- This PR targets `feat/torch-codegen`, not `master`.
- The diff against `feat/torch-codegen` contains only `src/base/upsample_bicubic2d.h`.
- `clang-format` 21 is passing on `src/base/upsample_bicubic2d.h`; a follow-up formatting commit applies the class member spacing required by `scripts/check_conventions.py`.

Checklist
Title, Branch, and Commits
- The PR title follows Conventional Commits (e.g. `feat(nvidia): …`, `fix(cuda/gemm): …`).
- The branch is named `codex/add-upsample_bicubic2d-base`, like other PR branches targeting `feat/torch-codegen`; branch renaming is intentionally out of scope.
- The PR targets `feat/torch-codegen`, not `master`; no `master` rebase is required for this integration target.
- No `fixup!`/`squash!`/wip commits remain.

Scope and Design
- … (`CONTRIBUTING.md` §Code/General).
- No `printf`/`std::cout`/`print(...)` left behind, or `TODO` without an owner and issue link.
- The change adds only the `UpsampleBicubic2d` base operator declaration used by torch codegen.

General Code Hygiene (applies to all languages)
- … (`CONTRIBUTING.md` §Code/General).
- … (`CONTRIBUTING.md` §Code/General).
- … (e.g. the `seqlens_k` tensor) (`CONTRIBUTING.md` §Code/General).
- … (`CONTRIBUTING.md` §Code/General).
- … (`CONTRIBUTING.md` §Code/General; §Python).

C++ Specific (if C++ files changed)
- `clang-format` (version 21, per `.github/workflows/clang-format.yml`) has been run against all modified `.h`, `.cc`, `.cuh`, and `.mlu` files; the diff is clean.
- `clang-tidy` was not run because this PR only adds a base declaration header for `feat/torch-codegen`; no runtime implementation is added.
- … (`CONTRIBUTING.md` §C++).
- … (`CONTRIBUTING.md` §C++).
- … (`CONTRIBUTING.md` §C++).
- … (`CONTRIBUTING.md` §C++).
- … (`CONTRIBUTING.md` §C++).
- The PR adds only `src/base/upsample_bicubic2d.h` for torch codegen reuse; platform implementations are out of scope.
- No raw `new`/`delete`; RAII / smart pointers / existing allocators are used.

Python Specific (if Python files changed)
N/A: no Python files changed.
Testing
- `pytest` was intentionally not run because this PR targets `feat/torch-codegen`, not `master`, and only adds a reusable base header declaration.
- … `tests/` coverage is required.
- … payload-returning test was added.
- … `dtype`/`device` parameterization was added.

Build, CI, and Tooling
- This PR targets `feat/torch-codegen`, not `master`, and only adds a reusable base header declaration.
- `compile_commands.json` behavior was not changed.
- `clang-format` 21 is passing on `src/base/upsample_bicubic2d.h`.

Documentation
- `README.md`, `CONTRIBUTING.md`, and developer workflow are unchanged.
- `UpsampleBicubic2d` is an internal base declaration for torch codegen reuse; no user-facing documentation is required.

Security and Safety