[tx] Simplify LoRAMixin for linear and embedding case#983

Merged
pcmoritz merged 2 commits into NovaSky-AI:main from pcmoritz:tx-cleanup-lora-mixin
Jan 28, 2026
Conversation

@pcmoritz
Collaborator

@pcmoritz pcmoritz commented Jan 28, 2026

We simplify the LoRAMixin by disentangling the linear and embedding cases and handling them in LoRAMixin and LoRAEmbed respectively. This is in preparation for supporting LoRA for the flattened (tokens, *dims) layout in addition to (batch, seq_len, *dims), and also for #969.
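To illustrate the shape of this refactor, here is a minimal, hypothetical numpy sketch, not the actual tx implementation: the shared `apply_lora` path stays in the mixin, while the embedding subclass overrides only `_compute_lora_intermediate` (the method the review comment mentions). All shapes, the `init_lora` helper, and the scaling are illustrative assumptions.

```python
import numpy as np

class LoRAMixin:
    """Base mixin: LoRA delta for linear layers, delta = (x @ A) @ B."""

    def init_lora(self, in_dim, out_dim, rank, scale=1.0):
        # A down-projects to the LoRA rank, B up-projects back out.
        self.lora_a = np.zeros((in_dim, rank))
        self.lora_b = np.zeros((rank, out_dim))
        self.lora_scale = scale

    def _compute_lora_intermediate(self, x):
        # Linear case: a matmul projects the activations down to the rank.
        return x @ self.lora_a

    def apply_lora(self, x, base_out):
        # Shared path, no linear/embedding conditionals: up-project the
        # intermediate and add the scaled delta to the base layer output.
        return base_out + self.lora_scale * (
            self._compute_lora_intermediate(x) @ self.lora_b
        )

class LoRAEmbed(LoRAMixin):
    """Embedding case: token ids index rows of A instead of a matmul."""

    def _compute_lora_intermediate(self, token_ids):
        # A has shape (vocab_size, rank); a gather replaces the matmul.
        return self.lora_a[token_ids]
```

Because `apply_lora` only ever calls `_compute_lora_intermediate`, the same code handles both a (batch, seq_len, in_dim) input and a flattened (tokens, in_dim) input, which is what makes this split useful for the planned layout change.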

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request refactors the LoRAMixin to better separate the logic for linear and embedding layers. The introduction of the _compute_lora_intermediate method, which is overridden in LoRAEmbed, is a good use of polymorphism and simplifies the apply_lora method by removing conditional logic. My review includes a couple of minor suggestions to improve code clarity by marking unused function parameters, which is a good practice for maintainability.

@pcmoritz pcmoritz merged commit 8048778 into NovaSky-AI:main Jan 28, 2026
4 of 6 checks passed
