Add support for exporting ComfyUI-compatible checkpoints for diffusion models (e.g., LTX-2) #911

Open
ynankani wants to merge 2 commits into main from ynankani/ltx2_comfyui_checkpoint

Conversation

@ynankani
Contributor

What does this PR do

Add support for exporting a ComfyUI-compatible checkpoint for diffusion models (e.g., LTX-2).

Type of change:

Overview:
Add support for exporting a ComfyUI-compatible checkpoint for diffusion models (e.g., LTX-2):

  1. Added a parameter for merging the base VAE, vocoder, and connectors into the quantized checkpoint.
  2. Stored quantization metadata and tagged the export tool as modelopt, as required for ComfyUI compatibility.
  3. Internally updated the transformer block prefixes to match what ComfyUI expects.
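A minimal sketch of the prefix rewrite described in item 3. The old/new prefix strings here are illustrative assumptions; the actual mapping lives in the PR's export code.

```python
def remap_transformer_prefixes(
    state_dict,
    old_prefix="transformer_blocks.",          # assumed diffusers-style prefix
    new_prefix="model.diffusion_model.blocks.",  # assumed ComfyUI-style prefix
):
    """Return a new state dict whose keys use the ComfyUI-style prefix."""
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(old_prefix):
            remapped[new_prefix + key[len(old_prefix):]] = value
        else:
            # Non-transformer keys (norms, embeddings, ...) pass through unchanged.
            remapped[key] = value
    return remapped
```

Keys outside the transformer blocks are left untouched, so the same pass can run safely over a merged state dict that also contains VAE or vocoder tensors.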

Usage

    export_hf_checkpoint(
        pipeline,
        export_dir=EXPORT_DIR,
        merged_base_safetensor_path=BASE_CKPT,  # merge VAE/vocoder from base
    )

Testing

  1. Tested with the LTX-2 model:
    a) initializing a twoStagePipeline object
    b) calling mtq.quantize on the transformer with NVFP4_DEFAULT_CFG
    c) exporting with export_hf_checkpoint, passing merged_base_safetensor_path to generate the merged checkpoint
  2. Ran the checkpoint generated in step 1 on ComfyUI to validate.
  3. Ran step 1 without merged_base_safetensor_path to check backward compatibility.
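The quantization metadata stored alongside the checkpoint (overview item 2) has to be a flat string-to-string dict to fit a safetensors header. A rough sketch of assembling such a dict: the `_export_format` and `_class_name` keys appear in the diff, while `_export_tool` and `_quantization_config` are assumed key names for illustration.

```python
import json


def build_export_metadata(component_class_name, hf_quant_config=None):
    """Assemble string-valued metadata for a safetensors export (sketch)."""
    metadata = {
        "_export_format": "safetensors_state_dict",  # from the diff context
        "_class_name": component_class_name,         # from the diff context
        "_export_tool": "modelopt",                  # assumed key for the tool tag
    }
    if hf_quant_config is not None:
        # safetensors metadata values must be strings, so serialize to JSON.
        metadata["_quantization_config"] = json.dumps(hf_quant_config)
    return metadata
```

ComfyUI (or any loader) can then recover the quant config with `json.loads` from the header without reading any tensors.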

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: NA
  • Did you add or update any necessary documentation?: NA
  • Did you update Changelog?: NA

Additional Information

…del(e.g., LTX-2)

Signed-off-by: ynankani <ynankani@nvidia.com>
@ynankani ynankani requested a review from a team as a code owner February 20, 2026 14:40
@ynankani ynankani requested a review from Edwardf0t1 February 20, 2026 14:40

codecov bot commented Feb 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 73.10%. Comparing base (7c4c9fd) to head (69107c0).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #911   +/-   ##
=======================================
  Coverage   73.10%   73.10%           
=======================================
  Files         205      205           
  Lines       22281    22281           
=======================================
  Hits        16288    16288           
  Misses       5993     5993           

Contributor

@jingyu-ml jingyu-ml left a comment


Left some comments, overall it looks good to me.

return False


def _merge_diffusion_transformer_with_non_transformer_components(

For now, this seems to work only for LTX2.

Are these mapping relationships hard-coded? If so, we should move this logic into a model-dependent function, for example:

model_type = LTX2
merge_function[LTX2](...)
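A sketch of the dispatch-table approach suggested above. The `MERGE_FUNCTIONS` registry, the `"ltx2"` key, and the prefix list inside the LTX2 merge rule are all illustrative assumptions, not the PR's actual implementation.

```python
def _merge_ltx2(transformer_sd, base_sd):
    """LTX2-specific merge: pull non-transformer tensors from the base checkpoint."""
    merged = dict(transformer_sd)
    for key, value in base_sd.items():
        # Assumed prefixes for the components the PR merges in.
        if key.startswith(("vae.", "vocoder.", "connector.")):
            merged[key] = value
    return merged


# One merge function per supported model type.
MERGE_FUNCTIONS = {"ltx2": _merge_ltx2}


def merge_for_model(model_type, transformer_sd, base_sd):
    try:
        return MERGE_FUNCTIONS[model_type](transformer_sd, base_sd)
    except KeyError:
        raise ValueError(f"No merge function registered for {model_type!r}")
```

Adding support for a new diffusion model then becomes a registry entry plus one function, instead of another branch in the shared export path.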


metadata_full: dict[str, str] = {}
if merged_base_safetensor_path is not None:
cpu_state_dict, metadata_full = (
_merge_diffusion_transformer_with_non_transformer_components(

As noted above, this function should be model-dependent

merge_function[model_type](...)

metadata["_export_format"] = "safetensors_state_dict"
metadata["_class_name"] = type(component).__name__

if hf_quant_config is not None:

We should add more checks to make this safer, e.g.:
if hf_quant_config is not None and merged_base_safetensor_path is not None:
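A sketch of a stricter guard along these lines. The parameter names mirror the diff; the extra validation rules themselves are assumptions about what "more checks" could mean here.

```python
import os


def should_merge(hf_quant_config, merged_base_safetensor_path):
    """Return True only when both preconditions for the merge hold (sketch)."""
    if hf_quant_config is None or merged_base_safetensor_path is None:
        return False
    # Assumed check: the base checkpoint must be a safetensors file.
    if not str(merged_base_safetensor_path).endswith(".safetensors"):
        raise ValueError(
            "merged_base_safetensor_path must point to a .safetensors file"
        )
    # Assumed check: fail soft if the path does not exist on disk.
    return os.path.isfile(merged_base_safetensor_path)
```

This keeps the no-merge path (plain quantized export) as the default whenever either input is missing, which matches the backward-compatibility test in the PR description.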
