LoRA trained on base model vs distilled model #175

@sarlea1

Description

Hi, I have a few questions about the relationship between the base model and the distilled model, specifically around
LoRA training and inference compatibility.

Background

From reading the code, I understand there are two separate architectures:
* DistilledPipeline: uses ltx-2.3-22b-distilled.safetensors directly as a standalone checkpoint, running 8-step
inference via DISTILLED_SIGMA_VALUES
* TI2VidTwoStagesPipeline: uses the base model for Stage 1 (full CFG, ~30 steps), then applies a separate
distilled_lora on top of the base model for Stage 2 super-resolution refinement
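For context on what "the standard full-range flow matching objective" means in the questions below, here is a toy numpy sketch (not the project's actual training code; the rectified-flow interpolation and velocity-target convention are assumptions):

```python
import numpy as np

def flow_matching_loss(x0, model, rng):
    """One step of a standard full-range flow matching objective:
    sample t uniformly over [0, 1] (the full sigma range, not a fixed
    8-value schedule), interpolate x_t between data and noise, and
    regress the model output onto the velocity (noise - x0)."""
    t = rng.uniform(0.0, 1.0)
    noise = rng.standard_normal(x0.shape)
    x_t = (1.0 - t) * x0 + t * noise   # rectified-flow interpolation
    target = noise - x0                # velocity target
    pred = model(x_t, t)
    return np.mean((pred - target) ** 2)

rng = np.random.default_rng(0)
# Toy "model" that predicts the velocity exactly when x0 = 0,
# since then x_t = t * noise and the target is noise:
x0 = np.zeros(4)
loss = flow_matching_loss(x0, lambda x_t, t: x_t / t, rng)
```

The key contrast with distilled inference is that training sees continuous t over [0, 1], whereas the distilled sampler only ever visits 8 fixed sigma values.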

Questions
1. Is ltx-2.3-22b-distilled.safetensors fine-tuned from the base model, or independently trained? This affects
whether a LoRA trained on the base model can be transferred to the distilled model.
2. If I train a LoRA on the base model and load it into DistilledPipeline, is the LoRA expected to work? DistilledPipeline runs 8-step inference with the distilled sigma values ([1.0, 0.99375, ..., 0.0]), while the LoRA was trained with the standard full-range flow matching objective, so I'm concerned about this mismatch.
3. What is the recommended workflow for training a custom LoRA that can be used with DistilledPipeline for fast
8-step inference? Should I:
* Train on the distilled model directly (using the standard flow matching training objective, not the distilled
sigma schedule), or
* Train on the base model and hope the weights are close enough for transfer?
4. The trainer does not seem to have any distillation-aware training mode. Is there a plan to support training
LoRAs specifically aligned with the distilled sigma schedule?
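To make question 1 concrete, here is a minimal numpy sketch of why the base-vs-independently-trained distinction matters for LoRA transfer (the fine-tune drift magnitude is invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A LoRA stores a low-rank weight update: W_eff = W + (alpha / r) * B @ A.
# The same delta is simply added on top of whichever checkpoint is loaded,
# so transfer depends on how close the distilled weights are to the base.
d, r, alpha = 8, 2, 2.0
W_base = rng.standard_normal((d, d))
A = rng.standard_normal((r, d)) * 0.01   # LoRA down-projection
B = rng.standard_normal((d, r)) * 0.01   # LoRA up-projection
delta = (alpha / r) * B @ A

W_merged_on_base = W_base + delta

# If the distilled model is a fine-tune of the base, its weights are a
# small perturbation of W_base and the delta lands in roughly the same
# place (hypothetical drift scale below). If it were trained
# independently, there is no such guarantee.
W_distilled = W_base + 0.001 * rng.standard_normal((d, d))
W_merged_on_distilled = W_distilled + delta
drift = np.linalg.norm(W_merged_on_distilled - W_merged_on_base)
```

Under this assumption the merged weights differ only by the fine-tune drift, which is why provenance of the distilled checkpoint is the first thing to establish.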

Any guidance on the intended workflow here would be greatly appreciated. Thanks!
