---
tags:
  - ltx-2
  - ltx-video
  - text-to-video
  - audio-video
pinned: true
language:
  - en
license: other
pipeline_tag: text-to-video
library_name: diffusers
---

# scoobyltx

This is a LoRA fine-tuned version of [`ltx-2-19b-dev.safetensors`](https://huggingface.co/Lightricks/LTX-2) trained on custom data.

## Model Details

- **Base Model:** [`ltx-2-19b-dev.safetensors`](https://huggingface.co/Lightricks/LTX-2)
- **Training Type:** LoRA fine-tuning
- **Training Steps:** 8000
- **Learning Rate:** 0.0001
- **Batch Size:** 1


## Usage

This model is designed to be used with the LTX-2 (Lightricks Audio-Video) pipeline.
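Outside ComfyUI, the LoRA can in principle be applied through the standard diffusers LoRA-loading convention. The sketch below is an assumption, not a verified recipe: the exact pipeline class for LTX-2 in diffusers and the `Lightricks/LTX-2` checkpoint id may differ from what ships today, so check the official LTX-2 repository for the supported loading path.

```python
def load_scoobyltx_pipeline(lora_path: str = "scoobyltx.safetensors"):
    """Load the LTX-2 base pipeline and apply this LoRA.

    Sketch only: assumes LTX-2 is exposed through diffusers'
    generic ``DiffusionPipeline`` and the standard
    ``load_lora_weights`` convention; the real pipeline class
    and checkpoint id may differ.
    """
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "Lightricks/LTX-2",            # assumed hub id for the base model
        torch_dtype=torch.bfloat16,
    )
    pipe.load_lora_weights(lora_path)  # standard diffusers LoRA hook
    return pipe
```

Loading a 19B-parameter model requires a GPU with substantial memory; the function defers its imports so it can be defined without `torch`/`diffusers` installed.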

### 🔌 Using Trained LoRAs in ComfyUI

To use the trained LoRA in ComfyUI:

1. Copy your trained LoRA checkpoint (`.safetensors` file) into the `models/loras` folder of your ComfyUI installation.
2. In your ComfyUI workflow:
    - Add a "Load LoRA" node and select your LoRA file
    - Connect it to the "Load Checkpoint" node to apply the LoRA to the base model
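The copy step can be scripted; a minimal sketch (the LoRA filename and the ComfyUI location are assumptions — substitute your own):

```shell
# Assumed names -- adjust LORA_FILE and COMFYUI_DIR to your setup.
LORA_FILE="scoobyltx.safetensors"
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"

# Make sure the loras folder exists, then install the checkpoint.
mkdir -p "$COMFYUI_DIR/models/loras"
if [ -f "$LORA_FILE" ]; then
    cp "$LORA_FILE" "$COMFYUI_DIR/models/loras/"
else
    echo "checkpoint not found: $LORA_FILE (train or download it first)"
fi
```

After copying, restart ComfyUI (or refresh the node list) so the new file appears in the "Load LoRA" node's dropdown.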

You can find reference Text-to-Video (T2V) and Image-to-Video (I2V) workflows in the
official [LTX-2 repository](https://github.com/Lightricks/LTX-2).


## License

This model inherits the license of the base model ([`ltx-2-19b-dev.safetensors`](https://huggingface.co/Lightricks/LTX-2)).

## Acknowledgments

- Base model: [Lightricks](https://huggingface.co/Lightricks/LTX-2)
- Trainer: [LTX-2](https://github.com/Lightricks/LTX-2)