Update README.md

Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.

Please refer to [this chart](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard) for a basic overview of quantization types.
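As an illustration (a sketch, not the only way), one quant from this repository can be fetched straight into that folder with the `huggingface-cli` tool; `ltx-video-2b-v0.9-Q3_K_S.gguf` is the file used in the example below, so adjust the filename to the quant you want:

```shell
# Sketch: download one quantized file from this repo into the ComfyUI unet folder.
# Requires the Hugging Face CLI: pip install -U "huggingface_hub[cli]"
huggingface-cli download city96/LTX-Video-gguf \
  ltx-video-2b-v0.9-Q3_K_S.gguf \
  --local-dir ComfyUI/models/unet
```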
## Diffusers support
You can also use the checkpoints with the `diffusers` library.
Make sure to install `diffusers` from source:

```bash
pip install git+https://github.com/huggingface/diffusers
```
Then install `gguf`:

```bash
pip install -U gguf
```
We're now ready to run inference:

<details>
<summary>Inference code</summary>

```py
import torch
from diffusers.utils import export_to_video
from diffusers import LTXPipeline, LTXVideoTransformer3DModel, GGUFQuantizationConfig

ckpt_path = (
    "https://huggingface.co/city96/LTX-Video-gguf/blob/main/ltx-video-2b-v0.9-Q3_K_S.gguf"
)
transformer = LTXVideoTransformer3DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

prompt = "A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage"
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"

video = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=704,
    height=480,
    num_frames=161,
    num_inference_steps=50,
    generator=torch.manual_seed(0),
).frames[0]
export_to_video(video, "output_gguf_ltx.mp4", fps=24)
```
</details>
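
The `pipe(...)` call above uses a 704x480 resolution and 161 frames. Assuming LTX-Video's usual alignment constraints (32-pixel spatial and 8-frame temporal compression, so width/height divisible by 32 and a frame count of the form 8k + 1), a small hypothetical helper can sanity-check arguments before committing to a 50-step render:

```python
# Hypothetical sanity check, assuming LTX-Video wants width/height
# divisible by 32 and num_frames of the form 8*k + 1.
def check_ltx_dims(width: int, height: int, num_frames: int) -> bool:
    return width % 32 == 0 and height % 32 == 0 and num_frames % 8 == 1

print(check_ltx_dims(704, 480, 161))  # True: the values used above fit
print(check_ltx_dims(704, 480, 160))  # False: 160 = 8*20, not 8*k + 1
```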