Instructions to use yitongl/sparse_quant_exp with libraries, inference providers, notebooks, and local apps.

- Libraries
  - Diffusers
- Notebooks
  - Google Colab
  - Kaggle

How to use yitongl/sparse_quant_exp with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained(
    "yitongl/sparse_quant_exp",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
Document backend snapshot for sfp4 checkpoint-750
README.md
CHANGED
```diff
@@ -9,6 +9,7 @@ Contents:
 
 - `transformer/config.json`
 - `transformer/diffusion_pytorch_model.safetensors`
+- `backend_snapshot/`
 
 Training run:
 
@@ -21,3 +22,8 @@ Training run:
 This package does not include the distributed optimizer/training-state
 checkpoint. Use the original `distributed_checkpoint/` directory if exact
 training resume state is required.
+
+`backend_snapshot/` contains the local FastVideo backend code used by this
+checkpoint, including `SPARSE_FP4_OURS_P_ATTN`, its Triton forward/backward
+kernel, FP4 quant helpers, VSA metadata helper, backend wiring, and the exact
+SFT launch scripts.
```