LTX-2 Rapid GGUF

GGUF version of the original LTX-2 Rapid, converted in Colab (see the Jupyter notebook script) using city96's GGUF conversion instructions, with the config field added manually to the metadata.

Roadmap

If someone provides test results comparing the different quants, that would be great! Open a PR into this repo.

Developing

If you're making custom GGUFs, do not forget to add the following to the GGUF metadata as the config string field:

    {"transformer": {"_class_name": "AVTransformer3DModel", "_diffusers_version": "0.25.1", "activation_fn": "gelu-approximate", "attention_bias": true, "attention_head_dim": 128, "attention_type": "default", "caption_channels": 3840, "cross_attention_dim": 4096, "double_self_attention": false, "dropout": 0.0, "in_channels": 128, "norm_elementwise_affine": false, "norm_eps": 1e-06, "norm_num_groups": 32, "num_attention_heads": 32, "num_embeds_ada_norm": 1000, "num_layers": 48, "num_vector_embeds": null, "only_cross_attention": false, "cross_attention_norm": true, "out_channels": 128, "upcast_attention": false, "use_linear_projection": false, "qk_norm": "rms_norm", "standardization_norm": "rms_norm", "positional_embedding_type": "rope", "positional_embedding_theta": 10000.0, "positional_embedding_max_pos": [20, 2048, 2048], "timestep_scale_multiplier": 1000, "av_ca_timestep_scale_multiplier": 1000.0, "causal_temporal_positioning": true, "audio_num_attention_heads": 32, "audio_attention_head_dim": 64, "use_audio_video_cross_attention": true, "share_ff": false, "audio_out_channels": 128, "audio_cross_attention_dim": 2048, "audio_positional_embedding_max_pos": [20], "av_cross_ada_norm": true, "use_embeddings_connector": true, "connector_attention_head_dim": 128, "connector_num_attention_heads": 30, "connector_num_layers": 2, "connector_positional_embedding_max_pos": [4096], "connector_num_learnable_registers": 128, "connector_norm_output": true, "use_middle_indices_grid": true, "rope_type": "split", "frequencies_precision": "float64"}, "scheduler": {"_class_name": "RectifiedFlowScheduler", "_diffusers_version": "0.25.1", "num_train_timesteps": 1000, "shifting": null, "base_resolution": null, "sampler": "LinearQuadratic"}}

(The config can change between releases and should be taken from the original model to work properly.)
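As a reference for the step above, here is a minimal sketch of preparing that config string before writing it into the GGUF metadata. The dict below is a truncated excerpt of the full config shown earlier (take the real values from the original model), and the writer call in the comment is an assumed usage of the gguf Python package used by city96's conversion scripts, so verify it against your conversion script:

```python
import json

# Truncated excerpt of the config shown above -- use the full config
# from the original model in a real conversion.
config = {
    "transformer": {
        "_class_name": "AVTransformer3DModel",
        "num_layers": 48,
        "num_attention_heads": 32,
        "attention_head_dim": 128,
        "in_channels": 128,
        "out_channels": 128,
    },
    "scheduler": {
        "_class_name": "RectifiedFlowScheduler",
        "num_train_timesteps": 1000,
        "sampler": "LinearQuadratic",
    },
}

# Serialize to a single JSON string; this becomes the value of the
# "config" string field in the GGUF metadata.
config_str = json.dumps(config)

# With the gguf Python package, the field would then be added roughly
# like this (assumed API, check your conversion script):
#   writer.add_string("config", config_str)
print(config_str)
```

Loading the resulting model should fail with a missing-config error if this field is absent, which is a quick way to check whether the metadata was written.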

Model size: 19B params
Architecture: ltxv

Available quantizations: 3-bit, 4-bit, 5-bit, 8-bit, 16-bit


Model tree for 3ndetz/LTX2-Rapid-Merges-GGUF

Base model: Lightricks/LTX-2 (this model is the quantized version)