🎬 LTX-2.3 NVFP4 Quantized Models

Compress LTX-2.3 from 46 GB → 13 GB while retaining most of the quality

These are NVFP4-quantized variants of LTX-2.3, aimed at users with limited VRAM who still want high-quality video generation. Along with the models, I'm sharing Comfy Bathroom, a custom node suite built specifically for applying LoRAs to these heavily quantized models.


📦 Available Models

| Model | Size | Description |
|---|---|---|
| LTX-2.3-FP4 | ~21 GB | Standard NVFP4 quantization |
| LTX-2.3-FP4ME | ~13 GB | Mixed Extreme: blocks 0 and 47 in BF16; blocks 1 and 46 in FP8 |

📦 Coming Soon

| Model | Size | Description |
|---|---|---|
| LTX-2.3-FP4MEL | ~15 GB | Mixed Extreme, LoRA-friendly |

⚠️ Note: FP4ME variants have known LoRA compatibility issues. Use the included Comfy Bathroom nodes for best results.


🚿 Comfy Bathroom: Custom Node Suite

Comfy Bathroom is a LoRA loading system for aggressively quantized NVFP4 models. It provides per-block weight control to avoid the artifacts that appear when LoRAs are applied on top of such heavy quantization.

Why "Bathroom"?

Because I made the FP4 models with Comfy Kitchen, and when something goes wrong in the kitchen... you end up in the bathroom. 🚽

🪥 Nodes Included

| Node | Icon | Description |
|---|---|---|
| Toothbrush | 🪥 | LoRA loader with built-in FP4 presets |
| Mirror (Simple) | 🪞 | Per-block on/off toggles |
| Mirror (Fancy) | 🪞 | Per-block 0.0–1.0 strength sliders |
| Shower | 🚿 | Quick preset applier |
| Towel | 🧾 | LoRA packet info display |
| Bathroom Sink | 🚰 | Stack multiple LoRAs with global strength |

🔧 How It Works

The Problem

When using LoRAs on NVFP4 quantized models, certain transformer blocks cause artifacts:

  • Low blocks (0–10) → smoky/cloudy output (noise-generation interference)
  • High blocks (40–47) → double vision/ghosting (detail-alignment issues)
  • Middle blocks (11–39) → usually stable for LoRA injection
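As a hedged sketch, the ranges above can be expressed as a simple lookup (the function name is mine, not part of the node suite):

```python
# Hypothetical helper summarizing the block ranges listed above.
def block_zone(block: int) -> str:
    """Classify an LTX-2.3 transformer block (0-47) by LoRA artifact risk."""
    if 0 <= block <= 10:
        return "low: risk of smoky/cloudy output"
    if 40 <= block <= 47:
        return "high: risk of double vision/ghosting"
    return "middle: usually stable for LoRA injection"
```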

The Solution

Comfy Bathroom applies intelligent weight curves to transformer blocks, ramping up LoRA strength gradually in early blocks and tapering down in late blocks.
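The idea can be sketched in a few lines, assuming a simplified model where weights are plain numbers keyed by block name (the real ComfyUI patching code differs):

```python
# Minimal sketch of per-block LoRA scaling. The dict layout and the
# "blocks.<n>...." key format are assumptions for illustration only.
def apply_lora_per_block(base, lora_deltas, block_weights, strength=1.0):
    """Add each LoRA delta scaled by its block's curve weight."""
    patched = {}
    for name, weight in base.items():
        block = int(name.split(".")[1])      # e.g. "blocks.12.attn" -> 12
        delta = lora_deltas.get(name, 0.0)
        patched[name] = weight + strength * block_weights.get(block, 1.0) * delta
    return patched
```

With a curve weight of 0.0 on a problem block, its LoRA delta is dropped entirely, which is how a preset can switch individual blocks off.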

Built-in Presets

| Preset | Description | Best For |
|---|---|---|
| FP4ME Light | Gentle ramp up/down; block 1 off | Style LoRAs, general use |
| FP4ME Heavy | Aggressive taper to 0 at block 46 | Heavy style modifications |
| FP4MEL Light | Offset for BF16 blocks 1 and 46 | FP4MEL model + style LoRAs |
| FP4MEL Heavy | Aggressive taper with BF16 safe zones | FP4MEL + content LoRAs |

Weight Curve Visualization

FP4ME Light:
Block:     0    1    2    3    4    5    6    7    8    9   10   11  ...   39   40   41   42   43   44   45   46   47
Weight: 1.00 0.00 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 1.00 1.00  ... 1.00 0.95 0.90 0.85 0.80 0.75 0.70 0.50 1.00
        BF16  FP8 ←──── RAMP UP ────→           FULL            ←────── TAPER DOWN ──────→   FP8 BF16
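The FP4ME Light curve above can be transcribed programmatically. This is a sketch, not the node suite's code: the function name is mine, and the piecewise breakpoints (including the 0.50 value at FP8 block 46) are read off the table.

```python
# FP4ME Light per-block weight curve, transcribed from the table above.
def fp4me_light_weight(block: int) -> float:
    if block in (0, 47):              # BF16 blocks run at full strength
        return 1.0
    if block == 1:                    # FP8 block 1 is switched off
        return 0.0
    if 2 <= block <= 9:               # ramp up: 0.1 -> 0.8
        return round(0.1 * (block - 1), 2)
    if 10 <= block <= 39:             # stable middle blocks
        return 1.0
    if 40 <= block <= 45:             # taper down: 0.95 -> 0.70
        return round(1.0 - 0.05 * (block - 39), 2)
    return 0.5                        # FP8 block 46 gets an extra cut
```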

📸 Example Workflow

*(screenshot: Comfy Bathroom workflow)*

Basic Usage

Toothbrush (LoRA + Preset) ──► Bathroom Sink ──► Model
                                     │
                               global_strength

Advanced Usage

Toothbrush ──► Shower (swap preset) ──► Mirror (fine tune) ──► Bathroom Sink ──► Model
     │                                                               │
     └───────────────────────────────────────────────────────────────┘
                       Multiple LoRAs can be stacked

💡 Usage Tips

For Style LoRAs (90% of use cases)

  • Use FP4ME Light preset
  • The base model already knows anatomy/structure
  • LoRA only needs to affect the "style layer" (middle blocks)

For Content LoRAs (specific characters, unusual anatomy)

  • Try FP4ME Heavy preset
  • May need to experiment with Mirror nodes for fine-tuning
  • Content LoRAs teach new concepts → they need more block coverage

For Multiple LoRAs

  • Connect multiple Toothbrush nodes to Bathroom Sink
  • Use global_strength to scale everything together
  • Individual LoRA strengths multiply with global
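For example, the combined strengths work out as follows (numbers and LoRA names are illustrative):

```python
# Effective strength = individual LoRA strength * global_strength.
global_strength = 0.8
lora_strengths = {"style_lora": 1.0, "character_lora": 0.5}
effective = {name: s * global_strength for name, s in lora_strengths.items()}
print(effective)  # {'style_lora': 0.8, 'character_lora': 0.4}
```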

📋 Requirements

  • ComfyUI (latest version recommended)
  • comfy_kitchen (for NVFP4 model loading)
  • GPU with BF16/FP16/FP8/FP4 support (known to work on RTX 3000-series and newer)
  • 16 GB+ VRAM recommended; 12 GB has been reported to work

πŸ“ Installation

  1. Download the model files to ComfyUI/models/checkpoints/ (unless you already have a working UNet loader for LTX-2.3)
  2. Install Comfy Bathroom by placing the node file at:
    ComfyUI/custom_nodes/comfy_bathroom.py
    
  3. Restart ComfyUI
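On Linux/macOS, steps 1–2 look roughly like this, run from the ComfyUI root (the weight filename and download location are placeholders):

```shell
# Create the expected directories (usually already present in ComfyUI)
mkdir -p models/checkpoints custom_nodes
# Step 1: put the downloaded weights in place (filename is a placeholder)
# mv ~/Downloads/LTX-2.3-FP4ME.safetensors models/checkpoints/
# Step 2: the node suite is a single file
# mv ~/Downloads/comfy_bathroom.py custom_nodes/
test -d models/checkpoints && test -d custom_nodes && echo "layout ready"
```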

πŸ™ Credits

  • LTX-2.3 by Lightricks
  • NVFP4 Quantization via Modified Comfy Kitchen
  • Comfy Bathroom Nodes written by GLM5 (Z.ai)

📜 License

This model is subject to the LTX-2 Community License Agreement.


When your LoRA is full of artifacts, head to the Bathroom. 🚿
