AITRADER committed
Commit 929cbb0 · verified · 1 parent: bbfc3c4

Upload README.md with huggingface_hub

Files changed (1): README.md (+8 −8)
README.md CHANGED
@@ -12,18 +12,18 @@ tags:
  - quantized
 ---
 
-# LTX-2 19B Dev (4-bit) - MLX
+# LTX-2 19B Dev (8-bit) - MLX
 
-This is a 4-bit quantized version of the [LTX-2 19B Dev](https://huggingface.co/Lightricks/LTX-2) model, optimized for Apple Silicon using MLX.
+This is an 8-bit quantized version of the [LTX-2 19B Dev](https://huggingface.co/Lightricks/LTX-2) model, optimized for Apple Silicon using MLX.
 
 ## Model Description
 
-LTX-2 is a state-of-the-art video generation model from Lightricks. This version has been quantized to 4-bit precision for efficient inference on Apple Silicon devices with MLX.
+LTX-2 is a state-of-the-art video generation model from Lightricks. This version has been quantized to 8-bit precision for efficient inference on Apple Silicon devices with MLX.
 
 ### Key Features
 
 - **Pipeline**: Dev (full control with CFG scale)
-- **Quantization**: 4-bit precision
+- **Quantization**: 8-bit precision
 - **Framework**: MLX (Apple Silicon optimized)
 - **Memory**: ~19GB VRAM required
@@ -40,14 +40,14 @@ pip install git+https://github.com/CharafChnioune/mlx-video.git
 ```bash
 # Basic generation
 mlx-video --prompt "A beautiful sunset over the ocean" \
-  --model-repo AITRADER/ltx2-dev-4bit-mlx \
+  --model-repo AITRADER/ltx2-dev-8bit-mlx \
   --pipeline dev \
   --height 512 --width 512 \
   --num-frames 33
 
 # Dev pipeline with CFG
 mlx-video --prompt 'A cat playing with yarn' \
-  --model-repo AITRADER/ltx2-dev-4bit-mlx \
+  --model-repo AITRADER/ltx2-dev-8bit-mlx \
   --pipeline dev \
   --steps 40 --cfg-scale 4.0
 ```
@@ -59,7 +59,7 @@ from mlx_video import generate_video
 
 video = generate_video(
     prompt="A beautiful sunset over the ocean",
-    model_repo="AITRADER/ltx2-dev-4bit-mlx",
+    model_repo="AITRADER/ltx2-dev-8bit-mlx",
     pipeline="dev",
     height=512,
     width=512,
@@ -69,7 +69,7 @@ video = generate_video(
 
 ## Model Files
 
-- `ltx-2-19b-dev-mlx.safetensors` - Main model weights (4-bit quantized)
+- `ltx-2-19b-dev-mlx.safetensors` - Main model weights (8-bit quantized)
 - `quantization.json` - Quantization configuration
 - `config.json` - Model configuration
 - `layer_report.json` - Layer information
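
As a rough sanity check on the "~19GB VRAM required" figure in the diff above, here is a back-of-envelope weight-size estimate (a sketch only: it counts raw weight storage and ignores activations, the text encoder, and the per-group scale/bias metadata that MLX quantization adds):

```python
# Approximate weight memory for a 19B-parameter model at a given bit width.
# Assumption: pure weight storage, 1 GB = 1e9 bytes; real peak usage is higher.

def weight_gb(num_params: float, bits: int) -> float:
    """Approximate size of the quantized weights in GB."""
    return num_params * bits / 8 / 1e9

params = 19e9  # LTX-2 19B Dev

print(f"8-bit: ~{weight_gb(params, 8):.0f} GB")   # ~19 GB, consistent with the README
print(f"4-bit: ~{weight_gb(params, 4):.1f} GB")   # ~9.5 GB
```

This also shows why the 4-bit → 8-bit change in this commit roughly doubles the weight footprint while keeping the same ~19GB-class memory recommendation plausible once runtime overhead is included.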