Update README.md
README.md CHANGED

````diff
@@ -129,7 +129,7 @@ output = pipeline(
 output.save("flux2_output.png")
 ```
 
-**Using the model with
+**Using the model with a quantized text encoder:** Even with the text encoder quantized, the model still has high memory requirements. While it may run on an **H100**, the available VRAM would be very tight, so it is recommended to use GPUs with **more than 85 GB of VRAM** to ensure stable execution and avoid OOM issues.
 
 ```python
 import torch
````