Fredtt3 committed
Commit 380f8dd (verified) · Parent: 2a25462

Update README.md

Files changed (1): README.md (+1 −1)
````diff
--- a/README.md
+++ b/README.md
@@ -129,7 +129,7 @@ output = pipeline(
 output.save("flux2_output.png")
 ```
 
-**Using the model with the quantized text encoder**, you would still need an H100, although you would be pushing it, so preferably use GPUs with more than 85GB of VRAM
+**Using the model with a quantized text encoder:** Even with the text encoder quantized, the model still has high memory requirements. While it may run on an **H100**, the available VRAM would be very tight, so it is recommended to use GPUs with **more than 85 GB of VRAM** to ensure stable execution and avoid OOM issues.
 
 ```python
 import torch
````