jordigonzm committed
Commit d1d90a9 · verified · 1 Parent(s): 264f215

Update README.md

Files changed (1)
  1. README.md +6 -18
README.md CHANGED
@@ -48,25 +48,13 @@ docker run --rm --gpus all -p 8080:80 \
  your-image:tag
  ```
 
- **Conservative `COMFY_FLAGS` presets by GPU**
+ **Recommended `COMFY_FLAGS`**
 
  These presets prioritize stability. They’re meant as safe defaults; you should still **set `COMFY_FLAGS` explicitly** for your deployment when you know your workload.
 
- **Tesla T4 / L4**
- - Recommended COMFY_FLAGS: `--lowvram --disable-xformers --disable-smart-memory --use-pytorch-cross-attention --cache-lru 1 --reserve-vram 3.5`
-
- **G10 | A10G | A10 | A40 | L40S| L40**
- - Recommended COMFY_FLAGS: `--normalvram --disable-xformers --use-pytorch-cross-attention --cache-lru 64 --reserve-vram 1.5`
-
- **A100 80GB | H100 | H200**
- - Recommended COMFY_FLAGS: `--normalvram --disable-xformers --use-pytorch-cross-attention --cache-lru 128 --reserve-vram 1.0`
-
- **By VRAM:**
-
- * `≥ 70 GB` → `--normalvram --disable-xformers --use-pytorch-cross-attention --cache-lru 256 --reserve-vram 1.0`
- * `≥ 22 GB` → `--normalvram --disable-xformers --use-pytorch-cross-attention --cache-lru 64 --reserve-vram 1.5`
- * `< 22 GB` → `--lowvram --disable-xformers --disable-smart-memory --use-pytorch-cross-attention --cache-lru 1 --reserve-vram 3.5`
-
- > Keep `--disable-xformers --use-pytorch-cross-attention` in all presets.
- > Use `--disable-smart-memory` for low VRAM profiles.
+ `--disable-xformers --use-pytorch-cross-attention --cache-lru 2 --reserve-vram 2.0`
+
+ **Minimal recommended `hardware`**
+
+ - Nvidia L4: 8 vCPU, 32 GiB RAM, disk 40 GiB
 
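Putting the new recommendation together with the `docker run` line from the hunk header, a deployment might look like the sketch below. This is an assumption-laden example, not part of the diff: it presumes the image's entrypoint reads a `COMFY_FLAGS` environment variable (the README does not show the `-e` line in this hunk), and `your-image:tag` is the placeholder from the README itself.

```shell
# Sketch only: assumes the container entrypoint forwards COMFY_FLAGS to ComfyUI
# and that an NVIDIA GPU (e.g. an L4, per the "Minimal recommended hardware"
# bullet) is available to the Docker daemon.
docker run --rm --gpus all -p 8080:80 \
  -e COMFY_FLAGS="--disable-xformers --use-pytorch-cross-attention --cache-lru 2 --reserve-vram 2.0" \
  your-image:tag
```

The single flag string replaces the per-GPU presets the commit removes; per the retained prose, you should still set `COMFY_FLAGS` explicitly once you know your workload.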