---
license: other
tags:
- comfyui
- flux
- sdxl
- gguf
- stable-diffusion
- t5xxl
- controlnet
- unet
- vae
- model-hub
- one-click
---
> ⚠️ **Work in Progress**
> This repo is actively being developed, especially the model card. Use it as you see fit, but know it isn't considered finished yet.
# ComfyUI-Starter-Packs
> A curated vault of the most essential models for ComfyUI users. Flux1, SDXL, ControlNets, CLIPs, and GGUFs, all in one place and carefully organized.
---
## 🧪 What's Inside
This repo is not a chaotic dumping ground. It's a **purposeful collection** of the most important models:
### Flux1
- **Unet Models**: Dev, Schnell, Depth, Canny, Fill
- **GGUF Versions**: Q3, Q5, Q6 for each major branch
- **Clip + T5XXL** encoders (standard + GGUF versions)
- **Loras**: Only the ones that meaningfully improve results
### SDXL
- **Top Models** from Civitai (Realism, Stylized, Experimental)
- **Base + Refiner** official models
- **ControlNets**: Depth, Canny, OpenPose, Normal, etc.
### Extra
- VAE, upscalers, and anything required to support workflows
---
## 🎛️ Unet Recommendations (Based on VRAM)
| VRAM | Use Case | Recommended File |
|------|----------|------------------|
| 16GB+ | Full-quality FP8 | flux1-dev-fp8.safetensors |
| 12GB | Balanced Q5_K_S (GGUF) | flux1-dev-Q5_K_S.gguf |
| 8GB | Light Q3_K_S (GGUF) | flux1-dev-Q3_K_S.gguf |
GGUF models are significantly lighter and designed for **low-VRAM** systems.
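The tiers above boil down to a simple threshold lookup. As a minimal sketch (the function name is ours, and the filenames simply mirror the table):

```python
def pick_flux_unet(vram_gb: float) -> str:
    """Suggest a Flux1-dev variant for the given amount of GPU VRAM.

    Thresholds follow the recommendation table: FP8 at 16GB+,
    Q5_K_S GGUF at 12GB, and Q3_K_S GGUF below that.
    """
    if vram_gb >= 16:
        return "flux1-dev-fp8.safetensors"
    if vram_gb >= 12:
        return "flux1-dev-Q5_K_S.gguf"
    return "flux1-dev-Q3_K_S.gguf"


print(pick_flux_unet(12))  # flux1-dev-Q5_K_S.gguf
```

Treat the boundaries as starting points, not hard rules; other models loaded alongside the Unet (CLIP, VAE, ControlNets) eat into the same VRAM budget.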
## 🧠 T5XXL Recommendations (Based on RAM)
| System RAM | Use Case | Recommended File |
|------------|----------|------------------|
| 64GB | Max quality | t5xxl_fp16.safetensors |
| 32GB | High quality (can crash if multitasking) | t5xxl_fp16.safetensors |
| 16GB | Balanced | t5xxl_fp8_scaled.safetensors |
| <16GB | Low-memory / Safe mode | GGUF Q5_K_S or Q3_K_S |
> ⚠️ These are **recommended tiers**, not hard rules. RAM usage depends on your active processes, ComfyUI extensions, batch sizes, and other factors.
> If you're getting random crashes, try scaling down one tier.
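The same tier logic can be sketched for the text encoder. This is illustrative only: the `multitasking` flag and the GGUF filename are our assumptions, standing in for the "can crash if multitasking" caveat and the "Q5_K_S or Q3_K_S" row in the table:

```python
def pick_t5xxl(ram_gb: float, multitasking: bool = False) -> str:
    """Suggest a T5XXL encoder file for the given system RAM.

    Mirrors the recommendation table: FP16 needs 64GB to be safe
    (or 32GB if nothing else heavy is running), FP8 fits 16GB,
    and below that a GGUF quant is the safe choice.
    """
    if ram_gb >= 64 or (ram_gb >= 32 and not multitasking):
        return "t5xxl_fp16.safetensors"
    if ram_gb >= 16:
        return "t5xxl_fp8_scaled.safetensors"
    # Hypothetical GGUF filename; use the Q5_K_S or Q3_K_S file shipped here.
    return "t5xxl-Q5_K_S.gguf"
```

If a 32GB system crashes mid-generation while browsers or other apps are open, that is exactly the "scale down one tier" case: pass `multitasking=True` (i.e. pick FP8 instead of FP16).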
---
## 📁 Folder Structure (Flux1 Only)
```
Flux1/
├── unet/
│   ├── Dev/
│   │   ├── flux1-dev-fp8.safetensors
│   │   └── GGUF/
│   ├── Schnell/
│   ├── Depth/
│   ├── Canny/
│   └── Fill/
├── clip/
│   ├── t5xxl_fp16.safetensors
│   └── GGUF/
│       └── ...
└── loras/
```
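If you want to pre-create this layout locally before downloading, the directory tree above can be scaffolded with a few lines of stdlib Python (the root name and list of paths just transcribe the tree; adjust them to match where your ComfyUI install expects models):

```python
from pathlib import Path

# Directory layout transcribed from the tree above (leaf dirs only).
FLUX1_DIRS = [
    "unet/Dev/GGUF",
    "unet/Schnell",
    "unet/Depth",
    "unet/Canny",
    "unet/Fill",
    "clip/GGUF",
    "loras",
]


def scaffold(root: str = "Flux1") -> None:
    """Create the Flux1 folder skeleton; existing directories are left alone."""
    for rel in FLUX1_DIRS:
        Path(root, rel).mkdir(parents=True, exist_ok=True)
```

`exist_ok=True` makes the call idempotent, so it is safe to re-run after adding new model files.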
---
## 📸 Model Previews (Coming Soon)
We will add a single grid-style graphic showing example outputs:
- **Dev vs Schnell**: Quality vs Speed
- **Depth / Canny / Fill**: Source image → processed map → output
- **SDXL examples**: Realism, Stylized, etc.
Preview images will be grouped into one compact visual block per category.
---
## 📢 Want It Even Easier?
Skip the manual downloads.
👉 **[Patreon.com/MaxedOut](https://patreon.com)** to get:
- One-click installers for all major Flux & SDXL workflows
- Organized ComfyUI folders built for beginners and pros
- Specialized templates (e.g. Mega Flux, Tiled Composites, Realistic Portraits)
- Behind-the-scenes model picks and tips
---
## ❓ FAQ
**Q: Why not every GGUF?**
A: Because Q3, Q5, and Q6 cover the most meaningful range. No bloat.
**Q: Are these the official models?**
A: Yes. They are sourced directly from their creators or from validated mirrors.
**Q: Will this grow?**
A: Yes. But only with purpose.
**Q: Why aren't there more Loras here?**
A: Stylized or niche Loras are showcased on Patreon, where we do deeper dives and examples. Some may get added here later if they become foundational.
---
## ✨ Final Thoughts
You shouldn't need to hunt through 12 Discord servers and 6 Civitai pages just to build your ComfyUI folder.
This repo fixes that.