---
license: other
tags:
- comfyui
- flux
- sdxl
- gguf
- stable-diffusion
- t5xxl
- controlnet
- unet
- vae
- model-hub
- one-click
---
> ⚠️ **Work in Progress**
> This repo is actively being developed, especially the model card. Use it as you see fit, but it isn't considered finished yet.
# ComfyUI-Starter-Packs
> A curated vault of the most essential models for ComfyUI users: Flux1, SDXL, ControlNets, CLIP encoders, and GGUFs, all in one place and carefully organized.
---
## πŸͺœ What's Inside
This repo is not a chaotic dumping ground. It’s a **purposeful collection** of the most important models:
### Flux1
- **Unet Models**: Dev, Schnell, Depth, Canny, Fill
- **GGUF Versions**: Q3, Q5, Q6 for each major branch
- **Clip + T5XXL** encoders (standard + GGUF versions)
- **Loras**: Only if they meaningfully improve results
### SDXL
- **Top Models** from Civitai (Realism, Stylized, Experimental)
- **Base + Refiner** official models
- **ControlNets**: Depth, Canny, OpenPose, Normal, etc.
### Extra
- VAE, upscalers, and anything required to support workflows
---
## πŸ‹οΈ Unet Recommendations (Based on VRAM)
| VRAM | Use case | Recommended model |
|------|----------|-------------------|
| 16GB+ | Full-quality FP8 | flux1-dev-fp8.safetensors |
| 12GB | Balanced Q5_K_S (GGUF) | flux1-dev-Q5_K_S.gguf |
| 8GB | Light Q3_K_S (GGUF) | flux1-dev-Q3_K_S.gguf |
GGUF models are significantly lighter and designed for **low-VRAM** systems.
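The VRAM tiers above can be expressed as a tiny helper. `recommend_flux_unet` is a hypothetical function for illustration, not part of this repo; the thresholds simply mirror the 16/12/8 GB rows of the table.

```python
def recommend_flux_unet(vram_gb: float) -> str:
    """Hypothetical picker: map free VRAM (GB) to the recommended Flux1 unet."""
    if vram_gb >= 16:
        return "flux1-dev-fp8.safetensors"   # full-quality FP8 tier
    if vram_gb >= 12:
        return "flux1-dev-Q5_K_S.gguf"       # balanced GGUF tier
    return "flux1-dev-Q3_K_S.gguf"           # lightest GGUF tier


if __name__ == "__main__":
    print(recommend_flux_unet(24))  # flux1-dev-fp8.safetensors
```

Treat the boundaries as soft: other processes using VRAM can push you down a tier.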
## 🧠 T5XXL Recommendations (Based on RAM)
| System RAM | Use case | Recommended model |
|------------|----------|-------------------|
| 64GB | Max quality | t5xxl_fp16.safetensors |
| 32GB | High quality (can crash if multitasking) | t5xxl_fp16.safetensors |
| 16GB | Balanced | t5xxl_fp8_scaled.safetensors |
| <16GB | Low-memory / safe mode | GGUF Q5_K_S or Q3_K_S |
> ⚠️ These are **recommended tiers**, not hard rules. RAM usage depends on your active processes, ComfyUI extensions, batch sizes, and other factors.
> If you're getting random crashes, try scaling down one tier.
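As a sketch, the RAM tiers above map to a picker like this. `recommend_t5xxl` is a hypothetical helper, and the GGUF filename in the last branch is illustrative only, since exact quantization filenames vary.

```python
def recommend_t5xxl(ram_gb: float) -> str:
    """Hypothetical picker: map system RAM (GB) to a T5XXL encoder tier."""
    # 32 GB and up: fp16 quality tier (at 32 GB it may crash while multitasking)
    if ram_gb >= 32:
        return "t5xxl_fp16.safetensors"
    # 16-31 GB: balanced fp8 tier
    if ram_gb >= 16:
        return "t5xxl_fp8_scaled.safetensors"
    # Below 16 GB: GGUF Q5_K_S or Q3_K_S (illustrative filename)
    return "t5xxl-Q5_K_S.gguf"
```

If you hit random crashes at a given tier, call this with a lower figure than your installed RAM to account for what your other processes are using.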
---
## πŸ› Folder Structure (Flux1 Only)
```
Flux1/
β”œβ”€ unet/
β”‚ β”œβ”€ Dev/
β”‚ β”‚ β”œβ”€ flux1-dev-fp8.safetensors
β”‚ β”‚ └─ GGUF/
β”‚ β”œβ”€ Schnell/
β”‚ β”œβ”€ Depth/
β”‚ β”œβ”€ Canny/
β”‚ └─ Fill/
β”œβ”€ clip/
β”‚ β”œβ”€ t5xxl_fp16.safetensors
β”‚ β”œβ”€ GGUF/
β”‚ └─ ...
└─ loras/
```
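If you prefer scripting your downloads, a minimal sketch with `huggingface_hub` might look like the following. The repo id and the `ComfyUI/models` root are assumptions, so adjust both for your setup; `hf_hub_download` returns a path inside the local cache, which is then copied into the matching ComfyUI folder.

```python
import shutil
from pathlib import Path

REPO_ID = "MaxedOut/ComfyUI-Starter-Packs"  # assumed repo id -- adjust if different
COMFYUI_ROOT = Path("ComfyUI/models")       # assumed ComfyUI install location


def comfy_target(repo_path: str) -> Path:
    """Map e.g. 'Flux1/unet/Dev/flux1-dev-fp8.safetensors' to
    'ComfyUI/models/unet/flux1-dev-fp8.safetensors'.

    The second path component in this repo (unet/clip/loras) matches
    ComfyUI's own model folder names.
    """
    parts = Path(repo_path).parts
    return COMFYUI_ROOT / parts[1] / parts[-1]


def fetch(repo_path: str) -> Path:
    """Download one file from the repo and place it in the right ComfyUI folder."""
    # Requires `pip install huggingface_hub`.
    from huggingface_hub import hf_hub_download

    target = comfy_target(repo_path)
    target.parent.mkdir(parents=True, exist_ok=True)
    cached = hf_hub_download(repo_id=REPO_ID, filename=repo_path)
    shutil.copy(cached, target)
    return target
```

Usage: `fetch("Flux1/unet/Dev/flux1-dev-fp8.safetensors")` pulls the file into `ComfyUI/models/unet/`.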
---
## πŸ“ˆ Model Previews (Coming Soon)
We will add a single grid-style graphic showing example outputs:
- **Dev vs Schnell**: Quality vs Speed
- **Depth / Canny / Fill**: Source image β†’ processed map β†’ output
- **SDXL examples**: Realism, Stylized, etc.
Preview images will be grouped into one compact visual block per category.
---
## πŸ“’ Want It Even Easier?
Skip the manual downloads.
🎁 **[Patreon.com/MaxedOut](https://patreon.com)** β€” Get:
- One-click installers for all major Flux & SDXL workflows
- Organized ComfyUI folders built for beginners and pros
- Specialized templates (e.g. Mega Flux, Tiled Composites, Realistic Portraits)
- Behind-the-scenes model picks and tips
---
## ❓ FAQ
**Q: Why not every GGUF?**
A: Because Q3, Q5, and Q6 cover the most meaningful range. No bloat.
**Q: Are these the official models?**
A: Yes. Most are sourced directly from the original creators or from validated mirrors.
**Q: Will this grow?**
A: Yes. But only with purpose.
**Q: Why aren’t there more Loras here?**
A: Stylized or niche Loras are showcased on Patreon, where we do deeper dives and examples. Some may get added here later if they become foundational.
---
## ✨ Final Thoughts
You shouldn’t need to hunt through 12 Discord servers and 6 Civitai pages just to build your ComfyUI folder.
This repo fixes that.