
ChuckFnS model, GGUF version to run on low-VRAM/RAM builds. Converted with this notebook: https://colab.research.google.com/drive/1xRwSht2tc82O8jrQQG4cl5LH1xdhiCyn?usp=sharing

Go to your ComfyUI 'models' folder and drop the files into their respective subfolders.
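If you'd rather script the download than drag and drop, here is a minimal sketch using huggingface_hub. The .gguf filename is a hypothetical placeholder (check the repo's file list for the real names), and the models/unet target assumes the default search path of the ComfyUI-GGUF loader linked below:

```python
# Fetch one quant straight into the ComfyUI models folder.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="Coercer/ChuckFnS_GGUF",
    filename="ChuckFnS-Q4_K_M.gguf",  # hypothetical name; check the repo's file list
    local_dir="ComfyUI/models/unet",  # assumed target folder for GGUF UNet weights
)
```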

You need the following custom node to load GGUF models: https://github.com/city96/ComfyUI-GGUF (Install manually or via ComfyUI Manager).
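For a manual install, the usual pattern is to clone the node into ComfyUI/custom_nodes and install its Python dependencies. A sketch, assuming a standard ComfyUI layout and that the node ships a requirements.txt:

```python
import pathlib
import subprocess

custom_nodes = pathlib.Path("ComfyUI/custom_nodes")  # adjust to your install

# Clone the node, then install its dependencies (e.g. the gguf package).
subprocess.run(
    ["git", "clone", "https://github.com/city96/ComfyUI-GGUF"],
    cwd=custom_nodes, check=True,
)
subprocess.run(
    ["pip", "install", "-r", "requirements.txt"],
    cwd=custom_nodes / "ComfyUI-GGUF", check=True,
)
```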

Then, load this workflow (right-click > Download Link, then drag and drop the file into ComfyUI): https://huggingface.co/Coercer/ChuckFnS_GGUF/resolve/main/GGUF_Workflow.json
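Alternatively, you can fetch the same workflow file with huggingface_hub and then drag it onto the ComfyUI canvas:

```python
from huggingface_hub import hf_hub_download

# Downloads GGUF_Workflow.json into the local HF cache and prints its path.
path = hf_hub_download(repo_id="Coercer/ChuckFnS_GGUF", filename="GGUF_Workflow.json")
print(path)  # drag this file onto the ComfyUI canvas to load the workflow
```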

Memory requirements, from highest to lowest:

Normal .safetensors > .safetensors at fp8 = Q8_0 > Q6_K > Q5_K_M > Q5_K_S > Q4_K_M > Q4_K_S > Q3_K_L > Q3_K_M > Q3_K_S > Q2_K

(The lowest quantizations likely reduce quality, though that's for you to verify with your own prompts. Q4_K_M is commonly cited as the minimum for acceptable quality, but it ultimately depends on the style you're aiming for. See the rough size estimate below.)
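To get a rough sense of how the options above compare, you can estimate each quant's weight footprint from its nominal bits per weight. This is a back-of-the-envelope sketch only: real GGUF K-quants store per-block scales and mix block types, so actual files run somewhat larger than these numbers.

```python
# Rough weight-memory estimate: params * bits / 8 bytes,
# ignoring the per-block scale/metadata overhead real GGUF files add.
PARAMS = 3e9  # this model is listed at about 3B parameters

nominal_bits = {"Q8_0": 8, "Q6_K": 6, "Q5_K_M": 5, "Q4_K_M": 4, "Q3_K_M": 3, "Q2_K": 2}
for name, bits in nominal_bits.items():
    gib = PARAMS * bits / 8 / 2**30
    print(f"{name}: ~{gib:.1f} GiB")
```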

If you REALLY need to save VRAM, use the modular workflow. You'll need this custom node to run it: https://github.com/endman100/ComfyUI-SaveAndLoadPromptCondition.git (use ComfyUI Manager's 'Install via Git URL' option). Further instructions are included in the workflows.

Model details: GGUF format, SDXL architecture, 3B params. Quantizations provided range from 2-bit to 8-bit.