⭐GGUFmaker-portable⭐

This toolkit contains scripts that help you process SDXLv1.0 .safetensors models into parts and then quantize them into GGUF format.

🟨Usage

  1. Run "extract-parts.ps1" with PowerShell, select a full safetensors model. It will extract UNet, CLIP_L, CLIP_G and VAE into the PARTS subdirectory

  2. Run "do-convert.ps1" with PowerShell and select the <modelname>_unet.safetensors from the PARTS subdirectory. Then choose one of the three quantization options

  3. Copy <modelname>_clip_g.safetensors and <modelname>_clip_l.safetensors from the GGUF directory to <ComfyUI_basedir>\ComfyUI\models\CLIP

  4. Copy <modelname>_unet_QX_X.gguf from the GGUF directory to <ComfyUI_basedir>\ComfyUI\models\diffusion_models (NOT checkpoints!)

  5. Copy <modelname>_vae.safetensors from the GGUF directory to <ComfyUI_basedir>\ComfyUI\models\VAE or use a SDXL VAE of your choice

  6. Start ComfyUI, open ComfyUI-manager and search for the "ComfyUI-GGUF" custom node pack. Install it, restart ComfyUI

  7. Load the example workflow from workflow\basgen_gguf.png. ComfyUI cannot detect the model type properly when loading GGUFs, so configure the "ModelSamplingDiscrete" node accordingly. If you are using an EPS model, you must bypass the "RescaleCFG" node.

The intermediate F16 GGUF is kept in the GGUF folder after you exit the script and can be reused or deleted. Your PowerShell ExecutionPolicy must be set to RemoteSigned.
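
The copy steps above map each produced file to a specific ComfyUI subdirectory. As a quick reference, here is a minimal Python sketch of that layout; the model name, quantization type, and ComfyUI base directory are placeholders, substitute your own:

```python
from pathlib import Path

# Hypothetical model name, quantization type, and ComfyUI base directory;
# substitute your own values.
model = "mymodel"
quant = "Q4_K_S"
comfy = Path(r"C:\ComfyUI")

# Where each produced file belongs inside ComfyUI (steps 3-5 above):
targets = {
    f"{model}_clip_g.safetensors": comfy / "ComfyUI" / "models" / "CLIP",
    f"{model}_clip_l.safetensors": comfy / "ComfyUI" / "models" / "CLIP",
    f"{model}_unet_{quant}.gguf": comfy / "ComfyUI" / "models" / "diffusion_models",
    f"{model}_vae.safetensors": comfy / "ComfyUI" / "models" / "VAE",
}

for name, dest in targets.items():
    print(f"{name} -> {dest}")
```

Note again that the quantized UNet goes into diffusion_models, not checkpoints.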

🟨Components

🟨extract_parts.ps1 and tools\extract_components2.py

Purpose: Extracts the individual model components from a .safetensors file:

  • UNet
  • VAE
  • CLIP_L
  • CLIP_G

The script automatically adjusts and normalizes CLIP_L and CLIP_G keys for compatibility with ComfyUI and other SDXL-based workflows.
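
This normalization amounts to stripping the checkpoint's wrapper prefixes from the tensor names. A minimal sketch of the idea (the exact prefix strings are assumptions for illustration; see tools\extract_components2.py for the real mapping):

```python
def normalize_clip_keys(state_dict, prefix):
    """Strip a wrapper prefix from tensor names so ComfyUI's CLIP loader
    recognizes them. The prefix strings here are assumptions for
    illustration, not the script's exact mapping."""
    normalized = {}
    for key, tensor in state_dict.items():
        normalized[key[len(prefix):] if key.startswith(prefix) else key] = tensor
    return normalized

# In SDXL checkpoints, CLIP_L tensors typically sit under embedders.0 and
# CLIP_G under embedders.1 of the conditioner.
clip_l = normalize_clip_keys(
    {"conditioner.embedders.0.transformer.text_model.encoder.layers.0.mlp.fc1.weight": 0},
    "conditioner.embedders.0.transformer.",
)
print(list(clip_l)[0])
```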

Usage:

  1. Right-click -> “Run with PowerShell”.
  2. Choose a .safetensors model file.
  3. Wait for the extraction to finish.

Output: Individual files are saved to: <script_directory>\PARTS

🟨do-convert.ps1, tools\convert.py and llama\llama-quantize.exe

Purpose: Automates model conversion and quantization. Lets you pick a .safetensors file, converts it to an F16 GGUF using convert.py, and then runs llama-quantize.exe to produce the quantized GGUF model.

Usage:

  1. Right-click -> “Run with PowerShell”.
  2. Choose a .safetensors Unet file.
  3. Wait for the conversion step to finish.
  4. Choose your desired quantization type.
  5. Press X to go back or exit after the process finishes.

Output: GGUF files are saved to: <script_directory>\GGUF
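
The two-stage pipeline the script drives can be sketched as follows. The --src/--dst flag names for convert.py are an assumption here; check tools\convert.py for its actual arguments. llama-quantize takes the input GGUF, output GGUF, and quantization type as positional arguments:

```python
import subprocess
from pathlib import Path

# Minimal sketch of the pipeline do-convert.ps1 presumably drives.
# Paths, model name, and convert.py flag names are placeholders/assumptions.
unet = Path(r"PARTS\mymodel_unet.safetensors")
f16_gguf = Path("GGUF") / "mymodel_unet_F16.gguf"
quant = "Q4_K_S"
quant_gguf = Path("GGUF") / f"mymodel_unet_{quant}.gguf"

# Step 1: safetensors UNet -> intermediate F16 GGUF.
convert_cmd = ["python", r"tools\convert.py", "--src", str(unet), "--dst", str(f16_gguf)]

# Step 2: F16 GGUF -> quantized GGUF (llama-quantize: <in> <out> <type>).
quantize_cmd = [r"llama\llama-quantize.exe", str(f16_gguf), str(quant_gguf), quant]

for cmd in (convert_cmd, quantize_cmd):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run the tools
```

This also shows why the intermediate F16 GGUF remains in the GGUF folder: it is the input to the quantization step and can be reused for other quantization types.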

🟨environment.bat

This portable setup does not use a Python venv; it ships its own dedicated portable Python instead. environment.bat opens a command line with the proper environment variables set, should you need to change anything. It is not used by the conversion scripts.
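
In spirit, such a launcher simply puts the bundled interpreter first on PATH so that "python" resolves to the portable copy. A rough Python sketch of that idea (the "python-portable" directory name is an assumption for illustration, not the toolkit's actual layout):

```python
import os
from pathlib import Path

# Rough sketch: prepend a bundled portable Python to PATH so "python"
# resolves to it. The "python-portable" directory name is hypothetical.
toolkit_dir = Path(".").resolve()
portable_python = toolkit_dir / "python-portable"

env = os.environ.copy()
env["PATH"] = os.pathsep.join(
    [str(portable_python), str(portable_python / "Scripts"), env.get("PATH", "")]
)
print(env["PATH"].split(os.pathsep)[0])
```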

🟨Credits

This toolkit was created by combining resources from several places:

  • convert.py
    Part of the ComfyUI-GGUF custom node pack.
  • llama-quantize.exe
    Built with VS2022 as described in the ComfyUI-GGUF GitHub repository. This likely requires a current Visual C++ runtime to be installed.
  • extract_components2.py
    Combined from the llama.cpp GitHub and a Google Colab notebook found on Civitai.

I added the PowerShell wrapper scripts and the portable setup, and cannot take responsibility for any of these components.

🟨Sources

https://www.reddit.com/r/StableDiffusion/comments/1hgav56/how_to_run_sdxl_on_a_potato_pc/
https://github.com/ggml-org/llama.cpp/discussions/2948
https://github.com/city96/ComfyUI-GGUF/tree/main/tools
https://civitai.com/articles/10417
https://colab.research.google.com/drive/1xRwSht2tc82O8jrQQG4cl5LH1xdhiCyn?usp=sharing#scrollTo=NoydfIL5BEjs
https://github.com/ggml-org/llama.cpp
