⭐GGUFmaker-portable⭐
This toolkit contains scripts that help you split SDXL v1.0 .safetensors models into their component parts and then quantize them into GGUF format.
🟨Usage
1. Run "extract-parts.ps1" with PowerShell and select a full safetensors model. It will extract UNet, CLIP_L, CLIP_G and VAE into the PARTS subdirectory.
2. Run "do-convert.ps1" with PowerShell and select the <modelname>_unet.safetensors from the PARTS subdirectory. Then choose one of the three quantization options.
3. Copy <modelname>_clip_g.safetensors and <modelname>_clip_l.safetensors from the GGUF directory to <ComfyUI_basedir>\ComfyUI\models\CLIP.
4. Copy <modelname>_unet_QX_X.gguf from the GGUF directory to <ComfyUI_basedir>\ComfyUI\models\diffusion_models (NOT checkpoints!).
5. Copy <modelname>_vae.safetensors from the GGUF directory to <ComfyUI_basedir>\ComfyUI\models\VAE, or use an SDXL VAE of your choice.
6. Start ComfyUI, open ComfyUI-Manager and search for the "ComfyUI-GGUF" custom node pack. Install it and restart ComfyUI.
7. Load the example workflow from workflow\basgen_gguf.png. When using GGUFs, ComfyUI cannot detect the model type properly, so set the "ModelSamplingDiscrete" node appropriately. If you are using an EPS model, you must bypass the "RescaleCFG" node.
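The quantization type mainly trades file size against quality. A rough back-of-the-envelope estimate of the resulting UNet file sizes (the parameter count and the effective bits-per-weight figures below are my approximations for illustration, not values taken from this toolkit):

```python
# Rough file-size estimate for a quantized SDXL UNet.
# The ~2.57B parameter count and the bits/weight values (which include
# quantization scale overhead) are approximations, not exact figures.
PARAMS = 2.57e9
BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q5_K_S": 5.5,
    "Q4_K_S": 4.5,
}

def size_gb(quant: str) -> float:
    """Estimated file size in GB for the given quantization type."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q, bits in BITS_PER_WEIGHT.items():
    print(f"{q}: ~{size_gb(q):.1f} GB")
```

This is only meant to show why the quantized UNet fits GPUs that the full F16 model does not.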
The intermediate F16 GGUF is kept in the GGUF folder after you exit the script and can be reused or deleted. Your PowerShell ExecutionPolicy must be set to RemoteSigned.
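If script execution is blocked, the policy can be set from a PowerShell prompt; scoping it to the current user avoids needing an elevated shell:

```shell
# Allow locally created scripts such as extract-parts.ps1 to run.
# CurrentUser scope does not require administrator rights.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
```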
🟨Components
🟨extract_parts.ps1 and tools\extract_components2.py
Purpose: Extracts the individual model components from a .safetensors file:
- UNet
- VAE
- CLIP_L
- CLIP_G
The script automatically adjusts and normalizes CLIP_L and CLIP_G keys for compatibility with ComfyUI and other SDXL-based workflows.
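Conceptually, the extraction step filters the checkpoint's tensors by key prefix and writes each group to its own file. A minimal sketch of that idea (the actual extract_components2.py also renames CLIP keys; the prefixes below follow the usual SDXL checkpoint layout and are an assumption on my part):

```python
# Sketch: split an SDXL-style state dict into components by key prefix.
# Prefixes assume the standard SDXL checkpoint layout.
PREFIXES = {
    "unet":   "model.diffusion_model.",
    "vae":    "first_stage_model.",
    "clip_l": "conditioner.embedders.0.transformer.",
    "clip_g": "conditioner.embedders.1.model.",
}

def split_components(state_dict):
    """Group tensors by component, stripping the component prefix."""
    parts = {name: {} for name in PREFIXES}
    for key, tensor in state_dict.items():
        for name, prefix in PREFIXES.items():
            if key.startswith(prefix):
                # Strip the prefix so each part stands alone.
                parts[name][key[len(prefix):]] = tensor
                break
    return parts
```

In the real script each of the four dicts would then be saved as its own .safetensors file in PARTS.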
Usage:
- Right-click -> “Run with PowerShell”.
- Choose a .safetensors model file.
- Wait for the extraction to finish.
Output:
Individual files are saved to: <script_directory>\PARTS
🟨do-convert.ps1, tools\convert.py and llama\llama-quantize.exe
Purpose: Automates model conversion and quantization. Lets you pick a .safetensors file, converts it to an F16 GGUF using convert.py, and then runs llama-quantize.exe to produce the quantized GGUF model.
Usage:
- Right-click -> “Run with PowerShell”.
- Choose a .safetensors Unet file.
- Wait for the conversion step to finish.
- Choose your desired quantization type.
- Press X to go back or exit after the process finishes.
Output:
GGUF files are saved to: <script_directory>\GGUF
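In effect the script chains two commands. Roughly (the file names are illustrative, and the exact flags are my assumption based on the ComfyUI-GGUF instructions):

```shell
# Convert the extracted UNet to an intermediate F16 GGUF
python tools\convert.py --src PARTS\model_unet.safetensors
# Quantize the F16 GGUF down to the chosen type (Q4_K_S here)
llama\llama-quantize.exe GGUF\model_unet-F16.gguf GGUF\model_unet_Q4_K_S.gguf Q4_K_S
```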
🟨environment.bat
This portable setup does not use a Python venv but ships its own dedicated portable Python. environment.bat opens a command line with the proper environment variables, should you feel the need to change anything. It is not used by the conversion scripts.
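A minimal sketch of what such a batch file typically looks like (the python subdirectory name is an assumption, not necessarily this toolkit's layout):

```shell
@echo off
rem Prepend the bundled portable Python to PATH, then open a shell
set "PATH=%~dp0python;%~dp0python\Scripts;%PATH%"
cmd /k
```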
🟨Credits
This toolkit was created by combining resources from several places:
- convert.py
  Part of the ComfyUI-GGUF custom node pack.
- llama-quantize.exe
  Built with VS2022 as described on the ComfyUI-GGUF GitHub. I expect this needs a current Visual C++ runtime installed.
- extract_components2.py
  Combined from the llama.cpp GitHub and a Google Colab notebook found on Civitai.
I added the PowerShell wrapper scripts and the portable setup, and cannot take any responsibility for any of these components.
🟨Sources
https://www.reddit.com/r/StableDiffusion/comments/1hgav56/how_to_run_sdxl_on_a_potato_pc/
https://github.com/ggml-org/llama.cpp/discussions/2948
https://github.com/city96/ComfyUI-GGUF/tree/main/tools
https://civitai.com/articles/10417
https://colab.research.google.com/drive/1xRwSht2tc82O8jrQQG4cl5LH1xdhiCyn?usp=sharing#scrollTo=NoydfIL5BEjs
https://github.com/ggml-org/llama.cpp