Upscaler models from https://openmodeldb.info/, converted to GGUF format for stable-diffusion.cpp.

stable-diffusion.cpp can work with most .pth files directly, but GGUF models can be faster to load, and they avoid the security issues of the pickle-based .pth format.
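The security issue comes from the fact that .pth checkpoints are pickle archives, and unpickling can execute arbitrary code. A minimal sketch of the problem (the `Payload` class is a hypothetical stand-in for a malicious object embedded in a checkpoint):

```python
import pickle

class Payload:
    """Hypothetical malicious object, as could be hidden in a .pth file."""
    def __reduce__(self):
        # On unpickling, this tells pickle to call print(...) instead of
        # rebuilding the object -- any callable could be substituted here.
        return (print, ("arbitrary code ran during load",))

data = pickle.dumps(Payload())
obj = pickle.loads(data)  # prints the message as a side effect of loading
```

GGUF, by contrast, is a plain tensor/metadata container with no embedded code, so loading one cannot trigger execution like this.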

See the original URLs for licensing information. This is a straight mechanical conversion, so each model keeps the same license as the original.

The models were converted with stable-diffusion.cpp itself:

```sh
./sd-cli --mode convert --model modelfile.pth --output modelfile.gguf
```

A few old .pth files first needed to be re-saved in the newer zip-based serialization format:

```python
import torch

# Load the legacy checkpoint on CPU. weights_only=False is needed on newer
# PyTorch versions (2.6+) where safe loading is the default -- only use it
# for checkpoints from a source you trust.
checkpoint = torch.load("model.pth", map_location="cpu", weights_only=False)

# Re-save using the zip-based serialization format.
torch.save(checkpoint, "model_converted.pth", _use_new_zipfile_serialization=True)
```