# AniNixIm-G (GGUF)

This repository contains GGUF quantized versions of [ray0rf1re/AniNixIm](https://huggingface.co/ray0rf1re/AniNixIm).

## Available Files
- `AniNixIm.f16.gguf`
- `AniNixIm.bf16.gguf`
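The two files above differ only in how each 16-bit weight is encoded: f16 (IEEE half, 5-bit exponent / 10-bit mantissa) has more precision near 1.0, while bf16 (8-bit exponent / 7-bit mantissa) keeps the full float32 range. A minimal sketch of the trade-off, using truncation for the bf16 conversion (real converters usually round to nearest):

```python
import struct

def to_bf16_bits(x: float) -> int:
    # bf16 keeps the top 16 bits of the float32 representation
    # (sign + 8-bit exponent + 7-bit mantissa); truncation for simplicity.
    return struct.unpack(">I", struct.pack(">f", x))[0] >> 16

def bf16_to_float(bits: int) -> float:
    # Re-expand bf16 bits to float32 by zero-filling the low mantissa bits.
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

def f16_roundtrip(x: float) -> float:
    # struct's "e" format is IEEE half precision (f16).
    return struct.unpack(">e", struct.pack(">e", x))[0]

# Precision near small values: f16 keeps more mantissa bits than bf16.
print(f16_roundtrip(0.1))                      # 0.0999755859375
print(bf16_to_float(to_bf16_bits(0.1)))        # 0.099609375

# Range: bf16 represents float32-scale magnitudes; f16 overflows above ~65504.
print(bf16_to_float(to_bf16_bits(3.4e38)))
```

In practice the f16 file is the safer default for inference; bf16 mainly matters when weights were trained in bf16 and large magnitudes must survive conversion.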

## Usage

These files are designed for use with stable-diffusion.cpp or KoboldCpp.

### CLI Example

```sh
./sd -m AniNixIm.f16.gguf -p "anime style, 1girl, smiling" --steps 20
```
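Before running the command above, it can be worth sanity-checking that a download completed as a valid GGUF file. Every GGUF file starts with the 4-byte magic `GGUF`, followed by a little-endian uint32 format version and uint64 tensor and metadata-KV counts; a small sketch that reads just that header:

```python
import struct

def read_gguf_header(path: str) -> tuple[int, int, int]:
    # GGUF layout: b"GGUF" magic, uint32 version, uint64 tensor count,
    # uint64 metadata key-value count, all little-endian.
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return version, n_tensors, n_kv

# Example: read_gguf_header("AniNixIm.f16.gguf")
```

A truncated or HTML error-page download (a common failure mode with large model files) will fail the magic check immediately instead of producing a confusing loader error later.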
## Model Details

- Model size: 3B params
- Architecture: stable-diffusion
- Precision: 16-bit (f16 and bf16 variants)