Quantized GGUF version of AuraFlow v0.3. Original model by fal: https://huggingface.co/fal/AuraFlow-v0.3
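
One common way to fetch a single quantized file from this repository is with the `huggingface_hub` package. The sketch below is a minimal example, not the only supported workflow, and the filename shown is a placeholder; substitute the actual `.gguf` file for the quantization you want.

```python
# Minimal sketch: download one quantized GGUF file from the repo.
# Assumes `pip install huggingface_hub`; the filename is hypothetical.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="marduk191/auraflow_0.3_quantized",
    filename="auraflow_0.3-Q4_0.gguf",  # placeholder: pick a real file from the repo
)
print(local_path)
```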


Model details:
- Format: GGUF
- Model size: 7B params
- Architecture: aura

Available quantizations: 2-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
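
To verify which quantization a downloaded file actually uses, the `gguf` Python package (from the llama.cpp project) can read the file's metadata and tensor types. This is a sketch under the assumption that the package is installed (`pip install gguf`); the filename is the one you downloaded locally.

```python
# Minimal sketch: inspect a downloaded AuraFlow GGUF file's metadata
# and count tensors per quantization type. The filename is a placeholder.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("auraflow_0.3-Q4_0.gguf")  # hypothetical local filename

# List the key/value metadata fields (architecture, name, etc.).
for field_name in reader.fields:
    print(field_name)

# Summarize how many tensors use each quantization type.
counts = Counter(t.tensor_type.name for t in reader.tensors)
for qtype, n in counts.items():
    print(f"{qtype}: {n} tensors")
```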
