AurIA-Q3-v2-gguf (GGUF)

This model was fine-tuned 2x faster with Unsloth and converted to GGUF format.

Example usage:

  • For text-only LLMs: ./llama.cpp/llama-cli -hf wallacebf/AurIA-Q3-v2-gguf --jinja
  • For multimodal models: ./llama.cpp/llama-mtmd-cli -hf wallacebf/AurIA-Q3-v2-gguf --jinja
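
The -hf flag makes llama.cpp fetch the GGUF directly from this Hugging Face repo. As a minimal local-run sketch, assuming llama.cpp is already built and the Q4_K_M file listed below has been downloaded to the working directory (the prompt and context size are illustrative):

  # Run the locally downloaded quant; --jinja applies the model's embedded
  # chat template, -c sets the context window (values here are illustrative).
  ./llama.cpp/llama-cli \
      -m Qwen3-4B-Instruct-2507-heretic.Q4_K_M.gguf \
      --jinja \
      -c 4096 \
      -p "Summarize the difference between Q8_0 and Q4_K_M quantization."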

Available Model files:

  • Qwen3-4B-Instruct-2507-heretic.Q8_0.gguf
  • Qwen3-4B-Instruct-2507-heretic.Q4_K_M.gguf
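
To grab a single quant without cloning the whole repo, the Hugging Face CLI can download one file by name. A sketch, assuming huggingface_hub is installed (pip install -U huggingface_hub):

  # download only the 4-bit file into the current directory
  huggingface-cli download wallacebf/AurIA-Q3-v2-gguf \
      Qwen3-4B-Instruct-2507-heretic.Q4_K_M.gguf \
      --local-dir .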

Model details:

  • Model size: 4B parameters
  • Architecture: qwen3
  • Quantizations: 4-bit (Q4_K_M) and 8-bit (Q8_0)
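
For serving over HTTP, llama.cpp also ships llama-server; a sketch assuming a recent build where llama-server accepts the same -hf flag (the port is illustrative):

  # expose an OpenAI-compatible endpoint on localhost:8080
  ./llama.cpp/llama-server \
      -hf wallacebf/AurIA-Q3-v2-gguf \
      --jinja \
      --port 8080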
