This is the unquantized BF16 GGUF version of AesCoder-4B. It is equivalent to the original safetensors weights, with no quality loss.
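
A minimal usage sketch with llama-cpp-python is shown below. The filename pattern is an assumption; check the repository file list for the exact GGUF filename.

```python
# Minimal sketch: load the BF16 GGUF with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="noctrex/AesCoder-4B-GGUF",
    filename="*BF16*.gguf",   # glob pattern; assumes a single BF16 file in the repo
    n_ctx=4096,               # context window; adjust to your hardware
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```

Alternatively, the downloaded GGUF file can be run directly with llama.cpp (e.g. `llama-cli -m <file>.gguf`).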

- Format: GGUF
- Model size: 4B params
- Architecture: qwen3
- Precision: 16-bit (BF16)
