Ministral 3B Instruct (Abliterated) - GGUF

Quantized to Q4_K_M by NovachronoAI.

Usage (Termux / Android)

./llama-cli -m ministral-3b-abliterated-Q4_K_M.gguf -e -p "User: Hello\nAssistant:" -n 400 -t 4

(The -e flag tells llama-cli to process escape sequences, so the \n in the prompt becomes a real newline.)
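Before running the command above, it can be worth a quick sanity check that the download is an intact GGUF file. The sketch below (an assumption, not part of this repo) reads the two fixed fields every GGUF file starts with: the 4-byte magic `GGUF` followed by a little-endian uint32 format version.

```python
import struct

GGUF_MAGIC = b"GGUF"

def check_gguf_header(path):
    """Return the GGUF format version if the file starts with a valid
    GGUF header, otherwise raise ValueError."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != GGUF_MAGIC:
            raise ValueError(f"not a GGUF file: magic bytes {magic!r}")
        # The version is a little-endian uint32 immediately after the magic.
        (version,) = struct.unpack("<I", f.read(4))
    return version
```

For example, `check_gguf_header("ministral-3b-abliterated-Q4_K_M.gguf")` should return a small integer (current llama.cpp writes version 3); an exception here usually means a truncated or corrupted download.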
Model details

Format: GGUF
Quantization: 4-bit (Q4_K_M)
Model size: 3B params
Architecture: llama