Uploaded GGUF Model

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Format: GGUF
Model size: 7B params
Architecture: llama

Available quantizations: 4-bit, 5-bit, 16-bit


Model tree for Haary/USK_Mistral_7B_Unsloth_GGUF
Quantized (2): this model

Dataset used to train Haary/USK_Mistral_7B_Unsloth_GGUF

Spaces using Haary/USK_Mistral_7B_Unsloth_GGUF: 1