
Quantization made by Richard Erkhov.

Github

Discord

Request more models

full - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| full.Q2_K.gguf | Q2_K | 2.96GB |
| full.IQ3_XS.gguf | IQ3_XS | 3.28GB |
| full.IQ3_S.gguf | IQ3_S | 3.43GB |
| full.Q3_K_S.gguf | Q3_K_S | 3.41GB |
| full.IQ3_M.gguf | IQ3_M | 3.52GB |
| full.Q3_K.gguf | Q3_K | 3.74GB |
| full.Q3_K_M.gguf | Q3_K_M | 3.74GB |
| full.Q3_K_L.gguf | Q3_K_L | 4.03GB |
| full.IQ4_XS.gguf | IQ4_XS | 4.18GB |
| full.Q4_0.gguf | Q4_0 | 4.34GB |
| full.IQ4_NL.gguf | IQ4_NL | 4.38GB |
| full.Q4_K_S.gguf | Q4_K_S | 4.37GB |
| full.Q4_K.gguf | Q4_K | 4.58GB |
| full.Q4_K_M.gguf | Q4_K_M | 4.58GB |
| full.Q4_1.gguf | Q4_1 | 4.78GB |
| full.Q5_0.gguf | Q5_0 | 5.21GB |
| full.Q5_K_S.gguf | Q5_K_S | 5.21GB |
| full.Q5_K.gguf | Q5_K | 5.34GB |
| full.Q5_K_M.gguf | Q5_K_M | 5.34GB |
| full.Q5_1.gguf | Q5_1 | 5.65GB |
| full.Q6_K.gguf | Q6_K | 6.14GB |
| full.Q8_0.gguf | Q8_0 | 7.95GB |
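As a rough sanity check on the table above, each file's size tracks the nominal bits per weight of its quant: size in bytes ≈ parameter count × bits per weight ÷ 8. A minimal sketch (the 8.03B parameter count for Llama 3.1 8B and treating the table's sizes as decimal GB are assumptions, so the results are approximate):

```python
def bits_per_weight(file_size_gb: float, n_params: float = 8.03e9) -> float:
    """Effective bits per weight for a GGUF file of the given size.

    Assumes decimal GB and an 8.03B-parameter model (Llama 3.1 8B).
    """
    return file_size_gb * 1e9 * 8 / n_params

# Q8_0 at 7.95GB works out to roughly 8 bits per weight,
# while Q4_K_M at 4.58GB lands near 4.6.
print(round(bits_per_weight(7.95), 2))  # 7.92
print(round(bits_per_weight(4.58), 2))  # 4.56
```

In practice, more bits per weight means better output quality but higher memory use; the K-quant middle sizes (e.g. Q4_K_M, Q5_K_M) are common trade-off picks.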

Original model description:

```yaml
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
```

Uploaded model

  • Developed by: torwayfarer
  • License: apache-2.0
  • Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Format: GGUF · Model size: 8B params · Architecture: llama