
Quantization made by Richard Erkhov.

- Github
- Discord
- Request more models

NarraLarge - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| NarraLarge.Q2_K.gguf | Q2_K | 2.96GB |
| NarraLarge.IQ3_XS.gguf | IQ3_XS | 3.28GB |
| NarraLarge.IQ3_S.gguf | IQ3_S | 3.43GB |
| NarraLarge.Q3_K_S.gguf | Q3_K_S | 3.41GB |
| NarraLarge.IQ3_M.gguf | IQ3_M | 3.52GB |
| NarraLarge.Q3_K.gguf | Q3_K | 3.74GB |
| NarraLarge.Q3_K_M.gguf | Q3_K_M | 3.74GB |
| NarraLarge.Q3_K_L.gguf | Q3_K_L | 4.03GB |
| NarraLarge.IQ4_XS.gguf | IQ4_XS | 4.18GB |
| NarraLarge.Q4_0.gguf | Q4_0 | 4.34GB |
| NarraLarge.IQ4_NL.gguf | IQ4_NL | 4.38GB |
| NarraLarge.Q4_K_S.gguf | Q4_K_S | 4.37GB |
| NarraLarge.Q4_K.gguf | Q4_K | 4.58GB |
| NarraLarge.Q4_K_M.gguf | Q4_K_M | 4.58GB |
| NarraLarge.Q4_1.gguf | Q4_1 | 4.78GB |
| NarraLarge.Q5_0.gguf | Q5_0 | 5.21GB |
| NarraLarge.Q5_K_S.gguf | Q5_K_S | 5.21GB |
| NarraLarge.Q5_K.gguf | Q5_K | 5.34GB |
| NarraLarge.Q5_K_M.gguf | Q5_K_M | 5.34GB |
| NarraLarge.Q5_1.gguf | Q5_1 | 5.65GB |
| NarraLarge.Q6_K.gguf | Q6_K | 6.14GB |
| NarraLarge.Q8_0.gguf | Q8_0 | 7.95GB |
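For context on what the quant levels mean in practice, the file sizes above can be converted into a rough average bits-per-weight figure. This is a back-of-the-envelope sketch, assuming the listed sizes are decimal gigabytes and roughly 8e9 parameters (neither is stated explicitly in the card), so treat the results as estimates that include metadata and embedding overhead:

```python
# Rough bits-per-weight estimate from the GGUF file sizes in the table above.
# Assumptions (not from the card): sizes are decimal gigabytes and the model
# has ~8.0e9 parameters; file metadata adds some overhead, so these are
# slight over-estimates of the pure weight precision.
PARAMS = 8.0e9

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in GB to approximate average bits per parameter."""
    return size_gb * 1e9 * 8 / params

for name, size_gb in [("Q2_K", 2.96), ("Q4_K_M", 4.58), ("Q8_0", 7.95)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.2f} bits/weight")
```

As expected, the estimates land near the nominal bit-widths in the quant names (Q2_K around 3 bits, Q4_K_M around 4.6, Q8_0 around 8).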

Original model description:

base_model: unsloth/llama-3-8b-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft

Uploaded model

  • Developed by: PranavHarshan
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Downloads last month: 22
Format: GGUF
Model size: 8B params
Architecture: llama

