
Quantization made by Richard Erkhov.

Github

Discord

Request more models

TM_v2_mod - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| TM_v2_mod.Q2_K.gguf | Q2_K | 2.96GB |
| TM_v2_mod.IQ3_XS.gguf | IQ3_XS | 3.28GB |
| TM_v2_mod.IQ3_S.gguf | IQ3_S | 3.43GB |
| TM_v2_mod.Q3_K_S.gguf | Q3_K_S | 3.41GB |
| TM_v2_mod.IQ3_M.gguf | IQ3_M | 3.52GB |
| TM_v2_mod.Q3_K.gguf | Q3_K | 3.74GB |
| TM_v2_mod.Q3_K_M.gguf | Q3_K_M | 3.74GB |
| TM_v2_mod.Q3_K_L.gguf | Q3_K_L | 4.03GB |
| TM_v2_mod.IQ4_XS.gguf | IQ4_XS | 4.18GB |
| TM_v2_mod.Q4_0.gguf | Q4_0 | 4.34GB |
| TM_v2_mod.IQ4_NL.gguf | IQ4_NL | 4.38GB |
| TM_v2_mod.Q4_K_S.gguf | Q4_K_S | 4.37GB |
| TM_v2_mod.Q4_K.gguf | Q4_K | 4.58GB |
| TM_v2_mod.Q4_K_M.gguf | Q4_K_M | 4.58GB |
| TM_v2_mod.Q4_1.gguf | Q4_1 | 4.78GB |
| TM_v2_mod.Q5_0.gguf | Q5_0 | 5.21GB |
| TM_v2_mod.Q5_K_S.gguf | Q5_K_S | 5.21GB |
| TM_v2_mod.Q5_K.gguf | Q5_K | 5.34GB |
| TM_v2_mod.Q5_K_M.gguf | Q5_K_M | 5.34GB |
| TM_v2_mod.Q5_1.gguf | Q5_1 | 5.65GB |
| TM_v2_mod.Q6_K.gguf | Q6_K | 6.14GB |
| TM_v2_mod.Q8_0.gguf | Q8_0 | 7.95GB |
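
To try one of these quants locally, the file can be fetched with `huggingface_hub` and loaded with `llama-cpp-python`. The sketch below is only an illustration: the repo id `RichardErkhov/TM_v2_mod-gguf` is a hypothetical placeholder (substitute the actual path of this repo), and the Q4_K_M file is chosen arbitrarily from the table above.

```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the table; repo_id is an assumption, not the confirmed path.
model_path = hf_hub_download(
    repo_id="RichardErkhov/TM_v2_mod-gguf",   # hypothetical repo id
    filename="TM_v2_mod.Q4_K_M.gguf",         # any file from the table works
)

# Load the quantized model; the chat template is read from the GGUF metadata.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (Q2_K, IQ3_*) trade quality for memory; Q6_K and Q8_0 stay closest to the original weights at a larger file size.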

Original model description:

base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en

Uploaded model

  • Developed by: TobInnovate
  • License: apache-2.0
  • Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
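
For context, fine-tunes like this are typically produced with Unsloth's `FastLanguageModel` together with TRL's `SFTTrainer`. The following is a minimal sketch only, not the author's actual training script: it assumes recent Unsloth/TRL releases (argument names shift between TRL versions) and a hypothetical `train.jsonl` dataset with a `text` column of formatted chat examples.

```python
# pip install unsloth trl datasets
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model this card lists as the finetuning source.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of extra weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset; replace with the real SFT data.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        fp16=True,
    ),
)
trainer.train()
```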

Format: GGUF
Model size: 8B params
Architecture: llama