How to use with Docker Model Runner

```shell
docker model run hf.co/dbands/tantrum_4bit
```
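A slightly fuller session might look like the following. This is a sketch assuming Docker Desktop with the Model Runner feature enabled; the `hf.co/` prefix tells Model Runner to fetch the model from the Hugging Face Hub.

```shell
# Pull the quantized model ahead of time (optional; `run` also pulls on demand)
docker model pull hf.co/dbands/tantrum_4bit

# One-shot prompt; run without a prompt argument to start an interactive chat
docker model run hf.co/dbands/tantrum_4bit "Introduce yourself in one sentence."
```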
Uploaded model

  • Developed by: dbands
  • License: apache-2.0
  • Finetuned from model: dbands/tantrum_16bit

This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

  • Downloads last month: 8
  • Format: Safetensors
  • Model size: 8B params
  • Tensor types: BF16, F32, U8
Inference Providers

This model isn't deployed by any Inference Provider.

Model tree for dbands/tantrum_4bit

  • Base model: Qwen/Qwen2-7B
  • Quantized versions: 2 (including this model)