Unsuccessful fine-tuned model. Model outputs are very low quality.

  • Erudite-1.1b is a 1.1 billion parameter language model fine-tuned from TinyLlama, optimized for enhanced response quality and superior instruction-following capabilities. Through careful dataset curation and fine-tuning, this compact model delivers improved coherence and accuracy while maintaining efficient performance suitable for resource-constrained environments.


This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
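Since the card does not include a usage section, here is a minimal inference sketch using the `transformers` library. The prompt format is an assumption (the card does not document the instruction template used during fine-tuning), and the generation settings are illustrative, not values from the card.

```python
MODEL_ID = "Stormtrooperaim/Erudite-1.1b"


def build_prompt(instruction: str) -> str:
    # Plain instruction/response wrapper; the exact template used for
    # fine-tuning is undocumented, so treat this format as a guess.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate(instruction: str, max_new_tokens: int = 128) -> str:
    # Heavy dependencies are imported lazily so the module loads
    # without transformers/torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Requires network access to download the ~1.1B-parameter checkpoint.
    print(generate("Explain what a language model is."))
```

Given the author's note above, expect low-quality generations from this checkpoint; the snippet is provided so the model can at least be exercised.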

Downloads last month: 22
Model size: 1B params (Safetensors)
Tensor type: BF16

Model tree for Stormtrooperaim/Erudite-1.1b

Base model: unsloth/tinyllama (22 fine-tuned models, including this one)
Quantizations of this model: 3 models

Dataset used to train Stormtrooperaim/Erudite-1.1b

Collection including Stormtrooperaim/Erudite-1.1b