Unsuccessful fine-tuned model: model outputs are very low quality.
- Erudite-1.1b is a 1.1 billion parameter language model fine-tuned from TinyLlama, intended to improve response quality and instruction-following through dataset curation and fine-tuning, while keeping the compact footprint of the base model for resource-constrained environments. In practice the fine-tune did not achieve these goals.

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.