# Tiny LLM
- Author: Rahul Dhole
- Base Model: Qwen/Qwen2.5-0.5B-Instruct
Tiny LLM is a small language model by Rahul Dhole, fine-tuned from Qwen2.5-0.5B-Instruct using LoRA adapters via the PEFT library.
## Training
- Method: LoRA (r=8, alpha=32)
- Epochs: 10
- Learning Rate: 0.001
- Data: data/dummy_train.jsonl