Tiny LLM

Author: Rahul Dhole
Base Model: Qwen/Qwen2.5-0.5B-Instruct

Tiny LLM is a fine-tuned language model by Rahul Dhole, built on top of Qwen2.5-0.5B-Instruct using LoRA/PEFT.
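Loading the adapter on top of the base model might look roughly like this. This is a sketch, not an official snippet from the card: it assumes the adapter is published under the repo id `rahuldhole/tiny-llm-qwen-adapter` and that the base checkpoint is `Qwen/Qwen2.5-0.5B-Instruct`; the prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-0.5B-Instruct"          # base model named in this card
adapter_id = "rahuldhole/tiny-llm-qwen-adapter"  # assumed adapter repo id

# Load the frozen base model, then attach the LoRA adapter weights on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt; generation settings are defaults, not from the card.
inputs = tokenizer("Hello, who are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`PeftModel.from_pretrained` keeps the base weights untouched and only adds the small adapter matrices, so switching adapters on the same base model is cheap.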

Training

  • Method: LoRA (r=8, alpha=32)
  • Epochs: 10
  • Learning Rate: 0.001
  • Data: data/dummy_train.jsonl
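The hyperparameters above could be expressed as a `peft` configuration roughly as follows. Only `r` and `lora_alpha` come from this card; `target_modules` and the dropout value are assumptions shown for illustration:

```python
from peft import LoraConfig

# Sketch of the adapter configuration described above.
lora_config = LoraConfig(
    r=8,                                  # LoRA rank, as listed in the card
    lora_alpha=32,                        # scaling factor alpha, as listed
    target_modules=["q_proj", "v_proj"],  # assumed; the card does not say
    lora_dropout=0.0,                     # assumed default
    task_type="CAUSAL_LM",
)
```

This config would then be passed to `get_peft_model(base_model, lora_config)` before training with the stated learning rate (0.001) for 10 epochs.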