LoRA Fine-Tuned Qwen2.5-1.5B-Instruct

This model is a LoRA fine-tuned version of Qwen2.5-1.5B-Instruct, optimized for instruction-following tasks.

  • Base model: Qwen/Qwen2.5-1.5B-Instruct
  • Method: Parameter-efficient fine-tuning with PEFT (LoRA)
  • Framework: 🤗 Transformers + PEFT
  • Precision: BF16 (safetensors)
  • Use case: Conversational AI, instruction following, Q&A

πŸš€ Usage

Install the dependencies:

```bash
pip install transformers accelerate peft
```
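A minimal loading-and-generation sketch with Transformers + PEFT is shown below. The adapter repository id (`your-username/qwen2.5-1.5b-lora`) is a placeholder — substitute this model's actual repo id. A GPU is assumed via `device_map="auto"`; on CPU, drop that argument and the BF16 dtype.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
adapter_id = "your-username/qwen2.5-1.5b-lora"  # placeholder: replace with this adapter's repo id

# Load the base model, then attach the LoRA adapter weights on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Format the conversation with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Give a one-sentence definition of LoRA."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For deployment, `model.merge_and_unload()` can fold the adapter into the base weights so inference no longer requires the PEFT wrapper.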