# Marathi QLoRA Fine-Tune for Qwen2.5-1.5B-Instruct
A QLoRA adapter fine-tuned on a Marathi Alpaca instruction dataset to improve Marathi language generation in Qwen2.5-1.5B-Instruct.
## Training Details
- Base model: unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit
- Dataset: rachittshah/alpaca-marahti (34,499 samples after filtering)
- Method: QLoRA (r=16, alpha=32) via Unsloth
- Hardware: NVIDIA RTX 3050 Ti Laptop GPU (4 GB VRAM)
- Epochs: 1 | Steps: 4,097
- Final validation loss: 0.4479
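With r=16 and alpha=32, LoRA learns two small matrices per targeted weight and scales their product by alpha/r = 2, so only a tiny fraction of the 1.5B parameters are trainable. A back-of-envelope sketch of the added parameter count (the 1536x1536 shapes below are illustrative placeholders, not the actual Qwen2.5 projection shapes, which also depend on which modules the Unsloth config targets):

```python
def lora_param_count(shapes, r=16):
    """Trainable parameters added by LoRA: for each targeted weight
    W of shape (d_out, d_in), LoRA learns B (d_out x r) and A (r x d_in),
    adding r * (d_in + d_out) parameters."""
    return sum(r * (d_in + d_out) for d_out, d_in in shapes)

# Hypothetical example: four square 1536x1536 attention projections
# in one layer (illustrative only).
shapes = [(1536, 1536)] * 4
print(lora_param_count(shapes, r=16))  # 16 * (1536 + 1536) * 4 = 196608
```

Multiplying a per-layer figure like this by the number of layers gives the order of magnitude of the adapter size, which is why the adapter checkpoint is a few tens of megabytes rather than gigabytes.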
## Evaluation (chrF++)
Corpus chrF++ improved from 15.39 (base) to 25.83 (fine-tuned), a gain of 10.44 points, across 10 Marathi prompts.
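chrF++ is a character n-gram F-score (with word 1- and 2-grams mixed in), which makes it well suited to a morphologically rich language like Marathi. The sketch below is a simplified, character-only variant to illustrate what the metric measures; it omits the word n-grams and the exact per-order averaging of real chrF++, so use a standard implementation such as sacreBLEU's CHRF to reproduce the scores above:

```python
from collections import Counter

def ngrams(seq, n):
    """Multiset of character n-grams of a sequence."""
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified sentence-level chrF: F-beta score over character
    n-grams for n = 1..max_n, with whitespace removed (as in chrF).
    Illustrative only; not the official chrF++ computation."""
    hyp = hypothesis.replace(" ", "")
    ref = reference.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        if sum(h.values()) == 0 or sum(r.values()) == 0:
            continue  # n-gram order longer than the string
        overlap = sum((h & r).values())
        precisions.append(overlap / sum(h.values()))
        recalls.append(overlap / sum(r.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    # F-beta with beta = 2 weights recall twice as much as precision.
    return 100 * (1 + beta**2) * p * r / (beta**2 * p + r)
```

An identical hypothesis and reference score 100, disjoint strings score 0, and partial character overlap lands in between, mirroring how the 15.39 and 25.83 corpus scores should be read.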
## Usage
```python
from unsloth import FastLanguageModel
from peft import PeftModel
import torch

# Load the 4-bit quantized base model, then attach the LoRA adapter.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit",
    max_seq_length=512,
    dtype=torch.float16,
    load_in_4bit=True,
)
model = PeftModel.from_pretrained(model, "DragonLegend/marathi-qwen2.5-lora")
```
## Model tree for DragonLegend/marathi-qwen2.5-lora
- Qwen/Qwen2.5-1.5B (base model)
- Qwen/Qwen2.5-1.5B-Instruct (fine-tuned)
- unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit (quantized)