# Qwen2.5-Coder-1.5B Docstring SFT (LoRA r=64)

A LoRA adapter fine-tuned on top of Qwen2.5-Coder-1.5B-Instruct for Python docstring generation.

## Results

| Model | BLEU | ROUGE-L |
|---|---|---|
| Baseline (no fine-tuning) | 0.0063 | 0.1064 |
| This model (LoRA r=64) | 0.0381 | 0.2188 |
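The reported scores were computed with standard metric implementations (see the linked repository for the actual evaluation script). As a reference point for how BLEU behaves, here is a simplified sentence-level sketch — modified n-gram precision with add-one smoothing plus a brevity penalty — not the exact implementation used for the table above:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, hypothesis, max_n=4):
    """Simplified sentence-level BLEU: smoothed n-gram precision * brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        # Clipped overlap: each hypothesis n-gram counts at most as often as in the reference
        overlap = sum(min(count, ref_ngrams[g]) for g, count in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        # Add-one smoothing avoids log(0) when no higher-order n-gram matches
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n
    # Brevity penalty: punish hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_prec)
```

An identical hypothesis scores 1.0, and the score decays as fewer n-grams match, which is why even modest absolute BLEU gains (0.0063 → 0.0381) reflect a substantial improvement in n-gram overlap.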

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")
model = PeftModel.from_pretrained(base_model, "galileo680/qwen2.5-coder-1.5b-docstring-sft")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant specialized in generating Python docstrings."},
    {"role": "user", "content": "Generate a Python docstring for the following function:\n\n```python\ndef calculate_distance(p1, p2):\n    dx = p1[0] - p2[0]\n    dy = p1[1] - p2[1]\n    return (dx**2 + dy**2) ** 0.5\n```"}
]

input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

## Training

- **Method:** QLoRA (4-bit NF4 + LoRA r=64, alpha=128)
- **Data:** 22,500 examples from CodeSearchNet (Python), quality-filtered
- **Epochs:** 3
- **Hardware:** Google Colab A100
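The bullets above correspond roughly to the PEFT/bitsandbytes configuration sketched below. Only `r=64`, `lora_alpha=128`, and 4-bit NF4 quantization come from this card; the target modules, dropout, and compute dtype are assumptions — see the linked repository for the actual training script.

```python
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# QLoRA: the frozen base model is quantized to 4-bit NF4
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,      # assumed
    bnb_4bit_compute_dtype="bfloat16",   # assumed
)

# LoRA adapter matching the card: r=64, alpha=128
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,                                         # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # assumed
    task_type="CAUSAL_LM",
)
```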

Full details: GitHub repository
