---
language:
  - en
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
  - code
  - python
  - educational
  - lora
  - qwen
library_name: peft
---

# Qwen2.5-Coder-1.5B-Educational (LoRA)

A LoRA adapter for Qwen2.5-Coder-1.5B-Instruct, fine-tuned for educational code generation.

## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    device_map="auto",
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "YOUR_USERNAME/qwen-coder-1.5b-educational")
tokenizer = AutoTokenizer.from_pretrained("YOUR_USERNAME/qwen-coder-1.5b-educational")

# Generate code
prompt = "Instruction: Write a Python function to reverse a string Réponse: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

- Method: LoRA (r=8, alpha=16, dropout=0.05); a configuration sketch follows this list
- Dataset: OpenCoder-LLM/opc-sft-stage2 (educational_instruct)
- Steps: 2000
- Final Loss: 0.530
- Hardware: TPU v6e-16
- Training Time: 43 minutes
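
For reference, the hyperparameters above correspond to a PEFT configuration along these lines. This is a sketch, not the exact training script; `target_modules` is an assumption, since the card does not list which layers were adapted:

```python
from peft import LoraConfig

# Sketch of a LoRA config matching the listed hyperparameters.
# target_modules is an assumption: the card does not state which
# projection layers were adapted.
lora_config = LoraConfig(
    r=8,                 # rank of the low-rank update matrices
    lora_alpha=16,       # scaling factor (effective scale alpha/r = 2)
    lora_dropout=0.05,   # dropout on the LoRA branch
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```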

## Performance

Improved over the base model on:

- Educational Python code generation
- Pythonic idioms and patterns
- Object-oriented architecture
- Code documentation and comments

## License

Apache 2.0