---
language: en
tags:
- qwen2
- lora
- fine-tuned
- code-generation
- opencodeinstruct
license: apache-2.0
base_model: Qwen/Qwen2-0.5B-Instruct
---
# Qwen2-0.5B LoRA Fine-tuned on OpenCodeInstruct
This model is a LoRA fine-tuned version of Qwen/Qwen2-0.5B-Instruct on the OpenCodeInstruct dataset.
## Model Details
- **Base Model**: Qwen/Qwen2-0.5B-Instruct
- **Fine-tuning Dataset**: OpenCodeInstruct (300 samples)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
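For reference, a `peft` `LoraConfig` consistent with these hyperparameters might look like the sketch below. Only the rank and alpha come from this card; the target modules and dropout are assumptions, since they are not documented here.

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter config used for fine-tuning
lora_config = LoraConfig(
    r=16,                  # LoRA rank (from this card)
    lora_alpha=32,         # LoRA alpha (from this card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,     # assumed
    bias="none",
    task_type="CAUSAL_LM",
)
```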
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

# Attach the LoRA adapters
model = PeftModel.from_pretrained(base_model, "alpayH/qwen2-0.5b-lora-opencodeinstruct")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("alpayH/qwen2-0.5b-lora-opencodeinstruct")

# Generate code; max_new_tokens bounds the completion length rather than
# the total sequence length, so long prompts are not cut short
prompt = "### Instruction:\nWrite a Python function to reverse a string\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
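If you want to ship the model without a `peft` dependency at inference time, you can merge the adapters into the base weights using standard `peft` functionality. The output directory name below is just an example.

```python
# Merge the LoRA weights into the base model and save a standalone copy
merged_model = model.merge_and_unload()
merged_model.save_pretrained("qwen2-0.5b-opencodeinstruct-merged")
tokenizer.save_pretrained("qwen2-0.5b-opencodeinstruct-merged")
```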
## Training Details
- **Learning Rate**: 2e-4
- **Batch Size**: 16 (effective, with gradient accumulation)
- **Epochs**: 3
- **Precision**: bfloat16
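A `transformers` `TrainingArguments` setup consistent with these values is sketched below. Only the learning rate, effective batch size, epoch count, and precision are documented; the 4 × 4 split into per-device batch size and gradient accumulation steps is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2-0.5b-lora-opencodeinstruct",
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # assumed split
    gradient_accumulation_steps=4,   # 4 * 4 = 16 effective batch size
    num_train_epochs=3,
    bf16=True,
    logging_steps=10,                # assumed
)
```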
## Evaluation
This model has been evaluated on LiveCodeBench. See the main repository for evaluation results.
## License
Apache 2.0