---
language: code
tags:
- code-generation
- python
- fine-tuned
- qlora
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
datasets:
- iamtarun/python_code_instructions_18k_alpaca
license: mit
---

# Qwen2.5-Coder-0.5B Python Fine-tuned

Fine-tuned version of [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) for Python code generation.

## Model Details

- **Base Model**: Qwen/Qwen2.5-Coder-0.5B-Instruct
- **Fine-tuning Method**: QLoRA (4-bit quantization + LoRA adapters; configuration sketched below)
- **Dataset**: iamtarun/python_code_instructions_18k_alpaca
- **Task**: Python code generation from natural language instructions
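
The exact training configuration is not published in this card, so the snippet below is only a minimal sketch of a typical QLoRA setup with `bitsandbytes` and `peft`; the quantization settings, LoRA rank, alpha, and target modules are illustrative assumptions, not the values used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (assumed settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-0.5B-Instruct",
    quantization_config=bnb_config,
)

# LoRA adapters on the attention projections; rank and alpha are hypothetical
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```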

## Training Details

- **Training Samples**: 16,000
- **Validation Samples**: 1,000 (held-out split sketched below)
- **Epochs**: 3
- **Training Time**: N/A
- **Final Loss**: N/A
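
The 18k-example dataset accounts for the 16,000/1,000 figures above. The sketch below shows one way such a split could be produced with the `datasets` library; the seed and split method are assumptions, not the published recipe.

```python
from datasets import load_dataset

# Load the instruction dataset used for fine-tuning
ds = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

# Hypothetical split reproducing the reported 16,000 / 1,000 sample counts
split = ds.train_test_split(test_size=1_000, seed=42)
train_ds = split["train"].select(range(16_000))
val_ds = split["test"]

print(len(train_ds), len(val_ds))  # 16000 1000
```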

## Performance

- **Syntax Validity**: 95.2%
- **Pass@1**: 54.4%
- **Verbosity Reduction**: 95%
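
Syntax validity is typically computed as the fraction of generated completions that parse as Python. The sketch below shows one common way to measure it with the standard-library `ast` module; it is illustrative, not necessarily the harness behind the numbers above.

```python
import ast

def parses(code: str) -> bool:
    """Return True if the code parses as valid Python."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def syntax_validity(samples: list[str]) -> float:
    """Return the fraction of code samples that parse as valid Python."""
    return sum(parses(code) for code in samples) / len(samples)

print(syntax_validity(["def f(s): return s[::-1]", "def broken(:"]))  # 0.5
```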

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "KpRT/qwen-python-finetuned", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("KpRT/qwen-python-finetuned")

# The model is instruction-tuned, so wrap the request in its chat template
messages = [{"role": "user", "content": "Write a function to reverse a string"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt
code = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(code)
```
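
If the repository hosts LoRA adapters rather than fully merged weights, the adapter can instead be attached to the base model with `peft`; this assumes a PEFT adapter config is present in the repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned LoRA adapter
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "KpRT/qwen-python-finetuned")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct")

# Optionally fold the adapter into the base weights for faster inference
model = model.merge_and_unload()
```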

## Citation

If you use this model, please cite:

```bibtex
@misc{qwen-python-finetuned,
  author = {K R T},
  title = {Qwen2.5-Coder Python Fine-tuned},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/KpRT/qwen-python-finetuned}
}
```