---
language:
- en
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- lora
- code
- code-generation
- qwen
library_name: transformers
datasets:
- Naholav/CodeGen-Deep-5K
---

# Qwen2.5-Coder-1.5B LoRA Fine-tuned (DEEP Dataset)

This model was fine-tuned with LoRA on the DEEP dataset, starting from the Qwen2.5-Coder-1.5B-Instruct base model, and then merged back into the base model.

## 🎯 Model Description

- **Base Model:** Qwen/Qwen2.5-Coder-1.5B-Instruct
- **Dataset:** Naholav/CodeGen-Deep-5K
- **Training Steps:** 1128
- **Method:** LoRA (Low-Rank Adaptation)
- **Merge Status:** merged into the base model

## 📊 Training Hyperparameters
```yaml
Learning Rate: 1.5e-4
LoRA Rank: 32
LoRA Alpha: 64
LoRA Dropout: 0.08
Target Modules: q_proj, k_proj, v_proj, o_proj
Batch Size: 8
Epochs: 4
Context Length: 1024
Optimizer: paged_adamw_8bit
Scheduler: Cosine
Weight Decay: 0.01
Warmup Ratio: 0.05
```
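For reference, the LoRA hyperparameters above map onto a PEFT `LoraConfig` roughly as follows. This is a minimal sketch, not the actual training script; the values are taken from the table, but the argument names are simply the `peft` library's standard ones:

```python
from peft import LoraConfig

# Sketch of the LoRA configuration implied by the hyperparameters above.
lora_config = LoraConfig(
    r=32,                                                     # LoRA rank
    lora_alpha=64,                                            # scaling factor
    lora_dropout=0.08,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
```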
## Training Curves

![Screen Shot 2025-12-08 at 21.32.43 PM](https://cdn-uploads.huggingface.co/production/uploads/6925861c23ddaaf1bc26fec9/HLk_1ZJqIwRWzuVTQqsW1.png)
![Screen Shot 2025-12-08 at 21.32.58 PM](https://cdn-uploads.huggingface.co/production/uploads/6925861c23ddaaf1bc26fec9/dFYi8K2NMQn22rtPBpPWh.png)
![Screen Shot 2025-12-08 at 21.33.28 PM](https://cdn-uploads.huggingface.co/production/uploads/6925861c23ddaaf1bc26fec9/-ef7WQ5YzSEau2WA_GoWj.png)

## Usage

### Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "MehmetDORA/qwen2.5-coder-1.5b-deep-lora-merged-deneme3",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("MehmetDORA/qwen2.5-coder-1.5b-deep-lora-merged-deneme3")

# Generate code
prompt = "Write a Python function to calculate the factorial of a number"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Usage with a System Prompt
```python
messages = [
    {"role": "system", "content": "You are an expert Python programmer. Please read the problem carefully before writing any Python code."},
    {"role": "user", "content": "Write a function to check if a string is a palindrome"}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## 📈 Evaluation Results

- **Validation Loss:** 0.963
- **Test Loss:** 0.XXX
- **Pass@1:** XX%

## 💾 Model Size

- **Parameters:** ~1.5B
- **Size:** ~3GB (FP16)
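The ~3 GB figure follows directly from the parameter count, since FP16 stores each parameter in 2 bytes. A quick back-of-envelope check:

```python
# Estimate the FP16 checkpoint size from the parameter count.
params = 1.5e9           # ~1.5B parameters
bytes_per_param = 2      # FP16 = 16 bits = 2 bytes
size_gb = params * bytes_per_param / 1024**3
print(f"~{size_gb:.2f} GB")  # ~2.79 GB, consistent with the ~3 GB quoted above
```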

## ⚠️ Limitations

- The model was trained with a 1024-token context length
- Optimized for Python code generation only
- Does not include reasoning traces (only the solution field was used in training)