---
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- lora
- code-generation
- fine-tuning
- competitive-programming
datasets:
- Naholav/CodeGen-Deep-5K
language:
- en
pipeline_tag: text-generation
---

# Deep Think - LoRA Fine-tuned Qwen2.5-Coder-1.5B

This is the best-performing checkpoint from the **deep_think** training configuration.

## Model Details

| Property | Value |
|----------|-------|
| Base Model | Qwen/Qwen2.5-Coder-1.5B-Instruct |
| Training Dataset | [Naholav/CodeGen-Deep-5K](https://huggingface.co/datasets/Naholav/CodeGen-Deep-5K) |
| Training Method | LoRA (Low-Rank Adaptation) |
| Checkpoint | step-500, epoch-2 |
| Pass@1 (AtCoder Easy) | **31.71%** (13/41 problems) |
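
The training data referenced above can be pulled from the Hub with the `datasets` library. This is a minimal sketch; the split and column names are not documented in this card, so inspect the loaded object to confirm them:

```python
from datasets import load_dataset

# Load the training dataset from the Hugging Face Hub.
ds = load_dataset("Naholav/CodeGen-Deep-5K")
print(ds)              # shows the available splits and column names
print(ds["train"][0])  # peek at one example, assuming a "train" split exists
```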

## Training Configuration

- **Prompt Style:** Think (uses `<think>` tags for reasoning)
- **System Prompt:** `"You are an expert programmer. Use <think> tags for reasoning before writing code."`
- **LoRA Rank:** 32
- **LoRA Alpha:** 64
- **LoRA Dropout:** 0.05
- **Learning Rate:** 5e-5


**Note:** All four models were trained with identical hyperparameters to allow a fair comparison; better configurations may exist and could be found via hyperparameter search (e.g., grid or random search).
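
For reference, the LoRA settings above map onto a `peft` configuration roughly like the sketch below. The target modules and task type are assumptions (they are not listed in this card), so treat this as illustrative rather than the exact training setup:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")

lora_config = LoraConfig(
    r=32,              # LoRA rank
    lora_alpha=64,     # LoRA alpha
    lora_dropout=0.05, # LoRA dropout
    # Assumed attention projections; the actual target modules are not documented here.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # train with your preferred trainer at lr=5e-5
```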

## All Models Performance Comparison

Evaluated on LiveCodeBench AtCoder Easy problems (41 questions):

| Model | Pass@1 | Relative Improvement |
|-------|--------|-------------|
| Base Model (Qwen2.5-Coder-1.5B) | 24.39% | - |
| [deep-instruction](https://huggingface.co/Naholav/deep-instruction) | 26.83% | +10% |
| [diverse-think](https://huggingface.co/Naholav/diverse-think) | 29.27% | +20% |
| **[deep-think](https://huggingface.co/Naholav/deep-think) (this model)** | **31.71%** | **+30%** |
| [diverse-instruction](https://huggingface.co/Naholav/diverse-instruction) | 31.71% | +30% |
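
Here, improvement is relative to the base model's 24.39% Pass@1 (e.g., 31.71 / 24.39 ≈ 1.30, i.e. +30%); for this model, Pass@1 corresponds to 13 of 41 problems solved. If you re-evaluate with several samples per problem, the standard unbiased pass@k estimator applies. The snippet below is a generic sketch, not the evaluation code used for this card:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# With a single sample per problem, pass@1 is simply the solve rate:
print(13 / 41)  # ≈ 0.3171 → 31.71%
```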

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Naholav/deep-think")

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")

# Generate with think prompt
messages = [
    {"role": "system", "content": "You are an expert programmer. Use <think> tags for reasoning before writing code."},
    {"role": "user", "content": "Your problem here..."}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
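
Because the think prompt asks the model to reason inside `<think>` tags before writing code, you will usually want to strip the reasoning and keep only the code. The helper below is a minimal sketch; it assumes the completion contains a fenced code block after the `</think>` tag, which may not always hold:

```python
import re

def extract_code(completion: str) -> str:
    """Drop <think>...</think> reasoning and return the first fenced code block, if any."""
    without_think = re.sub(r"<think>.*?</think>", "", completion, flags=re.DOTALL)
    match = re.search(r"```(?:python)?\n(.*?)```", without_think, flags=re.DOTALL)
    return match.group(1).strip() if match else without_think.strip()

code = extract_code(tokenizer.decode(outputs[0], skip_special_tokens=True))
print(code)
```

For deployment without a runtime `peft` dependency, the adapter can also be folded into the base weights with `model.merge_and_unload()` before saving.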

## Resources

- **GitHub Repository:** [https://github.com/naholav/CodeGen](https://github.com/naholav/CodeGen)
- **Training Dataset:** [Naholav/CodeGen-Deep-5K](https://huggingface.co/datasets/Naholav/CodeGen-Deep-5K)

## Citation

If you use this model, please cite:

```
@misc{naholav2025codegen,
  author = {naholav},
  title = {CodeGen: LoRA Fine-tuning for Competitive Programming},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Naholav/deep-think}
}
```