---
language: en
license: mit
tags:
- code-generation
- python
- lora
- peft
- causal-lm
base_model: microsoft/CodeGPT-small-py
---

# CodeGPT LoRA Fine-tuned for Code Generation

A fine-tuned version of microsoft/CodeGPT-small-py using LoRA (Low-Rank Adaptation) for Python code generation.

## 🔗 Links
- **Live Demo:** https://huggingface.co/spaces/Pradnya27/code-generator
- **Full Fine-tuned Model:** https://huggingface.co/Pradnya27/codegpt-finetuned-code-generation
- **GitHub:** https://github.com/pradnyagundu/codegpt-finetuned-code-generation

## Model Details
- **Base model:** microsoft/CodeGPT-small-py (124M parameters)
- **Method:** LoRA (Low-Rank Adaptation)
- **Trainable parameters:** 589,824 (0.36% of total)
- **Model size:** 2.36MB (vs 651MB for full fine-tuning)
- **Dataset:** Rabinovich/Code-Generation-LLM-LoRA (5000 examples)
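For intuition, the 589,824 trainable-parameter figure is consistent with LoRA at rank 16 applied to the fused `c_attn` attention projection (768 → 2304) in each of the 12 GPT-2-style transformer blocks — an assumption about the target modules, since the card does not list them, but it matches PEFT's default for GPT-2-family models:

```python
def lora_param_count(rank: int, shapes: list[tuple[int, int]]) -> int:
    """Each adapted (d_in, d_out) layer adds A (d_in x r) plus B (r x d_out) weights."""
    return sum(rank * (d_in + d_out) for d_in, d_out in shapes)

# Assumed: 12 blocks, hidden size 768, fused c_attn projection 768 -> 3*768 = 2304
shapes = [(768, 2304)] * 12
print(lora_param_count(16, shapes))  # 589824 — matches the card
```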

## Training Details
- **Epochs:** 2
- **Learning rate:** 3e-4
- **Batch size:** 8
- **LoRA rank:** 16
- **LoRA alpha:** 32
- **Hardware:** Google Colab T4 GPU
- **Training time:** ~9 minutes
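The hyperparameters above map onto a PEFT `LoraConfig` roughly as follows. This is a sketch, not the exact training script: the `lora_dropout` value and `target_modules` are assumptions (the card does not state them; `c_attn` is PEFT's default for GPT-2-style models), while `r` and `lora_alpha` come from the card.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/CodeGPT-small-py")
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                        # LoRA rank (from the card)
    lora_alpha=32,               # LoRA alpha (from the card)
    lora_dropout=0.05,           # assumption: not stated in the card
    target_modules=["c_attn"],   # assumption: PEFT default for GPT-2-style models
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```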

## Training Results
| Step | Loss |
|------|------|
| 100 | 4.28 |
| 300 | 3.45 |
| 500 | 3.28 |
| 700 | 3.24 |
| 900 | 3.15 |
| 1200 | 3.14 |

## Comparison vs Full Fine-tuning
| | Full Fine-tune | LoRA |
|---|---|---|
| Final loss | 2.31 | 3.14 |
| Model size | 651MB | 2.36MB |
| Training time | ~14 min | ~9 min |
| Trainable params | 124M | 589K |
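As a sanity check, the 2.36MB adapter size is simply the trainable parameter count stored as 32-bit floats:

```python
trainable = 589_824          # LoRA trainable parameters (from the card)
bytes_fp32 = trainable * 4   # 4 bytes per float32 weight
print(f"{bytes_fp32 / 1e6:.2f} MB")  # 2.36 MB — matches the reported adapter size
```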

## How to Use
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the base model, then apply the LoRA adapter on top
base_model = AutoModelForCausalLM.from_pretrained("microsoft/CodeGPT-small-py")
model = PeftModel.from_pretrained(base_model, "Pradnya27/codegpt-lora-code-generation")
tokenizer = AutoTokenizer.from_pretrained("microsoft/CodeGPT-small-py")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers have no pad token

prompt = "Generate code: Write a function to check if a number is prime"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,  # passes attention_mask along with input_ids
        max_new_tokens=150,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations
- Trained on competitive programming problems — works best for algorithmic tasks
- Small base model (124M params) limits output quality
- Full fine-tuning achieves lower loss on this dataset

## Future Work
- Train on full 34K dataset
- Increase LoRA rank to r=32 or r=64
- Evaluate on HumanEval benchmark