---
language:
- en
- code
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- lora
- code
- qwen2.5-coder
- fingpt
- code-correction
pipeline_tag: text-generation
---
# fingpt-coder-1b5
LoRA adapter for **[Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)** fine-tuned on
[m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)
(~66K error→fix pairs, 3 epochs).
> **Adapter only** — the base model is loaded from the HF Hub automatically.
> Total download: ~84 MB adapter + ~3 GB base model.
---
## LoRA config
| Property | Value |
|----------|-------|
| Base model | `Qwen/Qwen2.5-Coder-1.5B-Instruct` |
| Rank (r) | 16 |
| Alpha | 32 (scale = 2.0) |
| Target modules | `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj` |
| Training steps | 48,500 |
| Adapter size | ~84 MB |
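
In `peft` terms, the table corresponds to a `LoraConfig` roughly like the one below. This is a reconstruction from the values above, not the exact training code:

```python
from peft import LoraConfig

# LoRA hyperparameters as reported in the table; scale = lora_alpha / r = 2.0
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```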
---
## Quick start
```bash
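# the ~84 MB adapter is stored via Git LFS, so install git-lfs before cloning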
git clone https://huggingface.co/revana/fingpt-coder-1b5
```
```python
import sys
sys.path.insert(0, "fingpt-coder-1b5")  # root of the cloned repo
from infer import load_model, generate
model, tokenizer = load_model("fingpt-coder-1b5/adapter_final.pt")
reply = generate(model, tokenizer, "Fix this bug:\n\ndef fact(n):\n    return n * fact(n)")
print(reply)
```
Or use the [live demo](https://huggingface.co/spaces/revana/fingpt).
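
The quick start above relies on the repo's bundled `infer.py`. If the adapter is also published in standard PEFT format (an assumption — this card only mentions `adapter_final.pt`), it could be loaded straight from the Hub instead:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-1.5B-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Requires a PEFT-format adapter (adapter_config.json + weights) in the repo
model = PeftModel.from_pretrained(base, "revana/fingpt-coder-1b5")

messages = [{"role": "user", "content": "Fix this bug:\n\ndef fact(n):\n    return n * fact(n)"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```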
---
## Training
| Property | Value |
|----------|-------|
| Dataset | [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback) |
| Samples | ~66K error→fix pairs |
| Epochs | 3 |
| Batch size | 4 (micro-batch) × 4 (grad accumulation) = 16 effective |
| LR | 3e-4, cosine decay, 3% warmup |
| Precision | bfloat16 |
| Hardware | A100 80GB |
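
For reference, the schedule in the table maps onto `transformers.get_cosine_schedule_with_warmup` as sketched below. The training script is not shown here, and the AdamW optimizer is an assumption:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Peak LR 3e-4, 3% warmup, cosine decay over the run (~48,500 steps reported above)
total_steps = 48_500
params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in; use model.parameters() in practice
optimizer = torch.optim.AdamW(params, lr=3e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.03 * total_steps),  # ≈ 1,455 warmup steps
    num_training_steps=total_steps,
)
```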
---
## License
Apache 2.0