---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
license: apache-2.0
language:
- en
tags:
- sql
- code-generation
- text-to-sql
- phi-3
- lora
- qlora
- fine-tuned
- peft
pipeline_tag: text-generation
---

# Phi-3 Mini SQL Generator (QLoRA Fine-tuned)

A fine-tuned version of [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
for **natural language → SQL** generation, trained with QLoRA on a single T4 GPU (Google Colab, ~20 min).

## Evaluation — Base vs Fine-tuned

Evaluated on 200 held-out examples from [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context).

| Model | Exact Match |
|---|---|
| Phi-3-mini-4k-instruct (base) | 2.0% |
| **This adapter (fine-tuned)** | **73.5%** |
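
Exact match compares the generated SQL against the reference query as strings. The exact normalization used here is not documented, so the sketch below is an assumption: whitespace-collapsed, case-insensitive comparison with trailing semicolons dropped.

```python
import re

def normalize_sql(sql: str) -> str:
    # Assumed normalization: collapse whitespace, drop a trailing
    # semicolon, and lowercase before comparing
    return re.sub(r"\s+", " ", sql).strip().rstrip(";").lower()

def exact_match(predictions, references):
    hits = sum(normalize_sql(p) == normalize_sql(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

# e.g. exact_match(["SELECT * FROM t;"], ["select *  from t"]) -> 1.0
```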

## Training Details

- **Dataset:** b-mc2/sql-create-context — 1,000 train / 200 validation examples
- **Epochs:** 3
- **Effective batch size:** 8
- **Learning rate:** 0.0002
- **Hardware:** NVIDIA T4 (Google Colab free tier)
- **Training time:** 21.2 min
- **Final train loss:** 0.6526
- **Best checkpoint:** step 250 (by eval loss)
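
For reference, a minimal `transformers.TrainingArguments` sketch consistent with the values above. The per-device batch size / gradient-accumulation split and the evaluation interval are assumptions, not the exact script used; only the effective batch size of 8 is documented.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi3-mini-sql-generator",
    num_train_epochs=3,
    per_device_train_batch_size=2,   # 2 x 4 accumulation = effective batch 8 (assumed split)
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    fp16=True,                       # T4 has no bf16 support
    eval_strategy="steps",
    eval_steps=50,                   # assumed interval
    save_steps=50,
    load_best_model_at_end=True,     # selects the best checkpoint by eval loss
    metric_for_best_model="eval_loss",
    logging_steps=10,
)
```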

## LoRA Config

| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target modules | `down_proj`, `qkv_proj`, `gate_up_proj`, `o_proj` |
| Quantization | 4-bit NF4 (QLoRA) |
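
The table maps one-to-one onto a `peft.LoraConfig` plus a bitsandbytes 4-bit quantization config. A sketch of how these values would be declared (the compute dtype is an assumption; fp16 fits the T4):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base weights (QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumed; T4 lacks bf16
)

# Trainable low-rank adapters on the attention and MLP projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["down_proj", "qkv_proj", "gate_up_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```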

## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# The tokenizer is shipped with the adapter repo
tokenizer = AutoTokenizer.from_pretrained("Shizu0n/phi3-mini-sql-generator", trust_remote_code=True)

# Load the fp16 base model, then attach the LoRA adapter on top
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.float16, device_map="auto", trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, "Shizu0n/phi3-mini-sql-generator")
model.eval()

prompt = "Given the following SQL table, write a SQL query.\n\n"\
         "Table: employees (id, name, department, salary)\n\n"\
         "Question: What is the average salary per department?\n\nSQL:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    # Greedy decoding keeps the SQL output deterministic
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Strip the prompt tokens so only the generated SQL is printed
prompt_len = inputs["input_ids"].shape[-1]
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))
```
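
Loading the base model in fp16 takes roughly 8 GB of GPU memory. On smaller GPUs, the base can instead be loaded in the same 4-bit NF4 format used during training; a sketch:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
# Attach the adapter exactly as above
```

When the base is loaded in fp16, `model.merge_and_unload()` folds the adapter into the base weights, so the model can then be served without the `peft` dependency.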

## Limitations

- Fine-tuned on 1,000 examples — best suited for simple to medium complexity SELECT queries
- Not tested on dialect-specific SQL (PostgreSQL/MySQL-specific functions)
- May struggle with multi-table JOINs and nested subqueries