---
base_model: microsoft/phi-2
tags:
  - sql
  - text-to-sql
  - lora
  - qlora
  - pytorch
license: mit
language:
  - en
---

# Phi-2 SQL LoRA (lr=2e-4)

Fine-tuned [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on
[b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context)
using QLoRA, achieving **76% exact match** on SQL generation, up from a 2% baseline.

This is **Run 1** (lr=2e-4), the best-performing run.
See also: [phi2-sql-lora-lr5e4](https://huggingface.co/antony-bryan-3D2Y/phi2-sql-lora-lr5e4) (lr=5e-4, 70% EM)

## Results

| Model | Exact Match | ROUGE-L | Δ EM vs Base |
|---|---|---|---|
| Phi-2 Base | 2.0% | 0.886 | – |
| **This model (lr=2e-4)** | **76.0%** | **0.9903** | **+74 pp** |

Evaluated on 50 held-out samples from sql-create-context (seed=42).
Zero regressions: every query the base model got right, this model also got right.
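
For clarity, here is a minimal sketch of how the exact-match metric is typically computed. The normalization and the `preds`/`refs` values below are illustrative assumptions, not the actual evaluation script:

```python
def exact_match(pred: str, ref: str) -> bool:
    # Whitespace-normalized string equality (assumed normalization;
    # the real eval may compare raw strings instead).
    norm = lambda s: " ".join(s.strip().split())
    return norm(pred) == norm(ref)

# Illustrative predictions/references, not the real held-out set
preds = ['SELECT name FROM employees WHERE department = "engineering"']
refs  = ['SELECT name  FROM employees WHERE department = "engineering"']
em = 100 * sum(map(exact_match, preds, refs)) / len(refs)
print(f"Exact match: {em:.1f}%")  # 100.0%
```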

## Training Details

| Parameter | Value |
|---|---|
| Method | QLoRA (4-bit NF4 + LoRA) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Target modules | q_proj, v_proj |
| Dataset | 20,000 samples from sql-create-context |
| Epochs | 2 |
| Learning rate | 2e-4 |
| Effective batch size | 16 |
| Hardware | 2× NVIDIA T4 (Kaggle) |
| Training time | ~7 hours |
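
For reference, the hyperparameters above map roughly to the following peft/bitsandbytes setup. This is a sketch, not the actual training script; `lora_dropout`, `bias`, and the compute dtype are assumptions not listed in the table.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: 4-bit NF4 quantization for the frozen base weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumption: fp16 compute on T4
)

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", quantization_config=bnb_config,
    device_map="auto", trust_remote_code=True,
)
base = prepare_model_for_kbit_training(base)

# LoRA adapter matching the table: rank 16, alpha 32, q_proj/v_proj
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,  # assumption: not listed above
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```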

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig
import torch

model_name = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
config.pad_token_id = tokenizer.pad_token_id

base = AutoModelForCausalLM.from_pretrained(
    model_name, config=config,
    torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "antony-bryan-3D2Y/phi2-sql-lora-lr2e4")
model.eval()

prompt = """### SQL Schema:
CREATE TABLE employees (id INT, name VARCHAR, department VARCHAR, salary INT)

### Question:
What are the names of employees in the engineering department?

### SQL Query:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100, do_sample=False,
                            eos_token_id=tokenizer.eos_token_id,
                            pad_token_id=tokenizer.pad_token_id)

# Decode only the newly generated tokens (skip the prompt),
# then keep just the first line of the completion.
n = inputs['input_ids'].shape[1]
result = tokenizer.decode(output[0][n:], skip_special_tokens=True)
result = result.replace("</s>", "").replace("<|endoftext|>", "").split('\n')[0].strip()
print(result)
# → SELECT name FROM employees WHERE department = "engineering"
```
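
For deployment, the adapter can optionally be merged into the base weights so inference runs without the peft wrapper (standard peft API; the output path below is illustrative):

```python
# Fold the LoRA deltas into the fp16 base weights and save a plain
# transformers checkpoint that loads without peft.
merged = model.merge_and_unload()
merged.save_pretrained("./phi2-sql-merged")
tokenizer.save_pretrained("./phi2-sql-merged")
```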

## Links
- 📓 Training notebook: [llm-finetune-eval](https://github.com/antony-bryan/llm-finetune-eval)
- 📊 W&B training runs: [phi2-sql-finetune](https://wandb.ai/antonybryan2-00-anthropic/phi2-sql-finetune)
- 🔗 Run 2 (lr=5e-4): [phi2-sql-lora-lr5e4](https://huggingface.co/antony-bryan-3D2Y/phi2-sql-lora-lr5e4)