---
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- qwen2.5
- lora
- text-to-sql
- sql
- peft
library_name: peft
---

# Qwen2.5-7B LoRA Fine-tuned for Text-to-SQL

This is a **LoRA adapter** for [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), fine-tuned to convert natural-language questions into SQL queries.

## Quick Links

- 🔗 **Merged Model (Ready-to-use):** [vindows/qwen2.5-7b-text-to-sql-merged](https://huggingface.co/vindows/qwen2.5-7b-text-to-sql-merged)
- 🖥️ **GGUF for CPU Inference:** [vindows/qwen2.5-7b-text-to-sql-gguf](https://huggingface.co/vindows/qwen2.5-7b-text-to-sql-gguf)

## Model Performance

### Evaluation Metrics (54 held-out test examples)

| Metric | Base Model | Fine-tuned | Reduction |
|--------|-----------|------------|-----------|
| Loss | 2.1301 | 0.4098 | 80.76% ⬇️ |
| Perplexity | 8.4155 | 1.5064 | 82.10% ⬇️ |

Lower is better for both metrics; perplexity is simply exp(loss) (exp(2.1301) ≈ 8.42, exp(0.4098) ≈ 1.51), so both rows reflect the same underlying improvement.

### Spider Benchmark Results (200 examples)

| Metric | Score |
|--------|-------|
| Exact Match | 0.00% |
| Normalized Match | 0.50% |
| Component Accuracy | 92.60% |
| Average Similarity | 25.47% |

**Note:** The model shows strong component understanding but tends to append explanatory text after SQL queries, affecting exact match scores. See limitations below.

## Training Details

- **Training Time:** 6 minutes 15 seconds
- **Epochs:** 3
- **LoRA Rank:** 16
- **LoRA Alpha:** 32
- **Learning Rate:** 2e-4
- **Dataset:** 425 training examples, 54 validation, 54 test
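For reference, the LoRA settings above translate into a PEFT `LoraConfig` roughly like the sketch below. The `target_modules` and `lora_dropout` values are assumptions, not recorded settings (the authoritative values ship in `adapter_config.json`), and the learning rate and epoch count are trainer settings rather than part of the adapter config.

```python
from peft import LoraConfig

# A minimal sketch reconstructing the reported LoRA settings.
# target_modules and lora_dropout are assumptions; see
# adapter_config.json for the authoritative configuration.
lora_config = LoraConfig(
    r=16,                       # LoRA rank (reported above)
    lora_alpha=32,              # LoRA alpha (reported above)
    lora_dropout=0.05,          # assumed; not reported
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    bias="none",
    task_type="CAUSAL_LM",
)
```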

## Usage

### With PEFT (Recommended for fine-tuning/adapters)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "vindows/qwen2.5-7b-text-to-sql")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct", trust_remote_code=True)

# Generate SQL
prompt = "Convert the following natural language question to SQL:\n\nDatabase: concert_singer\nQuestion: How many singers do we have?\n\nSQL:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.1)
# Note: the decoded text includes the prompt; see "Recommended Post-Processing" below.
sql = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(sql)
```

### Using the Merged Model (Easier)

To avoid loading the base model and adapter separately, use the merged model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "vindows/qwen2.5-7b-text-to-sql-merged",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("vindows/qwen2.5-7b-text-to-sql-merged")
```

## Limitations

1. **Appends Explanatory Text:** The model tends to append explanatory text or context after generating SQL queries. Post-processing to extract only the SQL statement is recommended.
2. **Hallucinated Table Names (0.5B variant):** The smaller 0.5B variant of this fine-tune sometimes invents table names that are not present in the schema.
3. **Training Data Distribution:** Performance is best on queries that resemble the training examples.

## Recommended Post-Processing

```python
def extract_sql(generated_text):
    # Extract SQL after the "SQL:" marker
    if "SQL:" in generated_text:
        sql = generated_text.split("SQL:")[-1].strip()
    else:
        sql = generated_text

    # Take only the first SQL statement (before extra text)
    if '\n\n' in sql:
        sql = sql.split('\n\n')[0].strip()

    # Remove trailing semicolon if present
    sql = sql.rstrip(';').strip()

    return sql
```
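For example, applied to a raw completion in the prompt format used above (the sample text is illustrative, not an actual model transcript):

```python
raw = (
    "Convert the following natural language question to SQL:\n\n"
    "Database: concert_singer\nQuestion: How many singers do we have?\n\n"
    "SQL: SELECT count(*) FROM singer\n\n"
    "This query counts all rows in the singer table."
)
print(extract_sql(raw))  # -> SELECT count(*) FROM singer
```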

## Files Included

- `adapter_config.json` - LoRA configuration
- `adapter_model.safetensors` - LoRA weights
- `README.md` - This file

## Citation

```bibtex
@misc{qwen2.5-7b-text-to-sql,
  title = {Qwen2.5-7B LoRA Fine-tuned for Text-to-SQL},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/vindows/qwen2.5-7b-text-to-sql}
}
```

## License

Apache 2.0 (inherited from the base Qwen2.5 model)