---
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- text-to-sql
- reinforcement-learning
---
# SLM-SQL: An Exploration of Small Language Models for Text-to-SQL
### Important Links
📄 [Arxiv Paper](https://arxiv.org/abs/2507.22478) |
🤗 [HuggingFace Collection](https://huggingface.co/collections/cycloneboy/slm-sql-688b02f99f958d7a417658dc) |
🤖 [ModelScope Collection](https://modelscope.cn/collections/SLM-SQL-624bb6a60e9643) |
💻 [GitHub Repository](https://github.com/CycloneBoy/slm_sql)
## News
+ `July 31, 2025`: Uploaded models to ModelScope and Hugging Face.
+ `July 30, 2025`: Published the paper on arXiv.
## Introduction
> Large language models (LLMs) have demonstrated strong performance in translating natural language questions into SQL
> queries (Text-to-SQL). In contrast, small language models (SLMs) ranging from 0.5B to 1.5B parameters currently
> underperform on Text-to-SQL tasks due to their limited logical reasoning capabilities. However, SLMs offer inherent
> advantages in inference speed and suitability for edge deployment. To explore their potential in Text-to-SQL
> applications, we leverage recent advancements in post-training techniques. Specifically, we used the open-source
> SynSQL-2.5M dataset to construct two derived datasets: SynSQL-Think-916K for SQL generation and SynSQL-Merge-Think-310K
> for SQL merge revision. We then applied supervised fine-tuning and reinforcement learning-based post-training to the
> SLM, followed by inference using a corrective self-consistency approach. Experimental results validate the
> effectiveness and generalizability of our method, SLM-SQL. On the BIRD development set, the five evaluated models
> achieved an average improvement of 31.4 points. Notably, the 0.5B model reached 56.87% execution accuracy (EX), while
> the 1.5B model achieved 67.08% EX. We will release our dataset, model, and code to GitHub: https://github.com/CycloneBoy/slm_sql.
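The corrective self-consistency inference samples several SQL candidates, groups them by execution result, and resolves disagreements with the merge-revision model. The released code in the GitHub repository is authoritative; the snippet below is only a minimal sketch of the execution-based voting step, assuming SQLite databases (the function names here are illustrative, not the project's API):

```python
import sqlite3
from collections import defaultdict

def run_query(db_path, sql):
    """Execute a candidate query; return its rows as a hashable set, or None on error."""
    try:
        with sqlite3.connect(db_path) as conn:
            return frozenset(conn.execute(sql).fetchall())
    except sqlite3.Error:
        return None

def vote_by_execution(db_path, candidates):
    """Group sampled SQL candidates by execution result and return one
    representative of the largest group (majority vote). In the corrective
    variant, the top two groups would be handed to the merge-revision model
    when they disagree, instead of simply keeping the majority."""
    groups = defaultdict(list)
    for sql in candidates:
        result = run_query(db_path, sql)
        if result is not None:  # discard candidates that fail to execute
            groups[result].append(sql)
    if not groups:
        return None
    return max(groups.values(), key=len)[0]
```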
### Framework
<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_framework.png" height="500" alt="slmsql_framework">
### Main Results
<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_bird_result.png" height="500" alt="slm_sql_result">
<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_bird_main.png" height="500" alt="slmsql_bird_main">
<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_spider_main.png" height="500" alt="slmsql_spider_main">
Performance comparison of different Text-to-SQL methods on the BIRD dev and test sets.
<img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_ablation_study.png" height="300" alt="slmsql_ablation_study">
## Usage
This model can be loaded and used directly with the Hugging Face `transformers` library. Below is a basic example of Text-to-SQL generation.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model (any model from the table below works the same way)
model_name = "cycloneboy/SLM-SQL-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16)

# Example natural-language question. For real Text-to-SQL use you should also
# provide the database schema in the prompt (see the example below).
prompt = "Give me the SQL query for customers who placed orders in New York."

# The base models are chat models, so apply the chat template:
messages = [
    {"role": "user", "content": prompt}
]
chat_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize the input and move it to the model's device
input_ids = tokenizer(chat_input, return_tensors="pt").input_ids.to(model.device)

# Generate the SQL query; adjust max_new_tokens, do_sample, temperature, top_p, etc. as needed
generated_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping special tokens
generated_text = tokenizer.decode(generated_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(generated_text)
```
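Text-to-SQL models generally expect the database schema in the prompt. The exact prompt template used during training is documented in the GitHub repository; the layout below is only an illustrative assumption, reusing the `tokenizer` and `model` loaded above:

```python
# Hypothetical schema layout -- check the GitHub repository for the exact
# prompt template the models were trained with.
schema = """CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER REFERENCES customers(id), city TEXT);"""

question = "Which customers placed orders in New York?"
prompt = f"Database schema:\n{schema}\n\nQuestion: {question}\nAnswer with a single SQL query."

messages = [{"role": "user", "content": prompt}]
chat_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(chat_input, return_tensors="pt").input_ids.to(model.device)

generated_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(generated_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```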
## Model
| **Model** | Base Model | Train Method | Modelscope | HuggingFace |
|---|---|---|---|---|
| SLM-SQL-Base-0.5B | Qwen2.5-Coder-0.5B-Instruct | SFT | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-0.5B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-0.5B) |
| SLM-SQL-0.5B | Qwen2.5-Coder-0.5B-Instruct | SFT + GRPO | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-0.5B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-0.5B) |
| CscSQL-Merge-Qwen2.5-Coder-0.5B-Instruct | Qwen2.5-Coder-0.5B-Instruct | SFT + GRPO | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-0.5B-Instruct) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-0.5B-Instruct) |
| SLM-SQL-Base-1.5B | Qwen2.5-Coder-1.5B-Instruct | SFT | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-1.5B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-1.5B) |
| SLM-SQL-1.5B | Qwen2.5-Coder-1.5B-Instruct | SFT + GRPO | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-1.5B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-1.5B) |
| CscSQL-Merge-Qwen2.5-Coder-1.5B-Instruct | Qwen2.5-Coder-1.5B-Instruct | SFT + GRPO | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-1.5B-Instruct) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-1.5B-Instruct) |
| SLM-SQL-Base-0.6B | Qwen3-0.6B | SFT | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-0.6B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-0.6B) |
| SLM-SQL-0.6B | Qwen3-0.6B | SFT + GRPO | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-0.6B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-0.6B) |
| SLM-SQL-Base-1.3B | deepseek-coder-1.3b-instruct | SFT | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-1.3B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-1.3B) |
| SLM-SQL-1.3B | deepseek-coder-1.3b-instruct | SFT + GRPO | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-1.3B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-1.3B) |
| SLM-SQL-Base-1B | Llama-3.2-1B-Instruct | SFT | [🤖 Modelscope](https://modelscope.cn/models/cycloneboy/SLM-SQL-Base-1B) | [🤗 HuggingFace](https://huggingface.co/cycloneboy/SLM-SQL-Base-1B) |
## Dataset
| **Dataset** | Modelscope | HuggingFace |
|---|---|---|
| SynsQL-Think-916k | [🤖 Modelscope](https://modelscope.cn/datasets/cycloneboy/SynsQL-Think-916k) | [🤗 HuggingFace](https://huggingface.co/datasets/cycloneboy/SynsQL-Think-916k) |
| SynsQL-Merge-Think-310k | [🤖 Modelscope](https://modelscope.cn/datasets/cycloneboy/SynsQL-Merge-Think-310k) | [🤗 HuggingFace](https://huggingface.co/datasets/cycloneboy/SynsQL-Merge-Think-310k) |
| BIRD train and dev dataset | [🤖 Modelscope](https://modelscope.cn/datasets/cycloneboy/bird_train) | [🤗 HuggingFace](https://huggingface.co/datasets/cycloneboy/bird_train) |
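The datasets can be pulled with the Hugging Face `datasets` library. A minimal sketch, assuming the data is published with a standard `train` split (check each dataset card for the actual splits and fields):

```python
from datasets import load_dataset

# Assumes a "train" split exists; inspect the dataset card for the real layout.
ds = load_dataset("cycloneboy/SynsQL-Think-916k", split="train")
print(ds[0])
```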
## TODO
- [ ] Release inference code
- [ ] Upload models
- [ ] Release training code
- [ ] Fix bugs
- [ ] Update documentation
## Thanks to the following projects
- [csc_sql](https://github.com/CycloneBoy/csc_sql)
- [open-r1](https://github.com/huggingface/open-r1)
- [OmniSQL](https://github.com/RUCKBReasoning/OmniSQL)
## Citation
```bibtex
@misc{sheng2025slmsqlexplorationsmalllanguage,
title={SLM-SQL: An Exploration of Small Language Models for Text-to-SQL},
author={Lei Sheng and Shuai-Shuai Xu},
year={2025},
eprint={2507.22478},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.22478},
}
@misc{sheng2025cscsqlcorrectiveselfconsistencytexttosql,
title={CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning},
author={Lei Sheng and Shuai-Shuai Xu},
year={2025},
eprint={2505.13271},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.13271},
}
```