---
license: cc-by-sa-4.0
language:
- en
tags:
- text-to-sql
- bird
- spider
- finer-sql
- training-data
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: bird_train_no_gen_table.tar.gz
---
# FINER-SQL — Training Resources Bundle
Convenience bundle of all the data assets needed to **train and evaluate**
[FINER-SQL](https://github.com/thanhdath/finer-sql) on BIRD-bench. Companion to
the [`thanhdath/FINER-SQL-3B-BIRD`](https://huggingface.co/thanhdath/FINER-SQL-3B-BIRD)
and [`thanhdath/FINER-SQL-3B-BIRD-no-gen`](https://huggingface.co/thanhdath/FINER-SQL-3B-BIRD-no-gen)
model cards.
> ⚠️ The training pipeline — single-GPU continual GRPO from
> `FINER-SQL-3B-BIRD` to a no-gen specialist — is documented in
> [`TRAIN_3B_BIRD_NO_GEN.md`](https://github.com/thanhdath/finer-sql/blob/dev/TRAIN_3B_BIRD_NO_GEN.md).
> This dataset gives you everything in §4 of that guide in one place.
## Files
| File | Size (compressed) | Size (extracted) | What it is |
|---|---|---|---|
| `bird_dev.tar.gz` | ~1.0 GB | ~3.5 GB | BIRD dev release: `dev_databases/`, `dev_gold.sql`, `dev.json`. Required by the official BIRD evaluator (`evaluation_bird_ex.py`) and by the SQL execution sandbox. |
| `bird_train.tar.gz` | ~10 GB | ~40 GB | BIRD train databases (`train_databases/`). Required for the GRPO reward — the trainer executes both candidate and gold SQLs against these SQLite databases. |
| `bird_train_no_gen_table.tar.gz` | 3.4 MB | 60 MB | HuggingFace `Dataset` arrow file with **9,428 BIRD train prompts in vanilla / no-gen-table format** (top-30 GRAST columns + raw schema, no LLM-generated meanings). The training set used for the no-gen specialist. |
| `gt_rows_cache.pkl.gz` | 17 MB | 76 MB | Pickled `{(dataset, db_id, gold_sql): rows}` cache of executed gold SQLs for both BIRD train and dev. Speeds up the first 1–2 epochs of GRPO reward computation by 5–10× (no need to re-execute every gold). |
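Once decompressed, the gold-rows cache can be consumed with nothing but the standard library. A minimal sketch of the `{(dataset, db_id, gold_sql): rows}` layout described above (the key values and file location below are invented for illustration, not taken from the repo):

```python
import gzip
import os
import pickle
import tempfile

# Build a toy cache in the {(dataset, db_id, gold_sql): rows} shape
# described above. Key values here are illustrative only.
cache = {
    ("bird_train", "california_schools",
     "SELECT COUNT(*) FROM schools"): [(42,)],
}

path = os.path.join(tempfile.mkdtemp(), "gt_rows_cache.pkl.gz")
with gzip.open(path, "wb") as f:
    pickle.dump(cache, f)

# Reading it back mirrors what a reward function would do before falling
# back to executing the gold SQL against the SQLite database:
with gzip.open(path, "rb") as f:
    loaded = pickle.load(f)

rows = loaded[("bird_train", "california_schools",
               "SELECT COUNT(*) FROM schools")]
```

The bundled `gt_rows_cache.pkl.gz` reads the same way; the quick-download steps below simply `gunzip` it once so the trainer can load the plain pickle.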
## Quick download (everything)
```bash
# Bulk download
huggingface-cli download thanhdath/finer-sql-training-bundle \
--repo-type dataset \
--local-dir ~/finer-sql-data --local-dir-use-symlinks False
# Lay it out into the paths the training scripts expect
cd ~/finer-sql-data
mkdir -p ~/data/bird ~/data/grast-sql-data/data-train
tar xf bird_dev.tar.gz -C ~/data/bird/ # → dev/dev_databases, dev_gold.sql, dev.json
mkdir -p ~/data/bird/dev && mv ~/data/bird/dev_* ~/data/bird/dev.json ~/data/bird/dev/ 2>/dev/null || true  # dev_* alone would miss dev.json
mkdir -p ~/data/bird/train && tar xf bird_train.tar.gz -C ~/data/bird/train/
tar xf bird_train_no_gen_table.tar.gz -C ~/data/grast-sql-data/data-train/
gunzip -c gt_rows_cache.pkl.gz > ~/data/gt_rows_cache.pkl
```
After this, the canonical paths used by `train_bird_no_gen_table_v2.sh`,
`eval_final_3b_bird.sh`, and `reproduce.py` are populated:
```
~/data/bird/dev/dev_databases/ ← BIRD_DB_ROOT
~/data/bird/dev/dev_gold.sql ← BIRD_GOLD
~/data/bird/dev/dev.json ← BIRD_DIFF
~/data/bird/train/train_databases/ ← used by db_execution/api.py
~/data/grast-sql-data/data-train/grpo_sql_writer_bird_train_no_gen_table/
~/data/gt_rows_cache.pkl
```
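Before launching training it can be handy to sanity-check that the layout above is complete. A small sketch (the `missing_paths` helper is hypothetical, not part of the repo):

```python
import os

# Canonical paths from the listing above, relative to $HOME.
EXPECTED = [
    "data/bird/dev/dev_databases",
    "data/bird/dev/dev_gold.sql",
    "data/bird/dev/dev.json",
    "data/bird/train/train_databases",
    "data/grast-sql-data/data-train",
    "data/gt_rows_cache.pkl",
]

def missing_paths(home=None):
    """Return the expected paths that do not yet exist under `home`."""
    home = home or os.path.expanduser("~")
    return [p for p in EXPECTED if not os.path.exists(os.path.join(home, p))]
```

An empty return means the quick-download steps populated everything `train_bird_no_gen_table_v2.sh` and `eval_final_3b_bird.sh` look for.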
## Selective download (just what you need)
```python
import os
from huggingface_hub import hf_hub_download

# hf_hub_download does not expand "~", so expand it explicitly
LOCAL_DIR = os.path.expanduser("~/finer-sql-data")

# Only the no-gen training arrow (60 MB extracted) — for re-running GRPO
hf_hub_download("thanhdath/finer-sql-training-bundle",
                "bird_train_no_gen_table.tar.gz", repo_type="dataset",
                local_dir=LOCAL_DIR)
# Only the GT cache (76 MB extracted) — speeds up reward calc
hf_hub_download("thanhdath/finer-sql-training-bundle",
                "gt_rows_cache.pkl.gz", repo_type="dataset",
                local_dir=LOCAL_DIR)
# Only the BIRD dev (3.5 GB extracted) — for evaluation
hf_hub_download("thanhdath/finer-sql-training-bundle",
                "bird_dev.tar.gz", repo_type="dataset",
                local_dir=LOCAL_DIR)
```
## Provenance
- **`bird_dev.tar.gz`** and **`bird_train.tar.gz`** are repackaged from the
public [BIRD-bench](https://bird-bench.github.io/) dev/train releases. The
archives are byte-identical to extracting the upstream zips. Original license
applies.
- **`bird_train_no_gen_table.tar.gz`** is generated by the [GRAST-SQL](https://github.com/thanhdath/grast-sql)
  schema-linker pipeline on top of the BIRD train split. The `messages`
  column holds the chat-formatted prompt; `groundtruth_sqls` carries the
  (possibly multiple) acceptable gold SQLs per question.
- **`gt_rows_cache.pkl.gz`** is built from BIRD train + dev gold SQLs by
[`build_gt_cache.py`](https://github.com/thanhdath/finer-sql/blob/dev/build_gt_cache.py)
(no human labour beyond the upstream gold SQLs).
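The per-row shape implied by the second bullet can be sketched as follows. This is a hedged illustration only: the field values are invented, and the real prompts additionally embed the top-30 GRAST columns plus the raw schema.

```python
# Illustrative record shape for bird_train_no_gen_table, inferred from the
# provenance notes above. Values are invented for illustration.
record = {
    "messages": [
        {"role": "system", "content": "You write SQLite SQL."},
        {"role": "user", "content": "Schema: schools(name, county)\n"
                                    "Question: How many schools are there?"},
    ],
    # Multiple acceptable golds per question: a candidate earns the
    # execution reward if its rows match the rows of ANY of these.
    "groundtruth_sqls": [
        "SELECT COUNT(*) FROM schools",
        "SELECT COUNT(name) FROM schools",
    ],
}
```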
## Reproducing FINER-SQL with this bundle
```bash
git clone https://github.com/thanhdath/finer-sql.git && cd finer-sql
export BIRD_DB_ROOT=~/data/bird/dev/dev_databases/
export BIRD_GOLD=~/data/bird/dev/dev_gold.sql
export BIRD_DIFF=~/data/bird/dev/dev.json
# Stand up the SQL executor sandbox (point it at ~/data/bird/{train,dev})
cd db_execution && uvicorn api:app --host 0.0.0.0 --port 8001 --workers 8 &
cd ..
# Continual GRPO from the joint BIRD+Spider checkpoint → no-gen specialist
bash train_bird_no_gen_table_v2.sh
# Evaluate every saved checkpoint
for s in 20 40 60 80 100; do
bash eval_final_3b_bird.sh \
output/grpo_bird_3b_no_gen_table_v2/checkpoint-$s \
~/data/grast-sql-data/data-train/.../bird_dev_top30_prompts_v2_no_gen_table \
no_gen_step_$s 0
done
```
## Citation
```bibtex
@article{finer-sql-2026,
title = {FINER-SQL: Fine-grained reasoning rewards for small Text-to-SQL models},
author = {Thanh Dat and others},
year = {2026},
}
```
BIRD-bench:
```bibtex
@inproceedings{li2023bird,
title = {{Can LLM Already Serve as a Database Interface? A {BIG} Bench for Large-Scale Database Grounded Text-to-SQLs}},
author = {Li, Jinyang and Hui, Binyuan and Qu, Ge and Yang, Jiaxi and Li, Binhua and Li, Bowen and Wang, Bailin and Qin, Bowen and Cao, Ruiying and others},
booktitle = {NeurIPS},
year = {2023}
}
```