---
datasets:
- llmsql-bench/llmsql-benchmark
tags:
- text-to-sql
- benchmark
- evaluation
license: mit
language:
- en
bibtex:
- >-
@article{pihulski2025llmsql, title={LLMSQL: Upgrading WikiSQL for the LLM Era
of Text-to-SQL}, author={Dzmitry Pihulski and Karol Charchut and Viktoria
Novogrodskaia and Jan Kocoń}, journal={arXiv preprint arXiv:2510.02350},
year={2025}, url={https://arxiv.org/abs/2510.02350} }
task_categories:
- question-answering
- text-generation
pretty_name: LLMSQL Benchmark
size_categories:
- 10K<n<100K
repository: https://github.com/LLMSQL/llmsql-benchmark
new_version: llmsql-bench/llmsql-2.0
---
# LLMSQL Benchmark
This benchmark is designed to evaluate text-to-SQL models. For usage instructions, see `https://github.com/LLMSQL/llmsql-benchmark`.

arXiv article: https://arxiv.org/abs/2510.02350
## Files
- `tables.jsonl` — Database table metadata
- `questions.jsonl` — All available questions
- `train_questions.jsonl`, `val_questions.jsonl`, `test_questions.jsonl` — Train/validation/test splits for fine-tuning; see `https://github.com/LLMSQL/llmsql-benchmark`
- `sqlite_tables.db` — SQLite database containing the tables from `tables.jsonl`, created by `create_db.sql`.
- `create_db.sql` — SQL script that creates the database `sqlite_tables.db`.
`test_output.jsonl` is **not included** in the dataset.
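As a minimal sketch of how these files fit together, the snippet below mimics the evaluation pattern: parse a `questions.jsonl`-style record and execute its SQL against a SQLite database. The table, column names, and record fields here are made up for illustration; in real use you would open `sqlite_tables.db` and read actual records from `questions.jsonl`.

```python
import json
import sqlite3

# Stand-in for sqlite_tables.db: an in-memory database with one toy table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (name TEXT, points INTEGER)")
conn.executemany("INSERT INTO players VALUES (?, ?)",
                 [("Alice", 30), ("Bob", 12)])

# One questions.jsonl-style record (hypothetical field names).
record = json.loads(
    '{"question": "Who scored more than 20 points?",'
    ' "sql": "SELECT name FROM players WHERE points > 20"}'
)

# Execute the query, as an evaluator would do with a model's predicted SQL.
rows = conn.execute(record["sql"]).fetchall()
print(rows)  # [('Alice',)]
```

The same pattern — execute gold and predicted SQL, then compare result sets — is a common way to score text-to-SQL outputs; see the repository linked above for the benchmark's own evaluation tooling.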
## Citation
If you use this benchmark, please cite:
```
@inproceedings{llmsql_bench,
title={LLMSQL: Upgrading WikiSQL for the LLM Era of Text-to-SQL},
author={Pihulski, Dzmitry and Charchut, Karol and Novogrodskaia, Viktoria and Koco{\'n}, Jan},
booktitle={2025 IEEE International Conference on Data Mining Workshops (ICDMW)},
year={2025},
organization={IEEE}
}
```