---
license: cc-by-sa-4.0
task_categories:
- text2text-generation
- table-question-answering
language:
- en
tags:
- text-to-sql
- sql
- spider
- flan-t5
- seq2seq
- nlp
size_categories:
- 1K<n<10K
---
# SPIDER Text-to-SQL — Easy Access Version
A clean, HuggingFace-native version of the SPIDER Text-to-SQL benchmark. The original SPIDER dataset requires manually downloading a ZIP file from the Spider website; this version makes it instantly accessible via `load_dataset`.
## What's Included
Each row contains the question, gold SQL, the database identifier, and a pre-parsed compact schema string — everything needed to train or evaluate a Text-to-SQL model without any additional preprocessing.
| Column | Description |
|---|---|
| `db_id` | Database identifier (e.g. `"concert_singer"`) |
| `question` | Natural language question |
| `query` | Gold standard SQL answer |
| `db_schema` | Compact schema string: `"table: col (type), col (type) \| ..."` |
| `question_toks` | Tokenized question words (list of strings) |
## Splits

| Split | Source file | Examples |
|---|---|---|
| train | `train_spider.json` | 7,000 |
| test | `train_others.json` | 1,034 |
Note: Following standard SPIDER practice, `train_others.json` is used as the held-out evaluation set. The original SPIDER test set is withheld for the official leaderboard.
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("YOUR_USERNAME/spider-text2sql")
train = dataset["train"]
test = dataset["test"]

# Access fields
example = train[0]
print(example["question"])   # "How many heads of the departments are older than 56?"
print(example["query"])      # "SELECT count(*) FROM head WHERE age > 56"
print(example["db_id"])      # "department_management"
print(example["db_schema"])  # "department: Department_ID (number), ... | head: ..."
```
## Schema Format

The `db_schema` column uses a compact linear format widely used in the Text-to-SQL literature:

```
table1: col1 (type), col2 (type), col3 (type) | table2: col4 (type), col5 (type)
```
This format:

- Is human-readable and model-friendly
- Fits within typical 512-token input limits for most seq2seq models
- Is derived directly from the official SPIDER `tables.json`
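For reference, the linearization can be reproduced from `tables.json` entries. A minimal sketch, assuming the standard SPIDER fields `table_names_original`, `column_names_original` (pairs of `[table_index, column_name]`, with index `-1` marking the `*` pseudo-column), and `column_types`:

```python
def linearize_schema(entry):
    """Build the compact schema string from one SPIDER tables.json entry.

    Sketch only: assumes the standard SPIDER fields `table_names_original`,
    `column_names_original`, and `column_types`.
    """
    tables = {name: [] for name in entry["table_names_original"]}
    for (tbl_idx, col), col_type in zip(
        entry["column_names_original"], entry["column_types"]
    ):
        if tbl_idx == -1:  # skip the "*" pseudo-column
            continue
        tables[entry["table_names_original"][tbl_idx]].append(f"{col} ({col_type})")
    return " | ".join(f"{tbl}: {', '.join(cols)}" for tbl, cols in tables.items())


# Tiny illustrative entry (not real SPIDER data)
entry = {
    "table_names_original": ["head"],
    "column_names_original": [[-1, "*"], [0, "head_ID"], [0, "age"]],
    "column_types": ["text", "number", "number"],
}
print(linearize_schema(entry))  # head: head_ID (number), age (number)
```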
## Fine-tuning Example (Flan-T5 prompt format)
This dataset pairs naturally with prompt-based fine-tuning:
```python
def build_prompt(example):
    return (
        f"Translate to SQL: {example['question']}\n"
        f"Database schema:\n{example['db_schema']}"
    )

# example["query"] is the target output
```
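For a quick sanity check of model outputs against `example["query"]`, a normalized exact-match comparison is a common first metric (the official SPIDER evaluator is stricter, comparing parsed SQL components). A minimal sketch:

```python
import re


def normalize_sql(sql):
    """Lowercase, collapse whitespace, and drop a trailing semicolon so
    trivially different but equivalent strings compare equal."""
    sql = sql.strip().rstrip(";")
    return re.sub(r"\s+", " ", sql).lower()


def exact_match(pred, gold):
    return normalize_sql(pred) == normalize_sql(gold)


print(exact_match(
    "SELECT  count(*) FROM head WHERE age > 56;",
    "SELECT count(*) FROM head WHERE age > 56",
))  # True
```

This string-level metric can under-count correct predictions (e.g. reordered `SELECT` columns); use the official evaluation script for reportable numbers.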
## Differences from the Original SPIDER

| | Original SPIDER | This Dataset |
|---|---|---|
| Download method | Manual ZIP from website | `load_dataset(...)` ✅ |
| Schema included | Separate `tables.json` | ✅ Pre-joined per example |
| Complex `sql` dict | ✅ Included | ❌ Omitted (noisy for most use cases) |
| `query_toks_no_value` | ✅ Included | ❌ Omitted |
| Ready to train | Requires preprocessing | ✅ Yes |
## Source & License
- Original dataset: SPIDER (Yu et al., 2018)
- License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0)
- This derived dataset is released under the same license.
## Citation

```bibtex
@inproceedings{yu-etal-2018-spider,
    title = "{S}pider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-{SQL} Task",
    author = "Yu, Tao and others",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    year = "2018",
    url = "https://aclanthology.org/D18-1425",
}
```