|
|
--- |
|
|
pretty_name: J |
|
|
dataset_info: |
|
|
- config_name: Github_medium |
|
|
features: |
|
|
- name: json_schema |
|
|
dtype: string |
|
|
- name: unique_id |
|
|
dtype: string |
|
|
splits: |
|
|
- name: train |
|
|
num_examples: 1 |
|
|
- name: val |
|
|
num_examples: 1 |
|
|
- name: test |
|
|
num_examples: 100 |
|
|
configs: |
|
|
- config_name: Github_medium |
|
|
data_files: |
|
|
- split: train |
|
|
path: Github_medium/train-* |
|
|
- split: val |
|
|
path: Github_medium/val-* |
|
|
- split: test |
|
|
path: Github_medium/test-* |
|
|
license: mit |
|
|
task_categories: |
|
|
- text-generation |
|
|
--- |
|
|
|
|
|
This is a pruned evaluation dataset derived from [epfl-dlab/JSONSchemaBench](https://huggingface.co/datasets/epfl-dlab/JSONSchemaBench), kept for personal debugging purposes.
|
|
|
|
|
Below is the original dataset card.
|
|
|
|
|
*** |
|
|
# JSONSchemaBench |
|
|
|
|
|
[Paper](https://arxiv.org/abs/2501.10868)


[Code](https://github.com/guidance-ai/jsonschemabench)
|
|
|
|
|
JSONSchemaBench is a benchmark of **real-world JSON schemas** designed to evaluate **structured output generation** for Large Language Models (LLMs). It contains approximately **10,000 JSON schemas**, capturing diverse constraints and complexities. |
|
|
|
|
|
|
|
|
```python
import datasets
from datasets import load_dataset


def main():
    # Inspect the available subsets of the dataset
    all_subsets = datasets.get_dataset_config_names("epfl-dlab/JSONSchemaBench")
    print("Available subsets:", all_subsets)
    # Example output: ['Github_easy', 'Github_hard', 'Github_medium', 'Github_trivial',
    # 'Github_ultra', 'Glaiveai2K', 'JsonSchemaStore', 'Kubernetes', 'Snowplow',
    # 'WashingtonPost', 'default']

    # Access a specific subset of the dataset
    subset_name = "Github_easy"
    github_easy = load_dataset("epfl-dlab/JSONSchemaBench", subset_name)
    print(f"Loaded subset '{subset_name}':", github_easy)

    # Load the entire dataset as a whole
    entire_dataset = load_dataset("epfl-dlab/JSONSchemaBench", "default")
    print("Loaded entire dataset:", entire_dataset)


if __name__ == "__main__":
    main()
```
|
|
|
|
|
## Update (March 31st, 2025) |
|
|
|
|
|
To improve inference efficiency and streamline data collation, we’ve decided to drop a small number of exceptionally long samples from the dataset. |
|
|
|
|
|
We’re using the `meta-llama/Llama-3.2-1B-instruct` tokenizer, and the filtering criteria are as follows: |
|
|
- Github_easy: Samples longer than 1024 tokens — 5 out of 582 removed |
|
|
- Github_medium: Samples longer than 2048 tokens — 7 out of 593 removed |
|
|
- Github_hard: Samples longer than 8192 tokens — 4 out of 372 removed |
|
|
- Other subsets are not touched |
|
|
|
|
|
Since the number of discarded samples is minimal, this change is expected to have at most a 1% impact on results. |
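The filtering described above can be sketched as follows. This is an illustrative reconstruction, not the maintainers' actual script: the thresholds mirror the list above, but the `token_len` helper is a hypothetical stand-in for counting tokens with the `meta-llama/Llama-3.2-1B-instruct` tokenizer (e.g. via `transformers.AutoTokenizer`), so the sketch runs without any downloads.

```python
# Illustrative sketch of the length-based filtering described above.
# NOTE: not the maintainers' script. A real run would measure length with
# the meta-llama/Llama-3.2-1B-instruct tokenizer; a whitespace split
# stands in here so the example is self-contained.

MAX_TOKENS = {
    "Github_easy": 1024,
    "Github_medium": 2048,
    "Github_hard": 8192,
}


def token_len(text):
    # Stand-in for len(tokenizer(text)["input_ids"]).
    return len(text.split())


def filter_subset(samples, subset_name):
    """Drop samples whose schema exceeds the subset's token budget."""
    limit = MAX_TOKENS.get(subset_name)
    if limit is None:  # other subsets are left untouched
        return samples
    return [s for s in samples if token_len(s["json_schema"]) <= limit]


samples = [
    {"unique_id": "a", "json_schema": "short schema"},
    {"unique_id": "b", "json_schema": "x " * 3000},  # far over any budget
]
kept = filter_subset(samples, "Github_easy")
print([s["unique_id"] for s in kept])
```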
|
|
|
|
|
|
|
|
## ⚠️ Important Update (March 10th, 2025) |
|
|
|
|
|
We have restructured the dataset to include train/val/test splits. If you downloaded the dataset before this date, you might encounter errors like `KeyError: 'Github_easy'`. |
|
|
|
|
|
To fix this issue, please follow one of the options below: |
|
|
|
|
|
1. Update How Subsets Are Accessed: |
|
|
If you previously used: |
|
|
|
|
|
```python
from datasets import load_dataset, concatenate_datasets, DatasetDict, Dataset

subset: DatasetDict = load_dataset("epfl-dlab/JSONSchemaBench")
subset["Github_easy"]
```
|
|
You can update it to: |
|
|
|
|
|
```python
from datasets import load_dataset, concatenate_datasets, DatasetDict, Dataset

subset: DatasetDict = load_dataset("epfl-dlab/JSONSchemaBench", name="Github_easy")
subset: Dataset = concatenate_datasets([subset["train"], subset["val"], subset["test"]])
```
|
|
|
|
|
2. Load the Dataset in the Old Structure: |
|
|
If you need the previous structure, you can use a specific revision: |
|
|
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("epfl-dlab/JSONSchemaBench", revision="e2ee5fdba65657c60d3a24b321172eb7141f8d73")
```
|
|
|
|
|
We apologize for the inconvenience and appreciate your understanding! 😊 |
|
|
|
|
|
## 📌 Dataset Overview |
|
|
- **Purpose:** Evaluate the **efficiency** and **coverage** of structured output generation. |
|
|
- **Sources:** GitHub, Kubernetes, API specifications, curated collections. |
|
|
- **Schemas:** Categorized based on complexity and domain. |
|
|
|
|
|
### 📊 Dataset Breakdown |
|
|
| Dataset         | Category            | Count |
| --------------- | ------------------- | ----- |
| GlaiveAI-2K     | Function Call       | 1707  |
| Github-Trivial  | Misc                | 444   |
| Github-Easy     | Misc                | 1943  |
| Snowplow        | Operational API     | 403   |
| Github-Medium   | Misc                | 1976  |
| Kubernetes      | Kubernetes API      | 1064  |
| Washington Post | Resource Access API | 125   |
| Github-Hard     | Misc                | 1240  |
| JSONSchemaStore | Misc                | 492   |
| **Total**       |                     | 9558  |
|
|
|
|
|
## 📥 Loading the Dataset |
|
|
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("epfl-dlab/JSONSchemaBench")
print(dataset)
```
|
|
|
|
|
## 🔍 Data Structure |
|
|
Each dataset split contains: |
|
|
- `"json_schema"`: The schema definition. |
|
|
- `"unique_id"`: A unique identifier for the schema. |
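As a sketch of how a record can be consumed: the `json_schema` field is a JSON string, so it can be parsed with the standard library before inspecting its constraints. The row below is a made-up example in the format described above; real rows come from `load_dataset`.

```python
import json

# A made-up row matching the documented fields; real rows come from
# load_dataset("epfl-dlab/JSONSchemaBench", ...).
row = {
    "unique_id": "example-0001",
    "json_schema": '{"type": "object", '
                   '"properties": {"name": {"type": "string"}}, '
                   '"required": ["name"]}',
}

# The schema ships as a string; parse it into a dict to inspect constraints.
schema = json.loads(row["json_schema"])
print(schema["type"])      # top-level type the generated output must have
print(schema["required"])  # fields a generated object must include
```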
|
|
|
|
|
|
|
|
🚀 **For more details, check out the [paper](https://arxiv.org/abs/2501.10868).** |
|
|
|
|
|
## 📚 Citation |
|
|
```bibtex
@misc{geng2025jsonschemabench,
      title={Generating Structured Outputs from Language Models: Benchmark and Studies},
      author={Saibo Geng et al.},
      year={2025},
      eprint={2501.10868},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.10868}
}
```
|
|
|
|
|
|
|
|
## License |
|
|
|
|
|
This dataset is provided under the [MIT License](https://opensource.org/licenses/MIT). Please ensure that you comply with the license terms when using or distributing this dataset. |
|
|
|
|
|
## Acknowledgements |
|
|
|
|
|
We would like to thank the contributors and maintainers of the JSON schema projects and the open-source community for their invaluable work and support. |
|
|
|