---
pretty_name: J
dataset_info:
- config_name: Github_medium
features:
- name: json_schema
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_examples: 1
- name: val
num_examples: 1
- name: test
num_examples: 100
configs:
- config_name: Github_medium
data_files:
- split: train
path: Github_medium/train-*
- split: val
path: Github_medium/val-*
- split: test
path: Github_medium/test-*
license: mit
task_categories:
- text-generation
---
This is a pruned eval dataset from [epfl-dlab/JSONSchemaBench](https://huggingface.co/datasets/epfl-dlab/JSONSchemaBench) for personal debugging purposes.
Below is the original model card.
***
# JSONSchemaBench
[Paper](https://arxiv.org/abs/2501.10868)
[Code](https://github.com/guidance-ai/jsonschemabench)
JSONSchemaBench is a benchmark of **real-world JSON schemas** designed to evaluate **structured output generation** for Large Language Models (LLMs). It contains approximately **10,000 JSON schemas**, capturing diverse constraints and complexities.
```python
import datasets
from datasets import load_dataset


def main():
    # Inspect the available subsets of the dataset
    all_subsets = datasets.get_dataset_config_names("epfl-dlab/JSONSchemaBench")
    print("Available subsets:", all_subsets)
    # Example output: ['Github_easy', 'Github_hard', 'Github_medium', 'Github_trivial', 'Github_ultra', 'Glaiveai2K', 'JsonSchemaStore', 'Kubernetes', 'Snowplow', 'WashingtonPost', 'default']

    # Access a specific subset of the dataset
    subset_name = "Github_easy"
    github_easy = load_dataset("epfl-dlab/JSONSchemaBench", subset_name)
    print(f"Loaded subset '{subset_name}':", github_easy)

    # Load the entire dataset as a whole
    entire_dataset = load_dataset("epfl-dlab/JSONSchemaBench", "default")
    print("Loaded entire dataset:", entire_dataset)


if __name__ == "__main__":
    main()
```
## Update (March 31st, 2025)
To improve inference efficiency and streamline data collation, we’ve decided to drop a small number of exceptionally long samples from the dataset.
We’re using the `meta-llama/Llama-3.2-1B-instruct` tokenizer, and the filtering criteria are as follows:
- Github_easy: samples longer than 1024 tokens (5 of 582 removed)
- Github_medium: samples longer than 2048 tokens (7 of 593 removed)
- Github_hard: samples longer than 8192 tokens (4 of 372 removed)
- Other subsets are unchanged
Since the number of discarded samples is minimal, this change is expected to have at most a 1% impact on results.
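As a rough sketch, the filtering amounts to dropping any sample whose tokenized schema exceeds the subset's threshold. The snippet below illustrates the idea with hypothetical precomputed token counts; the actual pipeline tokenizes each schema with the `meta-llama/Llama-3.2-1B-instruct` tokenizer.

```python
# Illustrative only: stand-in records with precomputed token counts.
# The real filtering tokenizes each "json_schema" string and applies the
# per-subset thresholds listed above.
samples = [
    {"unique_id": "a", "n_tokens": 900},
    {"unique_id": "b", "n_tokens": 2500},
    {"unique_id": "c", "n_tokens": 1800},
]
max_tokens = 2048  # threshold used for Github_medium

kept = [s for s in samples if s["n_tokens"] <= max_tokens]
print([s["unique_id"] for s in kept])  # -> ['a', 'c']
```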
## ⚠️ Important Update (March 10th, 2025)
We have restructured the dataset to include train/val/test splits. If you downloaded the dataset before this date, you might encounter errors like `KeyError: 'Github_easy'`.
To fix this issue, please follow one of the options below:
1. Update How Subsets Are Accessed:
If you previously used:
```python
from datasets import load_dataset, concatenate_datasets, DatasetDict, Dataset
subset: DatasetDict = load_dataset("epfl-dlab/JSONSchemaBench")
subset["Github_easy"]
```
You can update it to:
```python
from datasets import load_dataset, concatenate_datasets, DatasetDict, Dataset
subset: DatasetDict = load_dataset("epfl-dlab/JSONSchemaBench", name="Github_easy")
subset: Dataset = concatenate_datasets([subset["train"], subset["val"], subset["test"]])
```
2. Load the Dataset in the Old Structure:
If you need the previous structure, you can use a specific revision:
```python
dataset = load_dataset("epfl-dlab/JSONSchemaBench", revision="e2ee5fdba65657c60d3a24b321172eb7141f8d73")
```
We apologize for the inconvenience and appreciate your understanding! 😊
## 📌 Dataset Overview
- **Purpose:** Evaluate the **efficiency** and **coverage** of structured output generation.
- **Sources:** GitHub, Kubernetes, API specifications, curated collections.
- **Schemas:** Categorized based on complexity and domain.
### 📊 Dataset Breakdown
| Dataset | Category | Count |
| --------------- | ------------------- | ----- |
| GlaiveAI-2K | Function Call | 1707 |
| Github-Trivial | Misc | 444 |
| Github-Easy | Misc | 1943 |
| Snowplow | Operational API | 403 |
| Github-Medium | Misc | 1976 |
| Kubernetes | Kubernetes API | 1064 |
| Washington Post | Resource Access API | 125 |
| Github-Hard | Misc | 1240 |
| JSONSchemaStore | Misc | 492 |
| Github-Ultra | Misc | 164 |
| **Total** | | 9558 |
## 📥 Loading the Dataset
```python
from datasets import load_dataset
dataset = load_dataset("epfl-dlab/JSONSchemaBench")
print(dataset)
```
## 🔍 Data Structure
Each dataset split contains:
- `"json_schema"`: The schema definition.
- `"unique_id"`: A unique identifier for the schema.
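Each `"json_schema"` field holds the schema as a JSON string, so it can be parsed with the standard library. The record below is hypothetical, shown only to illustrate the field layout; real records come from `load_dataset("epfl-dlab/JSONSchemaBench")`.

```python
import json

# Hypothetical record with the same two fields as the dataset.
record = {
    "json_schema": '{"type": "object", "properties": {"name": {"type": "string"}}}',
    "unique_id": "example-0001",
}

# Parse the schema string into a Python dict for inspection.
schema = json.loads(record["json_schema"])
print(schema["type"])  # -> object
```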
🚀 **For more details, check out the [paper](https://arxiv.org/abs/2501.10868).**
## 📚 Citation
```bibtex
@misc{geng2025jsonschemabench,
title={Generating Structured Outputs from Language Models: Benchmark and Studies},
author={Saibo Geng et al.},
year={2025},
eprint={2501.10868},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.10868}
}
```
## License
This dataset is provided under the [MIT License](https://opensource.org/licenses/MIT). Please ensure that you comply with the license terms when using or distributing this dataset.
## Acknowledgements
We would like to thank the contributors and maintainers of the JSON schema projects and the open-source community for their invaluable work and support.