Casual-Autopsy committed f7f73c5 (verified) · 1 parent: 8b9a0b6

Upload folder using huggingface_hub
Github_medium/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cab46951d0cbb48b6c6bce34ce5f76e6fa5d0f0042cf244c844c29e734db691d
size 81477
Github_medium/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:172c18081fcc85f22d98f518137603ea898dd0565db49b170a689cf2d752d5ad
size 4201
Github_medium/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d940f385d9234fe5e1cbf10a499884774e0f8a4f87527d34bdf20bd32c3c9bdc
size 11431
README.md ADDED
@@ -0,0 +1,170 @@
---
pretty_name: J
dataset_info:
- config_name: Github_medium
  features:
  - name: json_schema
    dtype: string
  - name: unique_id
    dtype: string
  splits:
  - name: train
    num_examples: 1
  - name: val
    num_examples: 1
  - name: test
    num_examples: 100
configs:
- config_name: Github_medium
  data_files:
  - split: train
    path: Github_medium/train-*
  - split: val
    path: Github_medium/val-*
  - split: test
    path: Github_medium/test-*
license: mit
task_categories:
- text-generation
---

This is a pruned eval dataset from [epfl-dlab/JSONSchemaBench](https://huggingface.co/datasets/epfl-dlab/JSONSchemaBench) for personal debugging purposes.

Below is the original model card.

***
# JSONSchemaBench

[![Paper](https://img.shields.io/badge/Paper-arXiv-blue)](https://arxiv.org/abs/2501.10868)
[![GitHub](https://img.shields.io/badge/Code-GitHub-blue)](https://github.com/guidance-ai/jsonschemabench)

JSONSchemaBench is a benchmark of **real-world JSON schemas** designed to evaluate **structured output generation** for Large Language Models (LLMs). It contains approximately **10,000 JSON schemas**, capturing diverse constraints and complexities.

```python
import datasets
from datasets import load_dataset

def main():
    # Inspect the available subsets of the dataset
    all_subsets = datasets.get_dataset_config_names("epfl-dlab/JSONSchemaBench")
    print("Available subsets:", all_subsets)
    # Example output: ['Github_easy', 'Github_hard', 'Github_medium', 'Github_trivial', 'Github_ultra', 'Glaiveai2K', 'JsonSchemaStore', 'Kubernetes', 'Snowplow', 'WashingtonPost', 'default']

    # Access a specific subset of the dataset
    subset_name = "Github_easy"
    github_easy = load_dataset("epfl-dlab/JSONSchemaBench", subset_name)
    print(f"Loaded subset '{subset_name}':", github_easy)

    # Load the entire dataset as a whole
    entire_dataset = load_dataset("epfl-dlab/JSONSchemaBench", "default")
    print("Loaded entire dataset:", entire_dataset)

if __name__ == "__main__":
    main()
```

## Update (March 31st, 2025)

To improve inference efficiency and streamline data collation, we decided to drop a small number of exceptionally long samples from the dataset.

Token lengths are measured with the `meta-llama/Llama-3.2-1B-instruct` tokenizer, and the filtering criteria are as follows:
- Github_easy: samples longer than 1024 tokens (5 of 582 removed)
- Github_medium: samples longer than 2048 tokens (7 of 593 removed)
- Github_hard: samples longer than 8192 tokens (4 of 372 removed)
- All other subsets are untouched

Since the number of discarded samples is minimal, this change should affect results by at most 1%.
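The filtering step above amounts to a length predicate over rows. Below is a minimal sketch of that idea; the whitespace "tokenizer" and the example rows are purely illustrative stand-ins (the actual counts come from the `meta-llama/Llama-3.2-1B-instruct` tokenizer):

```python
def filter_by_token_length(rows, count_tokens, max_tokens):
    """Keep only rows whose json_schema fits within max_tokens."""
    return [r for r in rows if count_tokens(r["json_schema"]) <= max_tokens]

def count_tokens(text):
    # Hypothetical counter: whitespace tokens instead of subword tokens.
    return len(text.split())

rows = [
    {"json_schema": "a " * 10, "unique_id": "short-schema"},   # 10 tokens
    {"json_schema": "a " * 3000, "unique_id": "long-schema"},  # 3000 tokens
]
kept = filter_by_token_length(rows, count_tokens, max_tokens=2048)
print([r["unique_id"] for r in kept])  # ['short-schema']
```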

## ⚠️ Important Update (March 10th, 2025)

We have restructured the dataset to include train/val/test splits. If you downloaded the dataset before this date, you might encounter errors like `KeyError: 'Github_easy'`.

To fix this issue, please follow one of the options below:

1. Update how subsets are accessed.
   If you previously used:

   ```python
   from datasets import load_dataset, DatasetDict

   subset: DatasetDict = load_dataset("epfl-dlab/JSONSchemaBench")
   subset["Github_easy"]
   ```

   you can update it to:

   ```python
   from datasets import load_dataset, concatenate_datasets, Dataset, DatasetDict

   subset: DatasetDict = load_dataset("epfl-dlab/JSONSchemaBench", name="Github_easy")
   subset: Dataset = concatenate_datasets([subset["train"], subset["val"], subset["test"]])
   ```

2. Load the dataset in the old structure.
   If you need the previous structure, you can pin a specific revision:

   ```python
   dataset = load_dataset("epfl-dlab/JSONSchemaBench", revision="e2ee5fdba65657c60d3a24b321172eb7141f8d73")
   ```

We apologize for the inconvenience and appreciate your understanding! 😊

## 📌 Dataset Overview
- **Purpose:** Evaluate the **efficiency** and **coverage** of structured output generation.
- **Sources:** GitHub, Kubernetes, API specifications, curated collections.
- **Schemas:** Categorized based on complexity and domain.

### 📊 Dataset Breakdown
| Dataset         | Category            | Count |
| --------------- | ------------------- | ----- |
| GlaiveAI-2K     | Function Call       | 1707  |
| Github-Trivial  | Misc                | 444   |
| Github-Easy     | Misc                | 1943  |
| Snowplow        | Operational API     | 403   |
| Github-Medium   | Misc                | 1976  |
| Kubernetes      | Kubernetes API      | 1064  |
| Washington Post | Resource Access API | 125   |
| Github-Hard     | Misc                | 1240  |
| JSONSchemaStore | Misc                | 492   |
| Github-Ultra    | Misc                | 164   |
| **Total**       |                     | 9558  |

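As a quick arithmetic check, the per-subset counts in the breakdown table do sum to the stated total:

```python
# Per-subset counts as listed in the breakdown table.
counts = {
    "GlaiveAI-2K": 1707,
    "Github-Trivial": 444,
    "Github-Easy": 1943,
    "Snowplow": 403,
    "Github-Medium": 1976,
    "Kubernetes": 1064,
    "Washington Post": 125,
    "Github-Hard": 1240,
    "JSONSchemaStore": 492,
    "Github-Ultra": 164,
}
total = sum(counts.values())
print(total)  # 9558
```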
## 📥 Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("epfl-dlab/JSONSchemaBench")
print(dataset)
```

## 🔍 Data Structure
Each dataset split contains:
- `"json_schema"`: The schema definition.
- `"unique_id"`: A unique identifier for the schema.
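Per the dataset features, `json_schema` has dtype `string`, so a row's schema must be parsed before it can be used for validation or constrained decoding. A small sketch with a hypothetical record shaped like a row of this dataset:

```python
import json

# Hypothetical record shaped like a row of this dataset: the schema is
# stored as a JSON string, not a nested dict.
record = {
    "json_schema": '{"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]}',
    "unique_id": "example-0001",
}

# Parse the schema string into a usable dict.
schema = json.loads(record["json_schema"])
print(schema["type"])      # object
print(schema["required"])  # ['name']
```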

🚀 **For more details, check out the [paper](https://arxiv.org/abs/2501.10868).**

## 📚 Citation
```bibtex
@misc{geng2025jsonschemabench,
      title={Generating Structured Outputs from Language Models: Benchmark and Studies},
      author={Saibo Geng et al.},
      year={2025},
      eprint={2501.10868},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.10868}
}
```

## License

This dataset is provided under the [MIT License](https://opensource.org/licenses/MIT). Please ensure that you comply with the license terms when using or distributing this dataset.

## Acknowledgements

We would like to thank the contributors and maintainers of the JSON schema projects and the open-source community for their invaluable work and support.