Commit 40e2738 by vincentkoc · verified · 1 Parent(s): c16b5fa

Update README.md

Files changed (1):
  1. README.md +51 -22
README.md CHANGED
@@ -2,7 +2,7 @@
  license: apache-2.0
  language:
  - en
- pretty_name: Tiny QA Evaluation Dataset
+ pretty_name: Tiny QA Benchmark (Original EN Core for TQB++)
  size_categories:
  - n<1K
  tags:
@@ -10,6 +10,7 @@ tags:
  - evaluation
  - benchmark
  - toy-dataset
+ - tqb++-core
  task_categories:
  - question-answering
  task_ids:
@@ -17,18 +18,26 @@ task_ids:
  - closed-book-qa
  ---

- # Tiny QA Evaluation Dataset
+ # Tiny QA Benchmark (Original English Core for TQB++)

- A very small, general-knowledge QA set (52 examples) for quick sanity checks, pipeline smoke-tests, and demoing LLM evaluation workflows. Question–answer pairs covering geography, history, math, science, literature, and more. Each example includes:
+ **This dataset (`vincentkoc/tiny_qa_benchmark`) is the original 52-item English question-answering set. It now serves as the immutable "gold standard" core for the expanded [Tiny QA Benchmark++ (TQB++)](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp) project.**
+
+ The TQB++ project builds on this core dataset with a synthetic generation toolkit, pre-built multilingual datasets, and a framework for rapid LLM smoke testing.
+
+ **For the full TQB++ toolkit, the latest research paper, multilingual datasets, and the synthetic generator, please visit:**
+ * **TQB++ Hugging Face Dataset Collection & Toolkit:** [vincentkoc/tiny_qa_benchmark_pp](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp)
+ * **TQB++ GitHub Repository (Code, Paper & Toolkit):** [vincentkoc/tiny_qa_benchmark_pp](https://github.com/vincentkoc/tiny_qa_benchmark_pp)
+
+ This original dataset (`vincentkoc/tiny_qa_benchmark`) contains 52 hand-crafted general-knowledge QA pairs covering geography, history, math, science, literature, and more. It remains ideal for quick sanity checks, pipeline smoke-tests, and as a foundational component of TQB++. Each example includes:

  - **text**: the question prompt
  - **label**: the “gold” answer
  - **metadata.context**: a one-sentence fact
  - **tags**: additional annotations (`category`, `difficulty`)

- It’s intentionally tiny (<100 KB, under 1 K examples) so you can iterate on data loading, evaluation scripts, or CI steps in under a second.
+ It’s intentionally tiny (<100 KB) so you can iterate on data loading, evaluation scripts, or CI steps in under a second using these specific 52 items.

- ## Supported Tasks and Formats
+ ## Supported Tasks and Formats (for this core dataset)

  - **Tasks**:
    - Extractive QA
@@ -37,7 +46,7 @@ It’s intentionally tiny (<100 KB, under 1 K examples) so you can iterate on da
  - **Splits**:
    - `train` (all 52 examples)

- ## Languages
+ ## Languages (for this core dataset)

  - English (`en`)

@@ -45,7 +54,7 @@ It’s intentionally tiny (<100 KB, under 1 K examples) so you can iterate on da

  ### Data Fields

- Each example in `data/train.json` has:
+ Each example in `data/train.json` (as loaded by `datasets`) has:

  | field | type | description |
  |---------------------|--------|----------------------------------------------|
@@ -81,43 +90,47 @@ Each example in `data/train.json` has:
  "category": "math",
  "difficulty": "easy"
  }
- },
+ }
+ ]
  ```
+ *(Note: The actual file on the Hub may be a `.jsonl` file where each line is a JSON object; `load_dataset` handles this.)*

  ## Data Splits

- Only one split:
+ Only one split for this core dataset:

- - **train**: 52 examples, used for development and quick evaluation.
+ - **train**: 52 examples, used for development, quick evaluation, and as the TQB++ core.

  ## Data Creation

  ### Curation Rationale

- “Tiny QA Eval” exists to:
+ The "Tiny QA Benchmark" (this 52-item set) was originally created to:

  1. Smoke-test QA pipelines (loading, preprocessing, evaluation).
  2. Demo Hugging Face Datasets integration in tutorials.
  3. Verify model–eval loops run without downloading large corpora.
+ 4. **Serve as the immutable "gold standard" English core for the [Tiny QA Benchmark++ (TQB++)](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp) project.**

  ### Source Data

  Hand-crafted by the dataset creator from well-known, public-domain facts.
- Was developed as a dataset for sample projects to demonstrate [Opik](https://github.com/comet-ml/opik).
+ It was initially developed as a dataset for sample projects to demonstrate [Opik](https://github.com/comet-ml/opik/) and now forms the foundational English core of TQB++.

  ### Annotations

- Self-annotated. Each `metadata.context` and `tags` field is manually created.
+ Self-annotated. Each `metadata.context` and `tags` field is manually created for these 52 items.

  ## Usage

- Load with:
+ Load this specific 52-item core dataset with:

  ```python
  from datasets import load_dataset

  ds = load_dataset("vincentkoc/tiny_qa_benchmark")
  print(ds["train"][0])
+ # Expected output:
  # {
  #   "text": "What is the capital of France?",
  #   "label": "Paris",
@@ -130,28 +143,44 @@ print(ds["train"][0])
  # }
  # }
  ```
+ For accessing the full TQB++ suite, including multilingual packs and the synthetic generator, refer to the [TQB++ Hugging Face Dataset Collection](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp).

- ## Considerations
+ ## Considerations for Use

- - **Not a benchmark**: Too few examples for statistical significance.
- - **Do not train**: Use only for smoke-tests or demos.
- - **No sensitive data**: All facts are public domain.
+ * **Immutable Core for TQB++:** This dataset is the stable, hand-curated English core of the TQB++ project. Its 52 items are not intended to change.
+ * **Not a Comprehensive Benchmark (on its own):** While well suited to quick checks, these 52 items are too few for statistically significant model ranking. For broader evaluation, use them in conjunction with the TQB++ synthetic generator and its multilingual packs at [vincentkoc/tiny_qa_benchmark_pp](https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp).
+ * **Do Not Train:** Primarily intended for evaluation, smoke-tests, or demos.
+ * **No Sensitive Data:** All facts are public domain.

  ## Licensing

- Apache-2.0. See [LICENSE](LICENSE) for details.
+ Apache-2.0. See the `LICENSE` file in the [TQB++ GitHub repository](https://github.com/vincentkoc/tiny_qa_benchmark_pp) for details, as this dataset is now part of that larger project.

  ## Citation

- If you use this dataset, please cite:
+ If you use this specific 52-item core English dataset, please cite it with the following BibTeX entry, updated to reflect its role:

  ```bibtex
- @misc{koctinyqabenchmark,
+ @misc{koctinyqabenchmark_original_core,
    author = { Vincent Koc },
- title = { tiny_qa_benchmark (Revision ff9143f) },
+ title = { Tiny QA Benchmark (Original 52-item English Core for TQB++) },
    year = 2025,
    url = { https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark },
    doi = { 10.57967/hf/5417 },
    publisher = { Hugging Face }
  }
+ ```
+
+ For the complete **Tiny QA Benchmark++ (TQB++)** project (this core set, the synthetic generator, multilingual packs, and the associated research paper), please refer to and cite the TQB++ project directly:
+
+ ```bibtex
+ @misc{koctinyqabenchmark_pp_dataset,
+   author = {Vincent Koc},
+   title = {Tiny QA Benchmark++ (TQB++) Datasets and Toolkit},
+   year = {2025},
+   publisher = {Hugging Face \& GitHub},
+   doi = {10.57967/hf/5531},
+   howpublished = {\url{https://huggingface.co/datasets/vincentkoc/tiny_qa_benchmark_pp}},
+   note = {See also: \url{https://github.com/vincentkoc/tiny_qa_benchmark_pp}}
+ }
  ```
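
The usage section in the updated card stops at loading and printing one example; the 52 items also support the quick exact-match smoke test the card describes. A minimal sketch, not part of the dataset tooling: the `exact_match` and `score` helpers and the inline two-item sample are illustrative stand-ins for the real records, and in practice you would iterate over `ds["train"]` with your model's predict function.

```python
def exact_match(prediction: str, gold: str) -> bool:
    """Case-insensitive, whitespace-trimmed string comparison."""
    return prediction.strip().lower() == gold.strip().lower()

def score(examples, predict):
    """Fraction of examples where predict(text) matches the gold label."""
    hits = sum(exact_match(predict(ex["text"]), ex["label"]) for ex in examples)
    return hits / len(examples)

# Inline sample mirroring the dataset schema; replace with ds["train"].
sample = [
    {"text": "What is the capital of France?", "label": "Paris"},
    {"text": "What is 2 + 2?", "label": "4"},
]

print(score(sample, lambda q: "Paris"))  # 0.5 with this toy predictor
```

Because the set is so small, the whole loop runs in well under a second, which is the point of using it as a CI smoke test rather than a ranking benchmark.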