---
license: apache-2.0
language:
- en
pretty_name: Tiny QA Evaluation Dataset
size_categories:
- n<1K
tags:
- question-answering
- evaluation
- benchmark
- toy-dataset
task_categories:
- question-answering
task_ids:
- generative-qa
- extractive-qa
---

# Tiny QA Evaluation Dataset

A very small, general-knowledge QA set (16 examples) for quick sanity checks, pipeline smoke-tests, and demoing LLM evaluation workflows.

## Dataset Summary

This dataset contains 16 question–answer pairs covering geography, history, math, science, literature, and more. Each example includes:

- **text**: the question prompt
- **label**: the "gold" answer
- **metadata.context**: a one-sentence fact supporting the answer
- **tags**: additional annotations (`category`, `difficulty`)

It's intentionally tiny (16 examples, ≈1 KB) so you can iterate on data loading, evaluation scripts, or CI steps in under a second.

## Supported Tasks and Formats

- **Tasks**:
  - Extractive QA
  - Generative QA
- **Format**: JSON
- **Splits**:
  - `train` (all 16 examples)

## Languages

- English (`en`)

## Dataset Structure

### Data Fields

Each example in `data/train.json` has:

| field               | type   | description                                   |
|---------------------|--------|-----------------------------------------------|
| `text`              | string | The question prompt.                          |
| `label`             | string | The correct answer.                           |
| `metadata`          | object | Supporting information for the answer.        |
| `metadata.context`  | string | A one-sentence fact supporting the answer.    |
| `tags`              | object | Annotation tags.                              |
| `tags.category`     | string | Broad question category (e.g. `geography`).   |
| `tags.difficulty`   | string | Rough difficulty level (e.g. `easy`).         |

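As a quick check that a record conforms to the schema above, a few lines of Python suffice; this is an illustrative sketch (the `validate_record` helper is not shipped with the dataset):

```python
# Illustrative helper: check that a record matches the documented schema.
def validate_record(rec: dict) -> bool:
    return (
        isinstance(rec.get("text"), str)
        and isinstance(rec.get("label"), str)
        and isinstance(rec.get("metadata", {}).get("context"), str)
        and isinstance(rec.get("tags", {}).get("category"), str)
        and isinstance(rec.get("tags", {}).get("difficulty"), str)
    )

# Example record in the documented format.
example = {
    "text": "What is the capital of France?",
    "label": "Paris",
    "metadata": {"context": "France is a country in Europe. Its capital is Paris."},
    "tags": {"category": "geography", "difficulty": "easy"},
}
assert validate_record(example)
```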
## Data Example

```json
[
  {
    "text": "What is the capital of France?",
    "label": "Paris",
    "metadata": {
      "context": "France is a country in Europe. Its capital is Paris."
    },
    "tags": {
      "category": "geography",
      "difficulty": "easy"
    }
  },
  {
    "text": "What is 2 + 2?",
    "label": "4",
    "metadata": {
      "context": "Basic arithmetic: 2 + 2 equals 4."
    },
    "tags": {
      "category": "math",
      "difficulty": "easy"
    }
  }
]
```

## Data Splits

Only one split:

- **train**: 16 examples, used for development and quick evaluation.

## Data Creation

### Curation Rationale

"Tiny QA Eval" exists to:

1. Smoke-test QA pipelines (loading, preprocessing, evaluation).
2. Demo Hugging Face Datasets integration in tutorials.
3. Verify model–eval loops run without downloading large corpora.

### Source Data

Hand-crafted by the dataset creator from well-known, public-domain facts.

### Annotations

Self-annotated. Each `metadata.context` and `tags` field is manually created.

## Usage

Load with:

```python
from datasets import load_dataset

ds = load_dataset("vincentkoc/tiny_qa_benchmark")
print(ds["train"][0])
# {
#   "text": "What is the capital of France?",
#   "label": "Paris",
#   "metadata": {
#     "context": "France is a country in Europe. Its capital is Paris."
#   },
#   "tags": {
#     "category": "geography",
#     "difficulty": "easy"
#   }
# }
```

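A typical smoke-test over the split is a simple exact-match loop. The sketch below assumes you supply your own prediction function; `preds` here is a hard-coded placeholder, not model output:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching the gold label exactly
    (case- and whitespace-insensitive)."""
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return matches / len(references)

# Placeholder predictions; in practice these would come from your model,
# e.g. preds = [my_model_predict(ex["text"]) for ex in ds["train"]].
preds = ["Paris", "4"]
golds = ["Paris", "four"]
print(exact_match_accuracy(preds, golds))  # 0.5
```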
## Considerations

- **Not a benchmark**: Too few examples for statistically meaningful comparisons.
- **Do not train on it**: Use it only for smoke-tests and demos.
- **No sensitive data**: All facts are public-domain general knowledge.

## Licensing

Apache-2.0. See [LICENSE](LICENSE) for details.

## Citation

If you use this dataset, please cite:

```bibtex
@misc{tinyqaeval2025,
  title        = {Tiny QA Evaluation Dataset},
  author       = {Vincent Koc},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/vincentkoc/tinytest}},
  license      = {Apache-2.0}
}
```