---
dataset_info:
- config_name: nb
  features:
  - name: id
    dtype: string
  - name: question_stem
    dtype: string
  - name: choices
    struct:
    - name: label
      sequence: string
    - name: text
      sequence: string
  - name: answer
    dtype: string
  - name: fact
    dtype: string
  - name: curated
    dtype: bool
  splits:
  - name: train
    num_bytes: 691423
    num_examples: 2886
  - name: test
    num_bytes: 89887
    num_examples: 376
  download_size: 445496
  dataset_size: 781310
- config_name: nn
  features:
  - name: id
    dtype: string
  - name: question_stem
    dtype: string
  - name: choices
    struct:
    - name: label
      sequence: string
    - name: text
      sequence: string
  - name: answer
    dtype: string
  - name: fact
    dtype: string
  - name: curated
    dtype: bool
  splits:
  - name: train
    num_bytes: 43819
    num_examples: 163
  - name: test
    num_bytes: 23397
    num_examples: 90
  download_size: 50213
  dataset_size: 67216
configs:
- config_name: nb
  data_files:
  - split: train
    path: nb/train-*
  - split: test
    path: nb/test-*
- config_name: nn
  data_files:
  - split: train
    path: nn/train-*
  - split: test
    path: nn/test-*
license: mit
task_categories:
- question-answering
language:
- nb
- nn
pretty_name: NorOpenBookQA
size_categories:
- 1K<n<10K
---

# Dataset Card for NorOpenBookQA

## Dataset Details

### Dataset Description

NorOpenBookQA is a multiple-choice question answering (QA) dataset designed for zero- and few-shot evaluation of language models' world knowledge. NorOpenBookQA contains 3.5k examples across both written standards of Norwegian: Bokmål and Nynorsk (the minority variant). Each example consists of an elementary-level science question, four answer choices, and a factual statement that provides the evidence needed to determine the correct answer. Some questions are incomplete sentences, with the answer choices providing the correct continuation of the sentence.

NorOpenBookQA is part of the collection of Norwegian QA datasets, which also includes [NRK-Quiz-QA](https://huggingface.co/datasets/ltg/nrk_quiz_qa), [NorCommonSenseQA](https://huggingface.co/datasets/ltg/norcommonsenseqa), [NorTruthfulQA (Multiple Choice)](https://huggingface.co/datasets/ltg/nortruthfulqa_mc), and [NorTruthfulQA (Generation)](https://huggingface.co/datasets/ltg/nortruthfulqa_gen). We describe our high-level dataset creation approach here and provide more details, general statistics, and model evaluation results in our paper.


- **Curated by:** The [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) (LTG) at the University of Oslo
- **Language:** Norwegian (Bokmål and Nynorsk)
- **Repository:** [github.com/ltgoslo/norqa](https://github.com/ltgoslo/norqa)
- **Paper:** [aclanthology.org/2025.nodalida-1.43](https://aclanthology.org/2025.nodalida-1.43) (NoDaLiDa/Baltic-HLT 2025)
- **License:** MIT

### Citation

```bibtex
@inproceedings{mikhailov-etal-2025-collection,
    title = "A Collection of Question Answering Datasets for {Norwegian}",
    author = "Mikhailov, Vladislav  and
      M{\ae}hlum, Petter  and
      Lang{\o}, Victoria Ovedie Chruickshank  and
      Velldal, Erik  and
      {\O}vrelid, Lilja",
    editor = "Johansson, Richard  and
      Stymne, Sara",
    booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
    month = mar,
    year = "2025",
    address = "Tallinn, Estonia",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2025.nodalida-1.43/",
    pages = "397--407",
    ISBN = "978-9908-53-109-0",
    abstract = "This paper introduces a new suite of question answering datasets for Norwegian; NorOpenBookQA, NorCommonSenseQA, NorTruthfulQA, and NRK-Quiz-QA. The data covers a wide range of skills and knowledge domains, including world knowledge, commonsense reasoning, truthfulness, and knowledge about Norway. Covering both of the written standards of Norwegian {--} Bokm{\r{a}}l and Nynorsk {--} our datasets comprise over 10k question-answer pairs, created by native speakers. We detail our dataset creation approach and present the results of evaluating 11 language models (LMs) in zero- and few-shot regimes. Most LMs perform better in Bokm{\r{a}}l than Nynorsk, struggle most with commonsense reasoning, and are often untruthful in generating answers to questions. All our datasets and annotation materials are publicly available."
}
```

### Uses

NorOpenBookQA is intended to be used for zero- and few-shot evaluation of language models for Norwegian.
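
As an illustration, an example can be rendered into a zero-shot multiple-choice prompt along the following lines. Note that this template (including the Norwegian cue "Svar:") is a hypothetical sketch, not the exact prompt format used in the paper:

```python
def format_prompt(example: dict) -> str:
    """Render a NorOpenBookQA example as a multiple-choice prompt.

    The template is an illustrative assumption; the paper's exact
    prompt format may differ.
    """
    lines = [example["question_stem"]]
    # Each answer choice is listed as "<label>. <text>".
    for label, text in zip(example["choices"]["label"],
                           example["choices"]["text"]):
        lines.append(f"{label}. {text}")
    lines.append("Svar:")  # "Answer:" in Norwegian
    return "\n".join(lines)

# The Bokmål instance shown later in this card:
example = {
    "question_stem": "Hva er et eksempel på at flammer avgir lys?",
    "choices": {
        "label": ["A", "B", "C", "D"],
        "text": [
            "En ovn er forhåndsvarmet og varsellampen lyser",
            "En fyrstikk tennes for å tenne en sigarett",
            "Et tent stearinlys i et vindu signaliserer til noen",
            "En brann som ble slukket for å sende røyksignaler",
        ],
    },
}
print(format_prompt(example))
```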

## Dataset Creation

NorOpenBookQA is created by adapting the [OpenBookQA](https://huggingface.co/datasets/allenai/openbookqa) dataset for English via a two-stage annotation process. Our annotation team consists of 21 BA/BSc and MA/MSc students in linguistics and computer science, all native Norwegian speakers. The team is divided into two groups: 19 annotators focus on Bokmål, while two annotators work on Nynorsk.


<details>
 <summary><b>Stage 1: Human annotation and translation</b></summary>
 The annotation task here involves adapting the English examples from OpenBookQA using two strategies.

1. **Manual translation and localization**: The annotators manually translate the original examples, with localization that reflects Norwegian contexts where necessary.
2. **Creative adaptation**: The annotators create new examples in Bokmål and Nynorsk from scratch, drawing inspiration from the English examples shown to them.
</details>


<details>
 <summary><b>Stage 2: Data Curation</b></summary>
 This stage aims to filter out low-quality examples collected during the first stage. Due to resource constraints, we have curated 67% of the examples (2377 out of 3515), with each example validated by a single annotator. Each annotator receives pairs of the original and translated/localized examples or newly created examples for review. The annotation task here involves two main steps.

1. **Quality judgment**: The annotators judge the overall quality of an example and label any example that is of low quality or requires substantial revision. Such examples are excluded from our datasets.
2. **Quality control**: The annotators check the spelling, grammar, and natural flow of an example, making minor edits where needed.
</details>
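
The 67% curated share quoted above can be reproduced directly from the counts (2377 curated out of 3515 total examples):

```python
# Sanity check of the curated proportion reported in Stage 2.
curated, total = 2377, 3515
share = curated / total
print(round(share, 3))  # 0.676, i.e. roughly 67%
```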


#### Personal and Sensitive Information

The dataset does not contain information considered personal or sensitive.

## Dataset Structure

### Dataset Instances

Each dataset instance looks as follows:

#### Bokmål

```python
{
    'id': '1387-31',
    'question_stem': 'Hva er et eksempel på at flammer avgir lys?',
    'choices': {
        'label': ['A', 'B', 'C', 'D'],
        'text': [
            'En ovn er forhåndsvarmet og varsellampen lyser',
            'En fyrstikk tennes for å tenne en sigarett',
            'Et tent stearinlys i et vindu signaliserer til noen',
            'En brann som ble slukket for å sende røyksignaler'
        ], 
    },
    'answer': 'C',
    'fact': 'Flammer avgir lys',
    'curated': True
}
```

#### Nynorsk

```python
{
    'id': '810-59',
    'question_stem': 'Konservering',
    'choices': {
        'label': ['A', 'B', 'C', 'D'],
        'text': [
            'fører til meir langvarig tørke av ressursar',
            'fører til meir langvarig tilgjenge av ressursar',
            'fører til større forbruk',
            'fører til mangel på ressursar'
        ],
    },
    'answer': 'B',
    'fact': 'Etter kvart som bruken av ein ressurs avtek, vil tida der ressursen er tilgjengeleg auke',
    'curated': False
}
```

### Dataset Fields

`id`: an example id \
`question_stem`: a question \
`choices`: answer choices (`label`: a list of labels; `text`: a list of possible answers) \
`answer`: the correct answer from the list of labels (A/B/C/D) \
`fact`: a common knowledge core fact associated with the question (92% of the examples contain the fact)  \
`curated`: an indicator of whether an example has been curated or not
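
Since `answer` stores the correct label rather than the answer text, resolving it requires a lookup through `choices`. A minimal sketch (the helper name `answer_text` is our own, not part of the dataset):

```python
def answer_text(example: dict) -> str:
    """Return the text of the correct answer choice for an example."""
    # Find the position of the gold label among the choice labels,
    # then pick the answer text at the same position.
    idx = example["choices"]["label"].index(example["answer"])
    return example["choices"]["text"][idx]

# Abbreviated version of the Nynorsk instance above:
example = {
    "answer": "B",
    "choices": {
        "label": ["A", "B", "C", "D"],
        "text": [
            "fører til meir langvarig tørke av ressursar",
            "fører til meir langvarig tilgjenge av ressursar",
            "fører til større forbruk",
            "fører til mangel på ressursar",
        ],
    },
}
print(answer_text(example))  # fører til meir langvarig tilgjenge av ressursar
```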


## Dataset Card Contact

* Vladislav Mikhailov (vladism@ifi.uio.no)
* Lilja Øvrelid (liljao@ifi.uio.no)