---
annotations_creators:
- no-annotation
language:
- en
license: cc-by-sa-4.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
pretty_name: wikisqe_experiment
configs:
- config_name: citation
  data_files:
  - split: train
    path: citation/train*
  - split: val
    path: citation/val*
  - split: test
    path: citation/test*
- config_name: information addition
  data_files:
  - split: train
    path: information addition/train*
  - split: val
    path: information addition/val*
  - split: test
    path: information addition/test*
- config_name: syntactic or semantic revision
  data_files:
  - split: train
    path: syntactic or semantic revision/train*
  - split: val
    path: syntactic or semantic revision/val*
  - split: test
    path: syntactic or semantic revision/test*
- config_name: sac
  data_files:
  - split: train
    path: sac/train*
  - split: val
    path: sac/val*
  - split: test
    path: sac/test*
- config_name: other
  data_files:
  - split: train
    path: other/train*
  - split: val
    path: other/val*
  - split: test
    path: other/test*
- config_name: all
  data_files:
  - split: train
    path: all/train*
  - split: val
    path: all/val*
  - split: test
    path: all/test*
- config_name: disputed claim
  data_files:
  - split: train
    path: disputed claim/train*
  - split: val
    path: disputed claim/val*
  - split: test
    path: disputed claim/test*
- config_name: disambiguation needed
  data_files:
  - split: train
    path: disambiguation needed/train*
  - split: val
    path: disambiguation needed/val*
  - split: test
    path: disambiguation needed/test*
- config_name: dubious
  data_files:
  - split: train
    path: dubious/train*
  - split: val
    path: dubious/val*
  - split: test
    path: dubious/test*
- config_name: unreliable source
  data_files:
  - split: train
    path: unreliable source/train*
  - split: val
    path: unreliable source/val*
  - split: test
    path: unreliable source/test*
- config_name: when
  data_files:
  - split: train
    path: when/train*
  - split: val
    path: when/val*
  - split: test
    path: when/test*
- config_name: neutrality disputed
  data_files:
  - split: train
    path: neutrality disputed/train*
  - split: val
    path: neutrality disputed/val*
  - split: test
    path: neutrality disputed/test*
- config_name: verification needed
  data_files:
  - split: train
    path: verification needed/train*
  - split: val
    path: verification needed/val*
  - split: test
    path: verification needed/test*
- config_name: dead link
  data_files:
  - split: train
    path: dead link/train*
  - split: val
    path: dead link/val*
  - split: test
    path: dead link/test*
- config_name: not in citation given
  data_files:
  - split: train
    path: not in citation given/train*
  - split: val
    path: not in citation given/val*
  - split: test
    path: not in citation given/test*
- config_name: needs update
  data_files:
  - split: train
    path: needs update/train*
  - split: val
    path: needs update/val*
  - split: test
    path: needs update/test*
- config_name: according to whom
  data_files:
  - split: train
    path: according to whom/train*
  - split: val
    path: according to whom/val*
  - split: test
    path: according to whom/test*
- config_name: original research
  data_files:
  - split: train
    path: original research/train*
  - split: val
    path: original research/val*
  - split: test
    path: original research/test*
- config_name: pronunciation
  data_files:
  - split: train
    path: pronunciation/train*
  - split: val
    path: pronunciation/val*
  - split: test
    path: pronunciation/test*
- config_name: by whom
  data_files:
  - split: train
    path: by whom/train*
  - split: val
    path: by whom/val*
  - split: test
    path: by whom/test*
- config_name: vague
  data_files:
  - split: train
    path: vague/train*
  - split: val
    path: vague/val*
  - split: test
    path: vague/test*
- config_name: citation needed
  data_files:
  - split: train
    path: citation needed/train*
  - split: val
    path: citation needed/val*
  - split: test
    path: citation needed/test*
- config_name: who
  data_files:
  - split: train
    path: who/train*
  - split: val
    path: who/val*
  - split: test
    path: who/test*
- config_name: attribution needed
  data_files:
  - split: train
    path: attribution needed/train*
  - split: val
    path: attribution needed/val*
  - split: test
    path: attribution needed/test*
- config_name: sic
  data_files:
  - split: train
    path: sic/train*
  - split: val
    path: sic/val*
  - split: test
    path: sic/test*
- config_name: which
  data_files:
  - split: train
    path: which/train*
  - split: val
    path: which/val*
  - split: test
    path: which/test*
- config_name: clarification needed
  data_files:
  - split: train
    path: clarification needed/train*
  - split: val
    path: clarification needed/val*
  - split: test
    path: clarification needed/test*
size_categories:
- 1M<n<10M
---


# Dataset Card for **WikiSQE\_experiment**

## Dataset Description

* **Repository**: [https://github.com/ken-ando/WikiSQE](https://github.com/ken-ando/WikiSQE)
* **Paper**: [https://arxiv.org/abs/2305.05928](https://arxiv.org/abs/2305.05928) (AAAI 2024)

### Dataset Summary

`WikiSQE_experiment` is the **official evaluation split** for **WikiSQE: A Large‑Scale Dataset for Sentence Quality Estimation in Wikipedia**.

While the parent dataset (`ando55/WikiSQE`) contains **every** sentence flagged with a quality problem in the full edit history of English Wikipedia, **this repo provides the exact train/validation/test partitions used in the AAAI 2024 paper**. It offers **≈ 8.3 million sentences** organised as:

* **27 dataset *groups*** (20 frequent quality labels + 5 quality-type categories + 2 coarse groups)
* **3 standard splits per group** (`train`, `val`, `test`) – for example `citation/train`, `citation/val`, …

Each split blends **labeled** and **unlabeled** sentences at a **1 : 1 ratio** to support semi-supervised and positive/negative training paradigms.
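
As a minimal offline illustration of that layout (the four records below are invented stand-ins, not actual dataset rows):

```python
from collections import Counter

# Invented records mimicking the split schema: {"text": ..., "label": 0 or 1}.
records = [
    {"text": "Paris is the capital of France.", "label": 1},  # tagged sentence
    {"text": "Paris is the capital of France.", "label": 0},  # untagged counterpart
    {"text": "The battle occurred in 1847.",    "label": 1},
    {"text": "The battle occurred in 1847.",    "label": 0},
]

counts = Counter(r["label"] for r in records)
assert counts[0] == counts[1]  # each split keeps a 1:1 positive/negative ratio
```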

> **Need the full dump?** Head to [https://huggingface.co/datasets/ando55/WikiSQE](https://huggingface.co/datasets/ando55/WikiSQE).

---

## Dataset Structure

### Groups (27)

| Group                         | List of labels                                                                                                                                                                                                                                                                                                                              |
| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Quality type categories** (5)        | ['citation', 'disputed claim', 'information addition', 'other', 'syntactic or semantic revision']                                                                                                                                                                                                                              |
| **Most‑frequent labels** (20) | ['according to whom', 'attribution needed', 'by whom', 'citation needed', 'clarification needed', 'dead link', 'disambiguation needed', 'dubious', 'needs update', 'neutrality disputed', 'not in citation given', 'original research', 'pronunciation', 'sic', 'unreliable source', 'vague', 'verification needed', 'when', 'which', 'who'] |
| **Coarse groups** (2)        | ['all', 'sac']                                                                                                                                                                                                                              |
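
The group names above double as the `config_name`s accepted by this repo. A quick sanity check of the count, with the lists copied verbatim from the table:

```python
quality_types = ["citation", "disputed claim", "information addition", "other",
                 "syntactic or semantic revision"]
frequent_labels = ["according to whom", "attribution needed", "by whom",
                   "citation needed", "clarification needed", "dead link",
                   "disambiguation needed", "dubious", "needs update",
                   "neutrality disputed", "not in citation given",
                   "original research", "pronunciation", "sic",
                   "unreliable source", "vague", "verification needed",
                   "when", "which", "who"]
coarse_groups = ["all", "sac"]

configs = quality_types + frequent_labels + coarse_groups
assert len(configs) == 27  # 5 + 20 + 2 groups
```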

**Notes**

* **`all`** contains a **random subset uniformly sampled from the entire WikiSQE corpus**. Use it when you want a representative slice without downloading the full 3.4 M‑sentence dump.
* **`sac`** contains a **composite set randomly drawn from the three fine‑grained categories `disputed claim`, `information addition`, and `syntactic or semantic revision`**. It was introduced in the paper to study sentence‑level action classification.

### Split sizes

| Split   | Number of sentences |
| ------- | ------------------- |
| `train` | varies by config    |
| `val`   | 1,000               |
| `test`  | 1,000               |


### Data Fields

| Field   | Type        | Description                                                                                                                      |
| ------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------- |
| `text`  | *string*    | Sentence taken from a specific Wikipedia revision                                                                                |
| `label` | *int* (0/1) | **1** = sentence is tagged with the current config’s quality issue; **0** = sentence from the same revision **without** that tag |
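
Concretely, a labeled/unlabeled pair under a config such as `citation needed` might look like this (contents invented for illustration):

```python
# Invented examples of the two-field schema; not actual dataset rows.
positive = {"text": "The population doubled between 1990 and 2000.", "label": 1}
negative = {"text": "The census is conducted every ten years.",      "label": 0}

for row in (positive, negative):
    assert set(row) == {"text", "label"}
    assert row["label"] in (0, 1)
```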


---

## Download & Usage

### 1 — Download the Parquet snapshot

```bash
# Install (if you haven't already)
pip install --upgrade datasets huggingface_hub
```

```python
from huggingface_hub import snapshot_download

repo_dir = snapshot_download(
    repo_id="ando55/WikiSQE_experiment",  # this repo
    repo_type="dataset",
    local_dir="WikiSQE_experiment_parquet",  # files are copied here directly;
    # local_dir_use_symlinks is deprecated on recent huggingface_hub and no longer needed
)
print("Saved at:", repo_dir)
```

This grabs **all 27 configs** (each providing `train`, `val`, `test`) in their native **Parquet** format.

### 2 — Load a split on‑the‑fly

Streaming access without a full download:

```python
from datasets import load_dataset

ds = load_dataset(
    "ando55/WikiSQE_experiment",
    name="citation",   # choose any config
    split="train",
    streaming=True
)
```

### 3 — (Optionally) Convert Parquet → CSV

The snapshot ships as Parquet. If your tooling expects CSV, the script below converts every file, appending any shards of the same split into a single CSV per config:

```python
import pyarrow.dataset as ds, pyarrow.csv as pv, pyarrow as pa, pathlib

src = pathlib.Path("WikiSQE_experiment_parquet")
dst = pathlib.Path("WikiSQE_experiment_csv"); dst.mkdir(exist_ok=True)

for pq in src.rglob("*.parquet"):
    cfg   = pq.parent.name         # config name, e.g. "citation"
    split = pq.stem.split("-")[0]  # "train"/"val"/"test" (strip any shard suffix)
    print(cfg, split)
    out   = dst / f"{cfg}_{split}.csv"
    first = not out.exists()       # write the header only for the first shard
    dset  = ds.dataset(str(pq))
    with out.open("ab") as f, pv.CSVWriter(
            f, dset.schema,
            write_options=pv.WriteOptions(include_header=first)) as w:
        for batch in dset.to_batches():
            w.write_table(pa.Table.from_batches([batch]))
```

---

## Citation

```bibtex
@inproceedings{ando-etal-2024-wikisqe,
  title     = {{WikiSQE}: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia},
  author    = {Ando, Kenichiro and Sekine, Satoshi and Komachi, Mamoru},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  year      = {2024},
  volume    = {38},
  number    = {16},
  pages     = {17656--17663},
  address   = {Vancouver, Canada},
  publisher = {Association for the Advancement of Artificial Intelligence}
}
```

*Happy experimenting!* 🚀