ando55 committed on
Commit 4067dc6 · verified · 1 Parent(s): 1c6cc86

Update README.md

Files changed (1)
  1. README.md +114 -24
README.md CHANGED
@@ -233,45 +233,135 @@ size_categories:
 ---

- # Dataset Card for CNN Dailymail Dataset

 ## Dataset Description

- - **Repository**: https://github.com/ken-ando/WikiSQE
- - **Paper**: https://arxiv.org/abs/2305.05928

 ### Dataset Summary

- [WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia](https://arxiv.org/abs/2305.05928) by [Kenichiro Ando](https://ken-ando.github.io/kenichiro_ando/index.html), Satoshi Sekine and Mamoru Komachi (AAAI 2024).

- The WikiSQE dataset is an English-language dataset of over 3.4M Wikipedia sentences that editors flagged as poor quality in some aspect; the quality issues are classified into 153 labels. This repository is the experimental split used in our paper, covering 5 categories and the top 20 most frequent labels. Each subset blends labeled and unlabeled sentences at a 1:1 ratio. The full dataset is available at https://huggingface.co/datasets/ando55/WikiSQE.

- A list of categories: ['all', 'citation', 'disputed claim', 'information addition', 'other', 'sac', 'syntactic or semantic revision']

- A list of labels: ['according to whom', 'attribution needed', 'by whom', 'citation needed', 'clarification needed', 'dead link', 'disambiguation needed', 'dubious', 'needs update', 'neutrality disputed', 'not in citation given', 'original research', 'pronunciation', 'sic', 'unreliable source', 'vague', 'verification needed', 'when', 'which', 'who']

 ### Data Fields

- - `text`: a string feature
- - `label`: a ClassLabel feature (1: labeled sentence, 0: non-labeled sentence)

- ### Label Details and Statistics

- See https://github.com/ken-ando/WikiSQE.

- ### Citation Information

 ```
  @inproceedings{ando-etal-2024-wikisqe,
- title = "WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia",
- author = "Ando, Kenichiro and
- Sekine, Satoshi and
- Komachi, Mamoru",
- booktitle = "Proceedings of the AAAI Conference on Artificial Intelligence",
- volume= "38",
- number= "16",
- pages= "17656--17663",
- year= "2024",
- address = "Vancouver, Canada",
- publisher = "Association for the Advancement of Artificial Intelligence",
  }
- ```

 ---

+ # Dataset Card for **WikiSQE\_experiment**

 ## Dataset Description

+ * **Repository**: [https://github.com/ken-ando/WikiSQE](https://github.com/ken-ando/WikiSQE)
+ * **Paper**: [https://arxiv.org/abs/2305.05928](https://arxiv.org/abs/2305.05928) (AAAI 2024)

 ### Dataset Summary

+ `WikiSQE_experiment` is the **official evaluation split** for **WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia**.

+ While the parent dataset (`ando55/WikiSQE`) contains **every** sentence flagged with a quality problem in the full edit history of English Wikipedia, **this repo provides the exact train/validation/test partitions used in the AAAI 2024 paper**. It offers **≈ 8.3 million sentences** organised as:

+ * **27 dataset *groups*** (20 most-frequent quality labels + 5 quality-type categories + 2 coarse groups)
+ * **3 standard splits per group** (`train`, `val`, `test`) – for example `citation/train`, `citation/val`, …
+
+ > **Need the full dump?** Head to [https://huggingface.co/datasets/ando55/WikiSQE](https://huggingface.co/datasets/ando55/WikiSQE).
+
+ ---
+
+ ## Dataset Structure
+
+ ### Groups (27)
+
+ | Group | List of labels |
+ | ----- | -------------- |
+ | **Quality-type categories** (5) | ['citation', 'disputed claim', 'information addition', 'other', 'syntactic or semantic revision'] |
+ | **Most-frequent labels** (20) | ['according to whom', 'attribution needed', 'by whom', 'citation needed', 'clarification needed', 'dead link', 'disambiguation needed', 'dubious', 'needs update', 'neutrality disputed', 'not in citation given', 'original research', 'pronunciation', 'sic', 'unreliable source', 'vague', 'verification needed', 'when', 'which', 'who'] |
+ | **Coarse groups** (2) | ['all', 'sac'] |
+
+ **Notes**
+
+ * **`all`** is a **random subset uniformly sampled from the entire WikiSQE corpus**. Use it when you want a representative slice without downloading the full 3.4 M-sentence dump.
+ * **`sac`** is a **composite set randomly drawn from the three fine-grained categories `disputed claim`, `information addition`, and `syntactic or semantic revision`**. It was introduced in the paper to study sentence-level action classification.
+
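+ To list the 27 available configs programmatically, a minimal sketch using the standard 🤗 `datasets` helper (assuming every group above is exposed as a config of this repo):
+
+ ```python
+ from datasets import get_dataset_config_names
+
+ # Enumerate all dataset configs (groups) of this repo
+ configs = get_dataset_config_names("ando55/WikiSQE_experiment")
+ print(len(configs), sorted(configs)[:5])
+ ```
+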
+ ### Split sizes
+
+ | Split | Number of sentences |
+ | ------- | ------------------- |
+ | `train` | Varies by group |
+ | `val` | 1,000 |
+ | `test` | 1,000 |
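+
+ A quick way to sanity-check these numbers for any group (a sketch; it uses only the standard `load_dataset` API and the split names above):
+
+ ```python
+ from datasets import load_dataset
+
+ # "citation" is one of the 27 configs; swap in any other group name
+ val = load_dataset("ando55/WikiSQE_experiment", name="citation", split="val")
+ print(val.num_rows)  # expected: 1000
+ ```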

 ### Data Fields

+ | Field | Type | Description |
+ | ------- | ----------- | ----------- |
+ | `text` | *string* | Sentence taken from a specific Wikipedia revision |
+ | `label` | *int* (0/1) | **1** = sentence is tagged with the current config’s quality issue; **0** = sentence from the same revision **without** that tag |
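+
+ For example, inspecting one record and the label balance of a split (a sketch; the original card notes a 1:1 blend of labeled and unlabeled sentences, so roughly half the examples should carry label 1):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ando55/WikiSQE_experiment", name="citation", split="val")
+ print(ds[0])  # {'text': '...', 'label': 0 or 1}
+
+ # Rough label balance; ~0.5 if the 1:1 blend holds
+ print(sum(ds["label"]) / len(ds))
+ ```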
+
+ ---
+
+ ## Download & Usage
+
+ ### 1 — Download the Parquet snapshot
+
+ ```bash
+ # Install (if you haven't already)
+ pip install --upgrade datasets huggingface_hub
+ ```
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ repo_dir = snapshot_download(
+     repo_id="ando55/WikiSQE_experiment",  # this repo
+     repo_type="dataset",
+     local_dir="WikiSQE_experiment_parquet",
+     local_dir_use_symlinks=False,
+ )
+ print("Saved at:", repo_dir)
+ ```
+
+ This grabs **all 27 configs** (each providing `train`, `val`, `test`) in their native **Parquet** format.
+
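+ Once downloaded, individual files can be read directly, e.g. with pandas (a sketch; the `<config>/<split>.parquet` layout is an assumption here, matching what the conversion script in step 3 expects):
+
+ ```python
+ import pandas as pd
+
+ # Path assumes the snapshot stores one Parquet file per config/split
+ df = pd.read_parquet("WikiSQE_experiment_parquet/citation/train.parquet")
+ print(df.head())
+ ```
+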
+ ### 2 — Load a split on-the-fly
+
+ Streaming access without a full download:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset(
+     "ando55/WikiSQE_experiment",
+     name="citation",   # choose any config
+     split="train",
+     streaming=True,
+ )
+ ```
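+
+ A streamed dataset is an iterable, so you can peek at a few examples without materialising anything (a sketch using the standard library's `itertools.islice`):
+
+ ```python
+ from itertools import islice
+
+ # Print the first three streamed examples
+ for example in islice(ds, 3):
+     print(example["label"], example["text"][:80])
+ ```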
+
+ ### 3 — (Optional) Convert Parquet → CSV
+
+ ```python
+ import pathlib
+
+ import pyarrow as pa
+ import pyarrow.csv as pv
+ import pyarrow.dataset as ds
+
+ src = pathlib.Path("WikiSQE_experiment_parquet")
+ dst = pathlib.Path("WikiSQE_experiment_csv")
+ dst.mkdir(exist_ok=True)
+
+ for pq in src.rglob("*.parquet"):
+     cfg = pq.parent.name   # config name
+     split = pq.stem        # train/val/test
+     print(cfg, split)
+     out = dst / f"{cfg}_{split}.csv"
+     first = not out.exists()  # write the header only once per output file
+     dset = ds.dataset(str(pq))
+     with out.open("ab") as f, pv.CSVWriter(
+             f, dset.schema,
+             write_options=pv.WriteOptions(include_header=first)) as w:
+         for batch in dset.to_batches():
+             w.write_table(pa.Table.from_batches([batch]))
  ```
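+
+ If each Parquet file fits comfortably in memory, a simpler per-file alternative (a sketch; unlike the script above, it writes one CSV per Parquet file rather than appending per split):
+
+ ```python
+ import pathlib
+
+ import pandas as pd
+
+ for pq in pathlib.Path("WikiSQE_experiment_parquet").rglob("*.parquet"):
+     # e.g. citation_train.csv
+     pd.read_parquet(pq).to_csv(f"{pq.parent.name}_{pq.stem}.csv", index=False)
+ ```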
+
+ ---
+
+ ## Citation
+
+ ```bibtex
  @inproceedings{ando-etal-2024-wikisqe,
+ title = {{WikiSQE}: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia},
+ author = {Ando, Kenichiro and Sekine, Satoshi and Komachi, Mamoru},
+ booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
+ year = {2024},
+ volume = {38},
+ number = {16},
+ pages = {17656--17663},
+ address = {Vancouver, Canada},
+ publisher = {Association for the Advancement of Artificial Intelligence}
  }
+ ```
+
+ *Happy experimenting!* 🚀