---
annotations_creators:
  - no-annotation
language:
  - en
license: cc-by-sa-4.0
multilinguality:
  - monolingual
source_datasets:
  - original
task_categories:
  - text-classification
pretty_name: wikisqe_experiment
configs:
  - config_name: citation
    data_files:
      - split: train
        path: citation/train*
      - split: val
        path: citation/val*
      - split: test
        path: citation/test*
  - config_name: information addition
    data_files:
      - split: train
        path: information addition/train*
      - split: val
        path: information addition/val*
      - split: test
        path: information addition/test*
  - config_name: syntactic or semantic revision
    data_files:
      - split: train
        path: syntactic or semantic revision/train*
      - split: val
        path: syntactic or semantic revision/val*
      - split: test
        path: syntactic or semantic revision/test*
  - config_name: sac
    data_files:
      - split: train
        path: sac/train*
      - split: val
        path: sac/val*
      - split: test
        path: sac/test*
  - config_name: other
    data_files:
      - split: train
        path: other/train*
      - split: val
        path: other/val*
      - split: test
        path: other/test*
  - config_name: all
    data_files:
      - split: train
        path: all/train*
      - split: val
        path: all/val*
      - split: test
        path: all/test*
  - config_name: disputed claim
    data_files:
      - split: train
        path: disputed claim/train*
      - split: val
        path: disputed claim/val*
      - split: test
        path: disputed claim/test*
  - config_name: disambiguation needed
    data_files:
      - split: train
        path: disambiguation needed/train*
      - split: val
        path: disambiguation needed/val*
      - split: test
        path: disambiguation needed/test*
  - config_name: dubious
    data_files:
      - split: train
        path: dubious/train*
      - split: val
        path: dubious/val*
      - split: test
        path: dubious/test*
  - config_name: unreliable source
    data_files:
      - split: train
        path: unreliable source/train*
      - split: val
        path: unreliable source/val*
      - split: test
        path: unreliable source/test*
  - config_name: when
    data_files:
      - split: train
        path: when/train*
      - split: val
        path: when/val*
      - split: test
        path: when/test*
  - config_name: neutrality disputed
    data_files:
      - split: train
        path: neutrality disputed/train*
      - split: val
        path: neutrality disputed/val*
      - split: test
        path: neutrality disputed/test*
  - config_name: verification needed
    data_files:
      - split: train
        path: verification needed/train*
      - split: val
        path: verification needed/val*
      - split: test
        path: verification needed/test*
  - config_name: dead link
    data_files:
      - split: train
        path: dead link/train*
      - split: val
        path: dead link/val*
      - split: test
        path: dead link/test*
  - config_name: not in citation given
    data_files:
      - split: train
        path: not in citation given/train*
      - split: val
        path: not in citation given/val*
      - split: test
        path: not in citation given/test*
  - config_name: needs update
    data_files:
      - split: train
        path: needs update/train*
      - split: val
        path: needs update/val*
      - split: test
        path: needs update/test*
  - config_name: according to whom
    data_files:
      - split: train
        path: according to whom/train*
      - split: val
        path: according to whom/val*
      - split: test
        path: according to whom/test*
  - config_name: original research
    data_files:
      - split: train
        path: original research/train*
      - split: val
        path: original research/val*
      - split: test
        path: original research/test*
  - config_name: pronunciation
    data_files:
      - split: train
        path: pronunciation/train*
      - split: val
        path: pronunciation/val*
      - split: test
        path: pronunciation/test*
  - config_name: by whom
    data_files:
      - split: train
        path: by whom/train*
      - split: val
        path: by whom/val*
      - split: test
        path: by whom/test*
  - config_name: vague
    data_files:
      - split: train
        path: vague/train*
      - split: val
        path: vague/val*
      - split: test
        path: vague/test*
  - config_name: citation needed
    data_files:
      - split: train
        path: citation needed/train*
      - split: val
        path: citation needed/val*
      - split: test
        path: citation needed/test*
  - config_name: who
    data_files:
      - split: train
        path: who/train*
      - split: val
        path: who/val*
      - split: test
        path: who/test*
  - config_name: attribution needed
    data_files:
      - split: train
        path: attribution needed/train*
      - split: val
        path: attribution needed/val*
      - split: test
        path: attribution needed/test*
  - config_name: sic
    data_files:
      - split: train
        path: sic/train*
      - split: val
        path: sic/val*
      - split: test
        path: sic/test*
  - config_name: which
    data_files:
      - split: train
        path: which/train*
      - split: val
        path: which/val*
      - split: test
        path: which/test*
  - config_name: clarification needed
    data_files:
      - split: train
        path: clarification needed/train*
      - split: val
        path: clarification needed/val*
      - split: test
        path: clarification needed/test*
size_categories:
  - 1M<n<10M
---

# Dataset Card for WikiSQE_experiment

## Dataset Description

### Dataset Summary

WikiSQE_experiment is the official evaluation split for *WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia*.

While the parent dataset (ando55/WikiSQE) contains every sentence flagged with a quality problem in the full edit history of English Wikipedia, this repo provides the exact train/validation/test partitions used in the AAAI 2024 paper. It offers ≈ 8.3 million sentences organized as:

- 27 dataset groups (20 frequent quality labels + 5 quality-type categories + 2 coarse groups)
- 3 standard splits per group (train, val, test), e.g. `citation/train`, `citation/val`, …

Each split mixes positive sentences (label 1) and negative sentences (label 0) at a 1:1 ratio to support semi-supervised and positive/negative training paradigms.

Need the full dump? Head to https://huggingface.co/datasets/ando55/WikiSQE.


## Dataset Structure

### Groups (27)

| Group | List of labels |
| --- | --- |
| Quality type categories (5) | `citation`, `disputed claim`, `information addition`, `other`, `syntactic or semantic revision` |
| Most-frequent labels (20) | `according to whom`, `attribution needed`, `by whom`, `citation needed`, `clarification needed`, `dead link`, `disambiguation needed`, `dubious`, `needs update`, `neutrality disputed`, `not in citation given`, `original research`, `pronunciation`, `sic`, `unreliable source`, `vague`, `verification needed`, `when`, `which`, `who` |
| Coarse groups (2) | `all`, `sac` |
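
To enumerate these configs programmatically instead of copying names from the table, here is a minimal sketch using `get_dataset_config_names` from `datasets` (available in recent releases):

```python
from datasets import get_dataset_config_names

# Fetch all config names for this repo without downloading any data
configs = get_dataset_config_names("ando55/WikiSQE_experiment")
print(len(configs))  # 27
print(configs[:5])
```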

**Notes**

- `all` contains a random subset uniformly sampled from the entire WikiSQE corpus. Use it when you want a representative slice without downloading the full 3.4M-sentence dump.
- `sac` contains a composite set randomly drawn from the three fine-grained categories `disputed claim`, `information addition`, and `syntactic or semantic revision`. It was introduced in the paper to study sentence-level action classification.

### Split sizes

| Split | Number of sentences |
| --- | --- |
| train | Depends on the label |
| val | 1,000 |
| test | 1,000 |

### Data Fields

| Field | Type | Description |
| --- | --- | --- |
| `text` | string | Sentence taken from a specific Wikipedia revision |
| `label` | int (0/1) | 1 = the sentence is tagged with the current config's quality issue; 0 = a sentence from the same revision without that tag |
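
For a quick look at these fields, here is a sketch that loads one small split (val has only 1,000 rows) and checks the label balance; the `citation` config is an arbitrary choice:

```python
from collections import Counter

from datasets import load_dataset

# val is tiny (1k sentences), so downloading it outright is cheap
val = load_dataset("ando55/WikiSQE_experiment", name="citation", split="val")
print(val[0])                 # {'text': ..., 'label': 0 or 1}
print(Counter(val["label"]))  # expect a roughly 1:1 mix of labels 0 and 1
```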

## Download & Usage

### 1. Download the Parquet snapshot

```bash
# Install (if you haven't already)
pip install --upgrade datasets huggingface_hub
```

```python
from huggingface_hub import snapshot_download

repo_dir = snapshot_download(
    repo_id="ando55/WikiSQE_experiment",  # this repo
    repo_type="dataset",
    local_dir="WikiSQE_experiment_parquet",
    local_dir_use_symlinks=False,
)
print("Saved at:", repo_dir)
```

This grabs all 27 configs (each providing train, val, test) in their native Parquet format.
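
After the download completes, you can sanity-check what landed on disk. A small sketch, assuming the snapshot mirrors the `data_files` layout above (one subdirectory per config):

```python
from pathlib import Path

# Count the Parquet shards under each config directory
root = Path("WikiSQE_experiment_parquet")
for cfg_dir in sorted(p for p in root.iterdir()
                      if p.is_dir() and not p.name.startswith(".")):
    shards = list(cfg_dir.glob("*.parquet"))
    print(f"{cfg_dir.name}: {len(shards)} parquet file(s)")
```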

### 2. Load a split on the fly

Streaming access without a full download:

```python
from datasets import load_dataset

ds = load_dataset(
    "ando55/WikiSQE_experiment",
    name="citation",   # choose any config
    split="train",
    streaming=True,
)
```
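
With `streaming=True`, `load_dataset` returns an `IterableDataset`, so you can peek at a few rows without materializing the split (`take` is available in recent `datasets` releases):

```python
# Inspect the first three streamed examples
for ex in ds.take(3):
    print(ex["label"], ex["text"][:80])
```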

### 3. (Optional) Convert Parquet to CSV

The downloaded files are stored as Parquet. If you prefer CSV (e.g. for spreadsheet tools or pipelines that do not read Parquet), the script below converts every shard, appending multi-shard splits into one CSV per config and split:

```python
import pathlib

import pyarrow as pa
import pyarrow.csv as pv
import pyarrow.dataset as ds

src = pathlib.Path("WikiSQE_experiment_parquet")
dst = pathlib.Path("WikiSQE_experiment_csv")
dst.mkdir(exist_ok=True)

for pq in src.rglob("*.parquet"):
    cfg = pq.parent.name           # config name (parent directory)
    split = pq.stem.split("-")[0]  # "train-00000-of-00002" -> "train"
    print(cfg, split)
    out = dst / f"{cfg}_{split}.csv"
    first = not out.exists()       # write the CSV header only for the first shard
    dset = ds.dataset(str(pq))
    # Stream batches so large shards never have to fit in memory at once
    with out.open("ab") as f, pv.CSVWriter(
            f, dset.schema,
            write_options=pv.WriteOptions(include_header=first)) as w:
        for batch in dset.to_batches():
            w.write_table(pa.Table.from_batches([batch]))
```
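
If you only need one split, a simpler pandas sketch also works; the shard glob below is an assumption, so adjust it to your local file names:

```python
import glob

import pandas as pd

# Merge all shards of a single split into one CSV
files = sorted(glob.glob("WikiSQE_experiment_parquet/citation/train*.parquet"))
df = pd.concat((pd.read_parquet(f) for f in files), ignore_index=True)
df.to_csv("citation_train.csv", index=False)
```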

## Citation

```bibtex
@inproceedings{ando-etal-2024-wikisqe,
  title     = {{WikiSQE}: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia},
  author    = {Ando, Kenichiro and Sekine, Satoshi and Komachi, Mamoru},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  year      = {2024},
  volume    = {38},
  number    = {16},
  pages     = {17656--17663},
  address   = {Vancouver, Canada},
  publisher = {Association for the Advancement of Artificial Intelligence}
}
```

Happy experimenting! 🚀