---
dataset_name: WikiCulture
language:
  - en
license: mit
task_categories:
  - text-generation
size_categories:
  - 100K<n<1M
source_datasets:
  - wikipedia
configs:
  - config_name: AF
    data_files:
      - split: train
        path: AF.parquet
  - config_name: AS
    data_files:
      - split: train
        path: AS.parquet
  - config_name: AU
    data_files:
      - split: train
        path: AU.parquet
  - config_name: CH
    data_files:
      - split: train
        path: CH.parquet
  - config_name: EU
    data_files:
      - split: train
        path: EU.parquet
  - config_name: LA
    data_files:
      - split: train
        path: LA.parquet
  - config_name: ME
    data_files:
      - split: train
        path: ME.parquet
  - config_name: NA
    data_files:
      - split: train
        path: NA.parquet
---

# Wikipedia Culture Dataset (nDNA/WikiCulture)

## Overview

This dataset provides coarse-grained geographic culture labels for English Wikipedia articles, mapped into eight buckets:

- NA: North America
- EU: Europe
- AU: Oceania (UN M49 “Oceania”)
- AS: Asia (excluding Western Asia and Greater China)
- CH: Greater China (CN, HK, MO, TW)
- AF: Africa
- LA: Latin America & Caribbean (Americas excluding Northern America)
- ME: Middle East (UN M49 “Western Asia”)

The labels are intended for controlled sampling and stratified analysis of Wikipedia content by broad region. They are not intended as fine-grained cultural or ethnographic ground truth.

## Data Sources

The dataset is constructed from three public components:

  1. UN M49 country/area regional classification (UN Statistics Division), used to generate a deterministic mapping from ISO3166-1 alpha-2 codes to the eight buckets.
  2. Wikipedia Cultural Diversity dataset (Cultural Context Content, CCC), used as the article-level signal source (including per-article ISO country code and Wikidata-derived fields) [https://doi.org/10.6084/m9.figshare.7039514].
  3. wikimedia/wikipedia (Hugging Face dataset), used to attach the full article text via a join.

## Labeling Methodology

### 1) Deterministic ISO2 → bucket mapping (UN M49)

An ISO3166-1 alpha-2 code is mapped to one of the eight buckets using UN M49 region/subregion fields:

- EU if UN M49 Region = Europe
- AF if Region = Africa
- AU if Region = Oceania
- ME if Sub-region = Western Asia
- NA if Region = Americas and Sub-region = Northern America
- LA if Region = Americas and not Northern America
- CH if ISO2 ∈ {CN, HK, MO, TW} (Taiwan is forced to CH)
- AS if Region = Asia and not Western Asia and not CH

This mapping is deterministic and versioned.
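A minimal sketch of this mapping, assuming each ISO2 code comes with the UN M49 "Region" and "Sub-region" values (the `region`/`subregion` parameter names are illustrative; the actual column names depend on the M49 snapshot used):

```python
# Illustrative sketch of the deterministic ISO2 -> bucket mapping.
# CH is checked first so that Greater China codes take precedence
# over the generic Asia rule.

GREATER_CHINA = {"CN", "HK", "MO", "TW"}

def bucket(iso2, region, subregion):
    """Map an ISO3166-1 alpha-2 code to one of the eight buckets."""
    if iso2 in GREATER_CHINA:
        return "CH"
    if region == "Europe":
        return "EU"
    if region == "Africa":
        return "AF"
    if region == "Oceania":
        return "AU"
    if subregion == "Western Asia":
        return "ME"
    if region == "Americas":
        return "NA" if subregion == "Northern America" else "LA"
    if region == "Asia":
        return "AS"
    return None  # unmapped codes are excluded downstream
```

For example, `bucket("TW", "Asia", "Eastern Asia")` yields `"CH"` rather than `"AS"` because the Greater China rule is applied first.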

### 2) Geo label assignment from CCC

Each article in the CCC dump provides `iso3166` (ISO3166-1 alpha-2). The primary label is:

`culture_geo = bucket(iso3166)`

Rows with a missing `iso3166` value or an unmapped ISO2 code are excluded.

### 3) High-precision consistency filtering

To increase label precision and remove cross-regional or ambiguous items, an additional consistency check is applied using CCC’s Wikidata-derived columns:

- `country_wd`
- `location_wd`

**Offline QID → ISO2 bootstrap.** Because no online Wikidata queries are used, we construct a bootstrapped mapping from QID to ISO2 by taking the most frequent `iso3166` observed for each `qitem` in the CCC dump.

**Evidence extraction.** For each article, QIDs are parsed from `country_wd` and `location_wd`, mapped to ISO2 using the bootstrapped table, then converted to bucket sets via the UN M49 mapping.
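The bootstrap and evidence extraction steps can be sketched in pure Python. The row/cell shapes here are illustrative (the real CCC columns may encode QIDs differently; a `;`-separated cell format is assumed):

```python
from collections import Counter, defaultdict

def bootstrap_qid_to_iso2(rows):
    """Offline bootstrap: for each qitem, pick the most frequent
    iso3166 observed in the CCC dump.

    rows: iterable of dicts with 'qitem' and 'iso3166' keys (illustrative).
    """
    counts = defaultdict(Counter)
    for row in rows:
        if row.get("iso3166"):
            counts[row["qitem"]][row["iso3166"]] += 1
    return {qid: c.most_common(1)[0][0] for qid, c in counts.items()}

def evidence_buckets(cell, qid_to_iso2, iso2_to_bucket):
    """Parse QIDs from a country_wd / location_wd cell (assumed
    ';'-separated) and map them to a set of buckets.
    """
    if not cell:
        return set()
    buckets = set()
    for qid in str(cell).split(";"):
        iso2 = qid_to_iso2.get(qid.strip())
        b = iso2_to_bucket.get(iso2)
        if b:
            buckets.add(b)
    return buckets
```

Note that the bootstrap is only as good as the CCC dump's coverage: a QID that never co-occurs with an `iso3166` value resolves to nothing, which is exactly the coverage gap that motivates the ME exception below.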

**Hard-strong inclusion rule.** An article is retained as high-precision if:

- it has at least one non-empty evidence set from `country_wd` or `location_wd`, and
- if `location_wd` evidence is present, it equals {culture_geo} exactly; otherwise,
- `country_wd` evidence equals {culture_geo} exactly.

This rule is deliberately discard-heavy.

### 4) Middle East retention exception

In the CCC snapshot used, many ME-labeled rows have empty evidence after offline QID resolution (a coverage limitation of the bootstrapped mapping). To avoid eliminating nearly all ME-labeled examples, a restricted exception is applied:

- If `culture_geo == "ME"` and both evidence sets are empty, the row is retained (geo-only).
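Together, the hard-strong rule and the ME exception amount to a keep/discard predicate like the following sketch (evidence sets are assumed to be precomputed bucket sets, as in the evidence-extraction step):

```python
def keep_row(culture_geo, country_ev, location_ev):
    """Return True if a labeled row passes the high-precision filter.

    country_ev / location_ev: sets of buckets derived from country_wd /
    location_wd evidence (empty set = no usable evidence).
    """
    # ME retention exception: keep geo-only ME rows with no evidence at all.
    if culture_geo == "ME" and not country_ev and not location_ev:
        return True
    # Hard-strong rule: require at least one non-empty evidence set...
    if not country_ev and not location_ev:
        return False
    # ...and exact agreement with the geo label, preferring location_wd.
    if location_ev:
        return location_ev == {culture_geo}
    return country_ev == {culture_geo}
```

For example, a row labeled EU with `location_wd` evidence {EU, AS} is discarded even though its `country_wd` evidence might match, because `location_wd` evidence, when present, must agree exactly.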

## Attaching Wikipedia Text (wikimedia/wikipedia join)

The final dataset includes the full article text by merging the filtered labeled table with the Hugging Face dataset wikimedia/wikipedia.

- An inner join is performed on the article identifier:
  - CCC / labeled table: `page_id`
  - wikimedia/wikipedia: `id`

Only rows that match in both sources are included in the released dataset, ensuring that every labeled example has associated article text.
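The join can be sketched with a simple lookup table (in practice this would be a merge over the parquet / Hugging Face tables; the row shapes here are illustrative):

```python
def attach_text(labeled_rows, wiki_rows):
    """Inner-join labeled rows (key: page_id) with wikimedia/wikipedia
    rows (key: id), attaching the article text.
    """
    text_by_id = {row["id"]: row["text"] for row in wiki_rows}
    joined = []
    for row in labeled_rows:
        text = text_by_id.get(row["page_id"])
        if text is not None:  # inner join: drop unmatched labeled rows
            joined.append({**row, "text": text})
    return joined
```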

## Dataset Fields

Each row contains:

- `page_title` (string): Wikipedia article title (from CCC / Wikipedia metadata)
- `Qid` (string): Wikidata QID (`qitem`)
- `culture_geo` (string): one of {NA, EU, AU, AS, CH, AF, LA, ME}
- `text` (string): the article text (from wikimedia/wikipedia)

## Intended Use

This dataset is suitable for:

- constructing geographically stratified Wikipedia subsets,
- controlled pretraining mixtures or evaluation subsets by region,
- studying representation and performance disparities across broad regions.
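As a usage illustration, a regionally balanced subset could be drawn by capping each bucket at a fixed size (a hypothetical helper, not part of the dataset's tooling):

```python
import random

def stratified_sample(rows, per_bucket, seed=0):
    """Sample up to `per_bucket` rows from each culture_geo bucket,
    deterministically given the seed.
    """
    rng = random.Random(seed)
    by_bucket = {}
    for row in rows:
        by_bucket.setdefault(row["culture_geo"], []).append(row)
    sample = []
    for bucket_rows in by_bucket.values():
        k = min(per_bucket, len(bucket_rows))
        sample.extend(rng.sample(bucket_rows, k))
    return sample
```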

It is not suitable for:

- fine-grained cultural identity claims,
- country-level ground truth without additional validation,
- resolving inherently transnational or multi-regional topics (which are preferentially excluded by design).

## Limitations

- Labels are coarse and operationalize “culture” as broad geography.
- The strict filter excludes many global and transnational pages.
- The offline QID→ISO2 bootstrap is a heuristic and may miss QIDs not well represented in the CCC rows.
- The Middle East geo-only exception yields a subset with weaker evidence than the hard-strong subset, but it is retained to mitigate coverage loss under offline constraints.
- The final dataset includes only articles present in both the labeled CCC-derived table and wikimedia/wikipedia after the `page_id` = `id` join.

## Reproducibility

The pipeline is deterministic given:

- the UN M49 table snapshot used,
- the CCC dump snapshot used,
- the fixed mapping and filtering rules described above,
- the wikimedia/wikipedia snapshot/config used for the text join.

## Licensing and Attribution

This dataset is derived from Wikipedia-related resources and UN statistical classifications. Users should comply with the licensing and attribution requirements of:

- Wikipedia content (CC BY-SA and related terms),
- the CCC dataset’s licensing/terms,
- UN M49 documentation and terms where applicable.

## Citation

If you use this dataset, cite:

- the CCC dataset (Wikipedia Cultural Diversity / Cultural Context Content) and the associated publication,
- UN M49 (UN Statistics Division),
- the wikimedia/wikipedia Hugging Face dataset,
- this dataset repository (nDNA/WikiCulture).