---
pretty_name: Wikimedia Enterprise Structured Contents — cywiki_namespace_0
language:
- cy
license: cc-by-sa-4.0
task_categories:
- text-generation
- question-answering
- text-retrieval
tags:
- wikipedia
- wikimedia-enterprise
- structured-contents
size_categories:
- 1M<n<10M
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.parquet
---
# cywiki_namespace_0
Structured Contents snapshot of `cywiki_namespace_0` from the Wikimedia Enterprise API, repackaged as Parquet with a pinned schema.

The upstream Wikimedia Foundation dataset (`wikimedia/structured-wikipedia`) ships NDJSON, which has known issues loading via `datasets.load_dataset()`; see discussions #5, #15, and #16. This dataset is the same upstream content, normalised so `load_dataset(...)` works without specifying a `Features` override.
## Source

- Upstream: Wikimedia Enterprise Structured Contents API
- Snapshot identifier: `cywiki_namespace_0`
- Format at source: `.tar.gz` containing sharded `.ndjson`
- Shards in this release: 1
## File layout

```
README.md                        ← this file
schema.json                      ← pinned Arrow schema (IPC + field-name index)
data/
  cywiki_namespace_0_0.parquet   (1 file total)
```
One Parquet file per upstream NDJSON shard. Files are independently downloadable, so consumers can parallelise downloads or pull only the subset they need rather than the whole dataset.
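For example, a single shard can be fetched with `huggingface_hub` rather than downloading the whole repo (a minimal sketch, using the shard name from the layout above):

```python
from huggingface_hub import hf_hub_download

# Download just one shard; returns a local cache path.
local_path = hf_hub_download(
    repo_id="VoeTheDon/testing-wiki-structured",
    repo_type="dataset",
    filename="data/cywiki_namespace_0_0.parquet",
)
```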
## Loading the dataset
Every shard has a byte-identical embedded schema (alphabetised recursively so struct field order is stable across shards). All of these load without extra config:
```python
# Hugging Face datasets
from datasets import load_dataset

ds = load_dataset("VoeTheDon/testing-wiki-structured", split="train", streaming=True)
```

```python
# Polars
import polars as pl

df = pl.read_parquet("hf://datasets/VoeTheDon/testing-wiki-structured/data/*.parquet")
```

```python
# DuckDB
import duckdb

duckdb.sql(
    "SELECT name, url FROM 'hf://datasets/VoeTheDon/testing-wiki-structured/data/*.parquet' LIMIT 10"
)
```

```python
# pyarrow (hf:// is not a native pyarrow URI scheme, so pass the
# fsspec-compatible filesystem from huggingface_hub explicitly)
import pyarrow.dataset as pads
from huggingface_hub import HfFileSystem

table = pads.dataset(
    "datasets/VoeTheDon/testing-wiki-structured/data/",
    format="parquet",
    filesystem=HfFileSystem(),
).to_table()
```
Four columns are stored as JSON-encoded strings (see Schema notes below). Decode on read:
```python
import json

row = next(iter(ds))  # ds from the streaming example above

sections = json.loads(row["sections"])    # list[dict]
infoboxes = json.loads(row["infoboxes"])  # list[dict]
tables = json.loads(row["tables"])        # list[dict]

for ref in row["references"]:
    ref_meta = json.loads(ref["metadata"])  # dict
```
If you need the canonical Arrow schema explicitly (e.g. for a validation step in a downstream pipeline), it's published as `schema.json` at the repo root in Arrow IPC format:

```python
import json

import pyarrow as pa

with open("schema.json") as f:
    payload = json.load(f)

schema = pa.ipc.read_schema(pa.py_buffer(bytes.fromhex(payload["arrow_ipc_hex"])))
```
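A downstream validation step might then assert that each shard matches the pin. A sketch, assuming the Parquet files have been downloaded locally:

```python
import pyarrow.parquet as pq

# Compare a shard's embedded schema against the pinned one.
shard_schema = pq.read_schema("data/cywiki_namespace_0_0.parquet")
assert shard_schema.equals(schema), "shard schema drifted from schema.json"
```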
## Schema notes

Four fields are JSON-encoded rather than stored as native Arrow structs:

| Field | Why |
|---|---|
| `sections` | Recursive `has_parts[].has_parts[]…` nesting reaches depth ~100 in real articles. Apache Arrow's C Data Interface caps struct recursion at 64 levels; `datasets.load_dataset()` round-trips schemas through that interface and rejects deeper structures. |
| `infoboxes` | Same recursive shape as `sections`. |
| `tables` | Shallower today, but the same recursive structure; pre-emptively encoded so future upstream changes don't break the schema. |
| `references[].metadata` | Open dict of Wikipedia citation-template parameters (`rft.jtitle`, `chapter-url`, `first1`, positional `"1"`, …). Hundreds of ad-hoc keys appear across articles, producing a different inferred Arrow struct per shard, which breaks cross-shard schema unification. |
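Because `sections` decodes to that recursive `has_parts` tree, traversal takes a few lines. A minimal sketch (`sections` is the decoded value from the example above; field names follow the upstream Structured Contents shape):

```python
def walk_parts(parts, depth=0):
    """Yield (depth, name) for every node in a has_parts tree."""
    for part in parts:
        yield depth, part.get("name")
        yield from walk_parts(part.get("has_parts") or [], depth + 1)

for depth, name in walk_parts(sections):
    print("  " * depth + str(name))
```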
All other fields (`name`, `url`, `abstract`, `event`, `license[]`, `version`, `image`, `main_entity`, `references[]` excluding `metadata`, etc.) retain native Arrow struct/list types and are queryable without decoding.

Struct field order is alphabetised recursively so every shard has a byte-identical embedded schema; pyarrow's JSON inference otherwise captures keys in encounter order, which makes Arrow treat `struct<a, b>` and `struct<b, a>` as different types.
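The order sensitivity is easy to demonstrate directly in pyarrow:

```python
import pyarrow as pa

a_b = pa.struct([("a", pa.string()), ("b", pa.int64())])
b_a = pa.struct([("b", pa.int64()), ("a", pa.string())])

# Field order is part of the struct type, so these are unequal.
assert not a_b.equals(b_a)
```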
## Known limitations

- Article categories are not included. The upstream Structured Contents snapshot does not expose categories. If you need them, query the regular WME Snapshots API or the MediaWiki API:Categories endpoint (see the sketch after this list).
- The license passes through the upstream Wikipedia license (CC-BY-SA-4.0) for article text.
- This is a Beta-tier upstream snapshot: the schema can change between WME releases. Each version of this dataset re-pins `schema.json`; consumers who want a stable contract should pin to a specific dataset revision.
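A hedged sketch of fetching categories for one page from the Welsh Wikipedia's MediaWiki API (the page title is a placeholder; this data is not part of this dataset):

```python
import requests

resp = requests.get(
    "https://cy.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "prop": "categories",
        "titles": "Cymru",  # placeholder title
        "format": "json",
    },
    timeout=30,
)
print(resp.json())
```

Pinning a revision is a one-liner with `datasets` (the value below is a placeholder; use a specific commit hash from this repo's history):

```python
from datasets import load_dataset

ds = load_dataset(
    "VoeTheDon/testing-wiki-structured",
    split="train",
    revision="main",  # replace with a specific commit hash to pin
)
```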