---
pretty_name: Annotated Wikipedia 2016
license: cc-by-sa-3.0
task_categories:
- text-generation
- token-classification
- text-retrieval
language:
- en
tags:
- wikipedia
- entity-linking
- wikification
- hyperlinks
- pretraining
size_categories:
- 1M<n<10M
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/*.parquet"
---

# Annotated Wikipedia 2016
|
|
English Wikipedia (~2016 snapshot) processed into JSON, with **inline hyperlinks preserved as gold-aligned entity-link annotations**. Each article carries its full plain text plus a list of `(surface_form, target_uri, character_offset)` tuples — one per wikilink in the source.
|
|
This release simply re-shards the original ~5 GB stored zip (`extracted/AA/wiki00` … `extracted/NN/wiki47`, 35,148 JSONL chunk files) into parquet with a unified pyarrow schema. No filtering, text normalisation, or entity-set restriction is applied. Offsets are character offsets into `text`, carried over unchanged from the source and verified to round-trip exactly.
|
|
Mirror of the raw zip: [alvations/stash · wikipedia-json/](https://huggingface.co/datasets/alvations/stash/tree/main/wikipedia-json).
|
|
## Usage

```python
from datasets import load_dataset

ds = load_dataset("alvations/annotated-wiki-2016", split="train", streaming=True)
ex = next(iter(ds))
print(ex["title"], "—", len(ex["annotations"]), "links")
# Anarchism — 639 links

# offsets line up with text spans
for a in ex["annotations"][:3]:
    s = ex["text"][a["offset"]:a["offset"] + len(a["surface_form"])]
    assert s == a["surface_form"]
    print(a["surface_form"], "->", a["uri"])
```
|
|
Streaming is recommended — the full train split spans several gigabytes and holds over a million rows.
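
For quick experiments you can bound the stream instead of iterating the whole split. A minimal sketch using `itertools.islice` (the streaming dataset's own `.take()` works equally well):

```python
from itertools import islice

from datasets import load_dataset

ds = load_dataset("alvations/annotated-wiki-2016", split="train", streaming=True)

# Peek at the first 100 articles without materialising the full split.
for ex in islice(ds, 100):
    print(ex["title"], len(ex["annotations"]))
```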
|
|
## Schema

| Column | Type | Description |
| ------------- | --------------------------------------------------------------- | ----------- |
| `title` | `string` | Article title, URL-decoded from the wiki URL path. |
| `url` | `string` | Original Wikipedia URL, e.g. `http://en.wikipedia.org/wiki/Anarchism`. |
| `wiki_id` | `int64` | Wikipedia page ID (the source dump stored it as a 1-element list — flattened here). |
| `text` | `string` | Full article plain text (no markup, no infoboxes). |
| `annotations` | `list<struct<surface_form: string, uri: string, offset: int32>>` | One entry per outgoing wikilink in the article. |
| `language` | `string` | Always `"en"` in this release. |
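
Because `annotations` is stored as a native Arrow `list<struct>`, a single shard can also be inspected directly with pyarrow. A minimal sketch, assuming the shard naming described in the Sharding section below:

```python
import pyarrow.parquet as pq
from huggingface_hub import hf_hub_download

# Fetch one shard and inspect its Arrow schema and first row.
path = hf_hub_download(
    repo_id="alvations/annotated-wiki-2016",
    filename="data/AA.parquet",
    repo_type="dataset",
)
table = pq.read_table(path)
print(table.schema)  # includes: annotations: list<struct<surface_form, uri, offset>>

first = table.slice(0, 1).to_pylist()[0]
print(first["title"], len(first["annotations"]), "links")
```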
| |
### Annotation semantics

Each annotation marks one outgoing hyperlink in the source wikitext:

- `surface_form` — the literal text span as it appears in the article (the link's anchor text).
- `uri` — the linked target page, in Wikipedia URL form with underscores, e.g. `Political_philosophy`. Use as-is for joining against page-id tables; no namespace prefix.
- `offset` — zero-based **character** offset into `text` where `surface_form` begins. `text[offset : offset + len(surface_form)] == surface_form` is guaranteed.
|
|
Anchors are not deduplicated, redirected, or filtered — multiple links to the same target appear separately, and dangling/redirect targets are kept verbatim.
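
A small sketch that makes this visible by tallying link targets within a single streamed article:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("alvations/annotated-wiki-2016", split="train", streaming=True)
ex = next(iter(ds))

# Repeated links to the same target page are kept as separate annotations,
# so counts above 1 are expected.
targets = Counter(a["uri"] for a in ex["annotations"])
for uri, n in targets.most_common(5):
    print(uri, n)
```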
|
|
## Sharding

Parquet shards are grouped by the source zip's two-letter subdirectory (`extracted/<XX>/`):

```
data/AA.parquet
data/AB.parquet
...
data/NN.parquet (352 shards total)
```

Each shard is ~25 MB compressed (snappy), holding roughly 1,800–4,500 articles depending on average article length in that bucket.
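
If you only need a small, fully materialised slice, `load_dataset` can also be pointed at a single shard by path. A sketch, using the first shard listed above:

```python
from datasets import load_dataset

# Load a single ~25 MB shard into memory instead of streaming the full split.
shard = load_dataset(
    "alvations/annotated-wiki-2016",
    data_files="data/AA.parquet",
    split="train",
)
print(shard)
```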
|
|
## Source

Re-packaged from the upstream zip `wikipedia-json.zip` (5.0 GB stored / 18 GB uncompressed), which was itself the output of processing an English Wikipedia dump with a wikiextractor-style pipeline that retained wikilinks as offset annotations. The dataset is named `2016` after the apparent dump year of the upstream archive.
|
|
## License

Wikipedia text is distributed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) (with parts also under the GFDL). Attribution: contributors to the English Wikipedia. Reuse must preserve attribution and ShareAlike.
|
|
## Provenance of this release

- Source: `wikipedia-json.zip` — mirrored at [`alvations/stash · wikipedia-json/wikipedia-json.zip`](https://huggingface.co/datasets/alvations/stash/tree/main/wikipedia-json).
- Conversion: streamed via `zipfile.ZipFile.open()` without on-disk extraction; one parquet shard per source subdir; nested annotations stored as `list<struct>` for native pyarrow / Polars / DuckDB consumption (see the sketch below).
- No row filtering, deduplication, redirect resolution, or text cleanup applied.
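
A minimal sketch of that conversion step, for orientation only. The field names in the source JSONL (`id`, `title`, `url`, `text`, `annotations`) are assumptions inferred from the schema above, and memory handling is simplified to one subdirectory bucket at a time:

```python
import json
import pathlib
import zipfile
from itertools import groupby

import pyarrow as pa
import pyarrow.parquet as pq

SCHEMA = pa.schema([
    ("title", pa.string()),
    ("url", pa.string()),
    ("wiki_id", pa.int64()),
    ("text", pa.string()),
    ("annotations", pa.list_(pa.struct([
        ("surface_form", pa.string()),
        ("uri", pa.string()),
        ("offset", pa.int32()),
    ]))),
    ("language", pa.string()),
])

pathlib.Path("data").mkdir(exist_ok=True)

with zipfile.ZipFile("wikipedia-json.zip") as zf:
    # Chunk files look like extracted/AA/wiki00 ... extracted/NN/wiki47.
    members = sorted(n for n in zf.namelist() if "/" in n and not n.endswith("/"))
    for subdir, names in groupby(members, key=lambda n: n.split("/")[-2]):
        rows = []
        for name in names:
            with zf.open(name) as fh:  # streamed from the zip, never extracted to disk
                for line in fh:
                    doc = json.loads(line)
                    rows.append({
                        "title": doc["title"],
                        "url": doc["url"],
                        "wiki_id": int(doc["id"][0]),  # 1-element list, flattened
                        "text": doc["text"],
                        "annotations": doc["annotations"],
                        "language": "en",
                    })
        pq.write_table(pa.Table.from_pylist(rows, schema=SCHEMA), f"data/{subdir}.parquet")
```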
|
|