---
license: cc-by-sa-4.0
task_categories:
- text-generation
- token-classification
- feature-extraction
language:
- en
size_categories:
- 100K<n<1M
pretty_name: 'Stanza-2: Geometry-Aware WikiText'
---
# Dataset Card for Stanza-2

## Dataset Description
Stanza-2 is a structurally pristine, mathematically verified NLP dataset designed specifically for multi-task language modeling, custom tokenizer training, and mechanistic interpretability research.
It is a rigorously modernized and annotated derivative of the wikitext-2-raw-v1 corpus. By utilizing the Stanford NLP Stanza pipeline, every word in the corpus has been explicitly mapped to its grammatical, syntactic, and semantic function. Crucially, Stanza-2 preserves document geometry, explicitly labeling Markdown headers to support structure-aware neural architectures.
- **Curated by:** Jonathan R. Belanger (Exorobourii LLC)
- **Language:** English (`en`)
- **License:** CC-BY-SA-4.0
- **Total Rows:** 101,455 sentences (~2.46 million tokens)
## Dataset Structure
Stanza-2 abandons flat-text formatting in favor of Parallel Arrays. Each row in the dataset represents a single sentence. The linguistic features of that sentence are stored in perfectly aligned, equal-length arrays, guaranteeing 1:1 token-to-tag mapping.
### Schema

- `chunk_id` (int64): The positional ID of the chunk within the document stream.
- `sentence_id` (int64): The positional ID of the sentence within its chunk.
- `raw_text` (string): The cleaned, normalized raw string of the sentence.
- `is_header` (bool): `True` if the sentence is a structural document header.
- `section_level` (int64): The Markdown depth of the header (1-6); `0` if not a header.
- `tokens` (list[str]): The tokenized string sequence.
- `lemmas` (list[str]): The base morphological root of each token.
- `upos` (list[str]): Universal Part-of-Speech tags.
- `xpos` (list[str]): Treebank-specific Part-of-Speech tags.
- `head` (list[int64]): The 1-based index of the syntactic parent (dependency graph).
- `deprel` (list[str]): The syntactic dependency relation to the head token.
- `ner` (list[str]): Named Entity Recognition tags in explicit BIOES format.
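The 1:1 token-to-tag guarantee means every per-token array in a row has the same length as `tokens`. A minimal sketch of that invariant, using a hypothetical example row (not taken from the dataset):

```python
# Hypothetical example row illustrating the parallel-array schema.
row = {
    "tokens": ["Stanza", "parses", "text", "."],
    "lemmas": ["Stanza", "parse", "text", "."],
    "upos":   ["PROPN", "VERB", "NOUN", "PUNCT"],
    "xpos":   ["NNP", "VBZ", "NN", "."],
    "head":   [2, 0, 2, 2],
    "deprel": ["nsubj", "root", "obj", "punct"],
    "ner":    ["S-ORG", "O", "O", "O"],
}

PARALLEL = ["tokens", "lemmas", "upos", "xpos", "head", "deprel", "ner"]

def is_aligned(row: dict) -> bool:
    """True if every parallel array matches the length of `tokens`."""
    n = len(row["tokens"])
    return all(len(row[col]) == n for col in PARALLEL)

print(is_aligned(row))  # → True
```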
## Methodology & Provenance

### 1. Cryptographic Ingestion
To prevent silent upstream updates from compromising downstream reproducibility, this dataset was built from a cryptographically verified snapshot of the ggml-org/ci raw mirror.
- **Source Archive:** `wikitext-2-raw-v1.zip`
- **SHA-256 Checksum:** `ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11`
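To reproduce the verification step, the archive's digest can be checked against the checksum above. A minimal sketch (the function name is illustrative, not part of any published tooling):

```python
import hashlib

EXPECTED_SHA256 = "ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11"

def verify_archive(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """Stream the file in 1 MiB blocks and compare its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest() == expected
```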
### 2. The Normalization Ledger
The legacy WikiText corpus contains archaic spacing and tokenization artifacts. Prior to semantic enrichment, the text underwent strict, idempotent modernization passes to ensure sub-word tokenizers are not biased by historical formatting:
- **Hyphenation:** Legacy `@-@` artifacts were strictly mapped to standard hyphens (`-`).
- **Punctuation Alignment:** Floating terminal punctuation (e.g., `word ,`) and floating brackets were realigned to their preceding/succeeding semantic tokens. Note: vectorized backreferences were routed through standard Python CPU processing to bypass known `libcudf` regex injection vulnerabilities.
- **Structural Preservation:** Legacy `= Header =` formats were mapped to standard Markdown (`# Header`) using strict descending-order regex (H6 down to H1) to prevent partial matching and preserve true document hierarchy.
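The passes above can be sketched as a small idempotent pipeline. This is an illustrative simplification, not the actual Stanza-2 preprocessing code; the regexes shown are assumptions about how each pass might be expressed. Note the header loop runs from the deepest level down, so `=== Plot ===` is consumed as H3 before the H2 pattern can partially match it:

```python
import re

def normalize(text: str) -> str:
    """Simplified sketch of the modernization passes (idempotent)."""
    # Hyphenation: map legacy ' @-@ ' artifacts to a plain hyphen.
    text = re.sub(r"\s*@-@\s*", "-", text)
    # Punctuation alignment: attach floating terminal punctuation
    # to the preceding token ('word ,' -> 'word,').
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)
    # Structural preservation: map '= Header =' to Markdown, matching
    # the deepest levels first (H6 down to H1) to avoid partial matches.
    for depth in range(6, 0, -1):
        marker = "=" * depth
        pattern = rf"^\s*{marker}\s*(.*?)\s*{marker}\s*$"
        text = re.sub(pattern, "#" * depth + r" \1", text, flags=re.MULTILINE)
    return text
```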
### 3. Graph Integrity Protocol
Following the Stanza depparse enrichment, the resulting Parquet files were subjected to a microscopic mathematical audit. The Stanza-2 dataset guarantees 100% structural integrity:
- **Dimensional Symmetry:** Every parallel array (`tokens`, `upos`, `ner`, etc.) within a row is guaranteed to be the exact same length.
- **Root Singularity:** Every sentence possesses exactly one dependency root (`head == 0`).
- **Graph Bounds:** No dependency head points to an index outside the bounds of the sentence.

Note: during the final Phase 4b integrity audit, 8 of the ~101,463 sentences in the training split violated graph bounds or root singularity due to extreme source fragmentation. These 8 rows were dropped to preserve dataset-wide mathematical validity.
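The root-singularity and graph-bounds checks reduce to two conditions on a sentence's `head` array. A minimal sketch (the function is illustrative, not the published audit code):

```python
def audit_sentence(head: list) -> bool:
    """True iff the head array has exactly one root and all heads in bounds.

    Heads are 1-based parent indices; 0 marks the dependency root.
    """
    n = len(head)
    one_root = head.count(0) == 1
    in_bounds = all(0 <= h <= n for h in head)
    return one_root and in_bounds
```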
### 4. Structural Grammar Baseline
Analysis of the wiki.train split reveals exactly 451 unique (UPOS, DepRel) structural combinations across ~2.46 million tokens, demonstrating a highly rigid grammatical scaffold suitable for entropy reduction in custom tokenizer design.
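The 451-combination figure can be reproduced by counting distinct (UPOS, DepRel) pairs over the aligned arrays. A sketch using tiny hypothetical rows in place of the real split:

```python
from collections import Counter

def structural_grammar(rows) -> Counter:
    """Count (UPOS, DepRel) pairs across all sentences."""
    pairs = Counter()
    for row in rows:
        pairs.update(zip(row["upos"], row["deprel"]))
    return pairs

# Hypothetical rows, not taken from the dataset:
rows = [
    {"upos": ["PROPN", "VERB", "PUNCT"], "deprel": ["nsubj", "root", "punct"]},
    {"upos": ["NOUN", "VERB", "PUNCT"], "deprel": ["nsubj", "root", "punct"]},
]
print(len(structural_grammar(rows)))  # → 4 unique combinations
```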
## Usage
Because the dataset uses PyArrow-backed lists for parallel arrays, loading it into standard ML pipelines is highly efficient:
```python
import pandas as pd

df = pd.read_parquet("hf://datasets/EXOROBOURII/Stanza-Wikitext-2/wiki.train.enriched.parquet")

# Example: accessing perfectly aligned tokens and their dependency relations
first_sentence_tokens = df.iloc[0]["tokens"]
first_sentence_relations = df.iloc[0]["deprel"]
```
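Because `head` stores 1-based parent indices, a sentence's dependency tree can be rebuilt directly from the parallel arrays. A minimal sketch over a hypothetical sentence (not taken from the dataset):

```python
def dependency_edges(tokens, head, deprel):
    """Yield (child, relation, parent) triples; the root's parent is 'ROOT'."""
    for i, (h, rel) in enumerate(zip(head, deprel)):
        parent = "ROOT" if h == 0 else tokens[h - 1]
        yield (tokens[i], rel, parent)

# Hypothetical sentence:
tokens = ["Stanza", "parses", "text", "."]
head   = [2, 0, 2, 2]
deprel = ["nsubj", "root", "obj", "punct"]

for edge in dependency_edges(tokens, head, deprel):
    print(edge)  # e.g. ('Stanza', 'nsubj', 'parses')
```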