# FineWeb-Edu-Sentence-Tokenized (Next Sentence Prediction)
A specialized, sentence-split, and tokenized version of the 100BT sample of the finest educational data the web has to offer, designed specifically for Next Sentence Prediction (NSP) pretraining.
Original FineWeb-Edu Paper: https://arxiv.org/abs/2406.17557
## What is this dataset?
This dataset is a derived, heavily processed version of the 100BT sample from the original π FineWeb-Edu dataset. While the original FineWeb-Edu provides massive, document-level educational text, this specific repository is meticulously formatted to train language models on Next Sentence Prediction (NSP) and related sequential tasks.
We have taken the raw text documents from the 100 billion token sample, broken them down into individual sentences, preserved their order, and pre-tokenized them using the `Qwen/Qwen3-0.6B` tokenizer. This eliminates the need for complex, on-the-fly data collation pipelines during training, allowing researchers to load the data and begin pretraining immediately.
## Key Modifications from Original FineWeb-Edu

- **Sample Size Constraint:** We exclusively used the `sample-100BT` configuration of the original dataset to create a manageable yet highly representative corpus.
- **Sentence Splitting:** Every document was processed through a robust sentence tokenizer; paragraphs and long texts were split into an ordered array of discrete sentences.
- **Sequential Indexing (`sent_idx`):** To facilitate NSP tasks, every sentence within a parent document is assigned a positional index (`sent_idx`). The first sentence of a document is index `0`, the second is `1`, and so on. This allows data loaders to easily fetch consecutive sentence pairs (sentence *n*, sentence *n+1*) for training.
- **Pre-Tokenization (`token_ids`):** Every sentence has been pre-processed using the `Qwen/Qwen3-0.6B` tokenizer. The dataset stores lists of integer token IDs rather than requiring raw-text tokenization at runtime.
## Dataset Structure and Features
The dataset is stored in the efficient `.parquet` format. Upon loading, each row represents a single tokenized sentence belonging to a larger parent document.
**Feature Definitions:**
- `id` (string): A unique identifier (URN/UUID) linking the sentence back to its original parent document in the FineWeb-Edu dataset. All sentences originating from the same document share the exact same `id`.
- `sent_idx` (int64): The zero-based positional index of the sentence within its parent document. Crucial for determining sentence order.
- `sentence` (string): The raw, human-readable text of the extracted sentence.
- `token_ids` (list[int32]): An array of integer token IDs corresponding to the `sentence`, generated using the `Qwen/Qwen3-0.6B` tokenizer vocabulary.
## Cleaned Dataset Statistics (Qwen Tokenized)
This dataset has been strictly filtered to ensure high-quality training data for VSA and NSP tasks.
| Metric | Value |
|---|---|
| Total Content Tokens | 41.6 Billion (excluding EOS) |
| Total Sentences | 1.60 Billion |
| Total Documents | 71.1 Million |
| Tokenizer | Qwen/Qwen3-0.6B |
| Vocab Size | 151,669 (includes extra EOS) |
## Filtering & Cleaning Criteria
To ensure data quality and stability for training, the following filters were applied to the original 100BT sample:
- Sequence Length Cap: Any document containing a sentence longer than 96 tokens was removed.
- Context Limit: Any document with more than 64 sentences was removed.
- Minimum Depth: Documents with only 1 sentence were removed.
- Artifact Removal:
  - Triple Repeats: Documents with 3+ consecutive identical sentences were dropped.
  - Bad Characters: Documents containing the Unicode Replacement Character (U+FFFD) were dropped.
These strict filters reduced the original ~3.5B sentences to a highly clean core of 1.6B sentences.
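As a hypothetical sketch of how these document-level filters compose (the `keep_document` helper and its argument names are illustrative; the thresholds mirror the criteria listed above):

```python
REPLACEMENT_CHAR = "\ufffd"  # Unicode Replacement Character

def keep_document(sentences, token_lists,
                  max_sentence_tokens=96, max_sentences=64, min_sentences=2):
    """Return True if one document passes all cleaning filters.

    `sentences` is the document's list of sentence strings;
    `token_lists` is the parallel list of token-ID lists.
    """
    # Context limit (<= 64 sentences) and minimum depth (>= 2 sentences)
    if not (min_sentences <= len(sentences) <= max_sentences):
        return False
    # Sequence length cap: no sentence longer than 96 tokens
    if any(len(t) > max_sentence_tokens for t in token_lists):
        return False
    # Bad characters: drop documents containing U+FFFD
    if any(REPLACEMENT_CHAR in s for s in sentences):
        return False
    # Triple repeats: 3+ consecutive identical sentences
    for a, b, c in zip(sentences, sentences[1:], sentences[2:]):
        if a == b == c:
            return False
    return True
```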
### Visualizing the Data Structure
As seen in the provided dataset preview, the data is organized sequentially:
**Example:**

- Row 0: `id: <urn:uuid:00000045...>`, `sent_idx: 0`, `sentence: "The virus has been identified..."`, `token_ids: [383, 9471, ...]`
- Row 1: `id: <urn:uuid:00000045...>`, `sent_idx: 1`, `sentence: "At this time, it is unknown..."`, `token_ids: [1629, 428, ...]`
Because Row 0 and Row 1 share the same `id` and have sequential `sent_idx` values (0 and 1), a model can be trained to predict the `token_ids` of Row 1 given the `token_ids` of Row 0.
## Dataset Statistics & Context Awareness
### Important: Sentence Prediction vs. Token Prediction
It is crucial to understand that Next Sentence Prediction (NSP) fundamentally differs from standard continuous next-token prediction.
In standard language modeling, tokens are often predicted continuously across document boundaries. In this dataset, however, sentences from different documents are unrelated: a sentence from a history text has no semantic connection to a sentence from a geography text. Therefore, unlike token prediction, where models might attend across boundaries, you cannot expect a model to predict the first sentence of one document given the last sentence of another. They represent distinct, unrelated contexts.
**Implications for Training:**

- Context Reset: For each new document `id`, the model's context must start fresh. Do not concatenate sentences across different document `id`s.
- Hard Boundaries: Treat the transition between different `id`s as a hard boundary across which no information should flow.
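A minimal sketch of this boundary handling, assuming rows arrive in the dataset's sequential order (the `iter_documents` helper is illustrative, not a library API):

```python
def iter_documents(rows):
    """Yield one list of token-ID lists per document, never crossing an `id` boundary."""
    current_id, buffer = None, []
    for row in rows:
        if row["id"] != current_id:
            if buffer:
                yield buffer  # flush the previous document at the hard boundary
            current_id, buffer = row["id"], []
        buffer.append(row["token_ids"])
    if buffer:
        yield buffer  # flush the final document
```

Concatenation for model context then happens strictly *within* each yielded document, so no information flows across `id`s.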
## How to Load and Use the Dataset
This dataset is optimized for the Hugging Face `datasets` library. Because it is pre-tokenized, it is exceptionally fast to stream directly into a PyTorch or TensorFlow training loop.
### Loading via `datasets`
```python
from datasets import load_dataset

# Load the tokenized dataset.
# Note: streaming=True is recommended for a dataset of this size.
dataset = load_dataset("harithoppil/fine-edu-sentences", split="train", streaming=True)

# Iterate through the dataset
for row in dataset:
    document_id = row['id']
    sentence_index = row['sent_idx']
    raw_text = row['sentence']
    token_ids = row['token_ids']

    print(f"Doc: {document_id[:15]}... | Sent {sentence_index}: {raw_text[:50]}...")
    break
```
## Constructing Batches for Next Sentence Prediction (NSP)
To utilize this data for NSP, you must group rows by their `id` and sort them by `sent_idx`. Here is a conceptual approach to building an NSP data collator:
- Read sequentially: Read rows sequentially from the dataset.
- Verify continuation: Check whether `current_row['id'] == previous_row['id']` AND `current_row['sent_idx'] == previous_row['sent_idx'] + 1`.
- Create pairs: If both hold, you have a valid (Sentence A, Sentence B) pair for positive NSP examples.
- Create negative examples: For contrastive learning, randomly sample a sentence from a different `id` to act as a negative (Sentence A, Random Sentence) pair.
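The steps above can be sketched as follows (the `make_nsp_pairs` helper and its `neg_ratio` parameter are illustrative, not a library API; a real collator would stream rather than materialize the row list):

```python
import random

def make_nsp_pairs(rows, neg_ratio=0.5, seed=0):
    """Yield (sentence_a_ids, sentence_b_ids, is_next) triples for NSP training."""
    rng = random.Random(seed)
    rows = list(rows)  # materialized here for simple negative sampling
    prev = None
    for row in rows:
        # Verify continuation: same document, consecutive sentence index.
        if (prev is not None
                and row["id"] == prev["id"]
                and row["sent_idx"] == prev["sent_idx"] + 1):
            if rng.random() < neg_ratio:
                # Negative example: random sentence from a *different* document.
                candidates = [r for r in rows if r["id"] != prev["id"]]
                if candidates:
                    neg = rng.choice(candidates)
                    yield prev["token_ids"], neg["token_ids"], 0
            else:
                # Positive example: the true next sentence.
                yield prev["token_ids"], row["token_ids"], 1
        prev = row
```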
## Provenance: The Original FineWeb-Edu Curation
The following section details how the underlying educational text was originally sourced and filtered by the Hugging Face team before our sentence-tokenization process.
A new approach has recently emerged for filtering LLM training datasets: using synthetic data to develop classifiers that identify educational content. This technique was used in the training of Llama 3 and Phi-3.
To enhance FineWeb's quality, the creators developed an educational quality classifier using annotations generated by Llama3-70B-Instruct, creating FineWeb-Edu.
### Annotation & Classifier Training
The original creators used Llama3-70B-Instruct to score 500k FineWeb samples for educational quality on a scale of 0 to 5, focusing on grade-school and middle-school level knowledge to avoid heavily biasing towards highly technical papers.
They fine-tuned a BERT-like regression model based on Snowflake-arctic-embed using these annotations. By applying a score threshold of 3, they filtered out 92% of the raw web data, leaving a highly refined, deeply educational corpus.
Note: This current repository utilizes the 100 Billion Token sample from that finalized, filtered corpus.
## Considerations and Limitations
When utilizing this dataset, please be aware of the following constraints:
- Tokenizer Dependency: The `token_ids` column is strictly tied to the `Qwen/Qwen3-0.6B` tokenizer. If you are training a model with a different vocabulary (e.g., Llama's SentencePiece, or a custom BPE tokenizer), you must ignore the `token_ids` column and re-tokenize the `sentence` string column using your specific tokenizer.
- Sentence Splitting Artifacts: While standard sentence tokenizers are robust, web data is notoriously messy. You may encounter instances where abbreviations (e.g., "Dr.", "e.g.") or erratic punctuation caused premature or incorrect sentence splits.
- Loss of Cross-Document Context: By strictly splitting into sentences, models trained exclusively on this formatted data may excel at local coherence (sentence-to-sentence) but might struggle with long-range dependency modeling that requires full-document context.
- Inherited Biases: As this dataset is derived from FineWeb, which was sourced from CommonCrawl, it inherently contains the biases, toxicity, and potentially harmful viewpoints present on the open internet, despite the educational filtering.
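The re-tokenization caveat above can be sketched as a small helper; `retokenize` is illustrative, and `tokenizer` can be any object exposing an `encode(text) -> list[int]` method (e.g., a Hugging Face tokenizer):

```python
def retokenize(row, tokenizer):
    """Return a copy of `row` whose `token_ids` come from `tokenizer`,
    ignoring the shipped Qwen IDs."""
    return {**row, "token_ids": tokenizer.encode(row["sentence"])}

# With the `datasets` library, this could be applied lazily, e.g.:
#   dataset = dataset.map(lambda r: retokenize(r, my_tokenizer))
```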
## Citation Information
Please cite the original FineWeb-Edu paper if you utilize this derived dataset in your research:
```bibtex
@misc{lozhkov2024fineweb-edu,
  author = {Lozhkov, Anton and Ben Allal, Loubna and von Werra, Leandro and Wolf, Thomas},
  title = {FineWeb-Edu: the Finest Collection of Educational Content},
  year = 2024,
  url = {https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu},
  doi = {10.57967/hf/2497},
  publisher = {Hugging Face}
}
```