---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: string
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
    - name: token_ids
      sequence: int64
  splits:
    - name: train
      num_bytes: 83423842611
      num_examples: 2494618
  download_size: 32521124201
  dataset_size: 83423842611
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-generation
language:
  - en
---
# Processed FineWeb-Edu Dataset

**Dataset Name on Hugging Face:** `PursuitOfDataScience/processed-fineweb-edu`
## Overview

This dataset is a processed version of the FineWeb-Edu dataset, intended for language model training and NLP research. Each sample has been tokenized and truncated to a fixed block size (2048 tokens), preparing the data for pre-training or evaluation with transformer-based language models.
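Since every retained sample carries exactly `block_size + 1 = 2049` token IDs (shorter samples are filtered out, longer ones truncated), the dataset's total token budget follows directly from the example count in the metadata. A quick back-of-the-envelope check:

```python
num_examples = 2_494_618       # "num_examples" from the dataset metadata
tokens_per_example = 2048 + 1  # block_size + 1 tokens kept per sample
total_tokens = num_examples * tokens_per_example
print(f"{total_tokens:,}")     # roughly 5.1 billion tokens
```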
Source Dataset
- Name: FineWeb-Edu
- Description: A dataset focused on educational text extracted from the web, designed for language modeling and educational NLP tasks.
- Link: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu
- Version: CC-MAIN-2024-10
## Processing Steps
The dataset was processed using the Hugging Face Datasets library and a Hugging Face tokenizer. The primary steps include:
- **Tokenization:** Each `text` sample is encoded using the tokenizer's `.encode()` method.
- **Truncation:** Token sequences are truncated to `block_size + 1` tokens.
- **Filtering:** Any sample with fewer than `block_size + 1` tokens is removed.
- **Saving:** The processed data is saved to disk using `ds.save_to_disk(processed_dir)`.
Below is the code excerpt used to perform these steps:
```python
import os

from datasets import load_dataset, load_from_disk


def load_nonstream_data(data_files, hf_tokenizer, block_size, num_proc=128):
    """
    Loads the entire dataset in memory either from a cached processed directory
    or processes it in parallel if not yet cached.
    Returns a list of token ID sequences.
    """
    processed_dir = "processed_data/tokenized_data"
    if os.path.exists(processed_dir):
        print(f"Loading cached dataset from '{processed_dir}'...")
        ds = load_from_disk(processed_dir)
        tokenized_data = ds["token_ids"]
        return tokenized_data

    print("No cached dataset found. Processing in parallel...")
    ds_dict = load_dataset("arrow", data_files=data_files, streaming=False)
    ds = ds_dict["train"] if "train" in ds_dict else ds_dict

    def tokenize_and_truncate(example):
        # Encode the raw text; a missing "text" field falls back to "".
        text = example["text"] if "text" in example else ""
        token_ids = hf_tokenizer.encode(text)
        # Drop samples that are too short to fill a full training block.
        if len(token_ids) < block_size + 1:
            return {"token_ids": None}
        token_ids = token_ids[:block_size + 1]
        return {"token_ids": token_ids}

    ds = ds.map(
        tokenize_and_truncate,
        batched=False,
        num_proc=num_proc,
    )
    ds = ds.filter(lambda ex: ex["token_ids"] is not None, num_proc=num_proc)
    if "text" in ds.column_names:
        ds = ds.remove_columns(["text"])

    os.makedirs(os.path.dirname(processed_dir), exist_ok=True)
    ds.save_to_disk(processed_dir)
    print(f"Processed dataset saved to '{processed_dir}'.")
    tokenized_data = ds["token_ids"]
    return tokenized_data
```
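Sequences are kept at `block_size + 1` tokens because autoregressive training consumes each sequence as `block_size` inputs paired with `block_size` next-token targets. A minimal sketch of that split (the helper name is illustrative, not part of the dataset's own code):

```python
def split_input_target(token_ids):
    """Turn a (block_size + 1)-length sequence into an autoregressive
    training pair: targets are the inputs shifted left by one position."""
    inputs = token_ids[:-1]   # tokens 0 .. block_size - 1
    targets = token_ids[1:]   # tokens 1 .. block_size
    return inputs, targets

# Toy example with block_size = 4, so 5 tokens are stored per sample:
x, y = split_input_target([10, 11, 12, 13, 14])
print(x)  # [10, 11, 12, 13]
print(y)  # [11, 12, 13, 14]
```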
## Dataset Structure

Columns:
- `token_ids`: A list of token IDs representing a truncated text segment.
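Given the truncation and filtering steps described above, every row's `token_ids` should hold exactly `block_size + 1` entries. A small sanity check one could run on loaded rows (the function name and `BLOCK_SIZE` constant are illustrative assumptions):

```python
BLOCK_SIZE = 2048  # the block size stated in this card

def has_expected_length(token_ids, block_size=BLOCK_SIZE):
    """True if the row holds exactly block_size + 1 token IDs, as the
    truncation/filtering pipeline guarantees."""
    return len(token_ids) == block_size + 1

print(has_expected_length(list(range(2049))))  # True
print(has_expected_length(list(range(512))))   # False: would have been filtered out
```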
Splits:
- This dataset is provided as a single split named `train`.
## Intended Use & Applications
- Language Modeling: Suitable for GPT-style or other auto-regressive models, focusing on educational text.
- Fine-Tuning: Can be used to fine-tune existing models on educational text.
- Research: Useful for experimentation in NLP tasks such as text generation.
## How to Load

You can load this dataset directly from Hugging Face using the `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("PursuitOfDataScience/processed-fineweb-edu")
print(dataset)
```