---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: token_count
    dtype: int64
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  - name: token_ids
    sequence: int64
  splits:
  - name: train
    num_bytes: 83423842611
    num_examples: 2494618
  download_size: 32521124201
  dataset_size: 83423842611
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- en
---
# Processed FineWeb-Edu Dataset
**Dataset Name on Hugging Face**: [PursuitOfDataScience/processed-fineweb-edu](https://huggingface.co/datasets/PursuitOfDataScience/processed-fineweb-edu)
## Overview
This dataset is a processed version of FineWeb-Edu, intended for language model training and NLP research.
Each sample has been tokenized and truncated to a fixed block size (2048), so it is ready for pre-training or evaluation with transformer-based language models.
## Source Dataset
- **Name**: FineWeb-Edu
- **Description**: A dataset focused on educational text extracted from the web, designed for language modeling and educational NLP tasks.
- **Link**: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu
- **Version**: CC-MAIN-2024-10
## Processing Steps
The dataset was processed using the [Hugging Face Datasets library](https://github.com/huggingface/datasets) and a Hugging Face tokenizer. The primary steps include:
1. **Tokenization**: Each `text` sample is encoded using the tokenizer’s `.encode()` method.
2. **Truncation**: Token sequences are truncated to `block_size + 1` tokens (2049 for a block size of 2048).
3. **Filtering**: Any sample with fewer than `block_size + 1` tokens is removed.
4. **Saving**: The processed data is saved to disk using `ds.save_to_disk(processed_dir)`.
Below is the code excerpt used to perform these steps:
```python
import os

from datasets import load_dataset, load_from_disk


def load_nonstream_data(data_files, hf_tokenizer, block_size, num_proc=128):
    """
    Loads the entire dataset into memory, either from a cached processed
    directory or by processing it in parallel if not yet cached.
    Returns a list of token ID sequences.
    """
    processed_dir = "processed_data/tokenized_data"
    if os.path.exists(processed_dir):
        print(f"Loading cached dataset from '{processed_dir}'...")
        ds = load_from_disk(processed_dir)
        return ds["token_ids"]

    print("No cached dataset found. Processing in parallel...")
    ds_dict = load_dataset("arrow", data_files=data_files, streaming=False)
    ds = ds_dict["train"] if "train" in ds_dict else ds_dict

    def tokenize_and_truncate(example):
        text = example["text"] if "text" in example else ""
        token_ids = hf_tokenizer.encode(text)
        # Drop samples shorter than block_size + 1 tokens; keep the rest,
        # truncated to exactly block_size + 1.
        if len(token_ids) < block_size + 1:
            return {"token_ids": None}
        return {"token_ids": token_ids[: block_size + 1]}

    ds = ds.map(tokenize_and_truncate, batched=False, num_proc=num_proc)
    ds = ds.filter(lambda ex: ex["token_ids"] is not None, num_proc=num_proc)
    if "text" in ds.column_names:
        ds = ds.remove_columns(["text"])

    os.makedirs(os.path.dirname(processed_dir), exist_ok=True)
    ds.save_to_disk(processed_dir)
    print(f"Processed dataset saved to '{processed_dir}'.")
    return ds["token_ids"]
```
## Dataset Structure
- **Columns**:
- `token_ids`: A list of token IDs representing a truncated text segment.
- **Splits**:
- This dataset is provided as a single split named `train`.
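Each `token_ids` entry holds `block_size + 1` tokens, which is convenient for next-token prediction: the sequence splits into an input and a one-position-shifted target, each of length `block_size`. A minimal sketch with toy values (the function name and numbers are illustrative, not part of the dataset's tooling):

```python
def split_input_target(token_ids):
    """Split a (block_size + 1)-length sequence into a model input and
    next-token-prediction targets, each of length block_size."""
    inputs = token_ids[:-1]   # tokens 0 .. block_size - 1
    targets = token_ids[1:]   # tokens 1 .. block_size (shifted by one)
    return inputs, targets

# Toy example with a 5-token sequence (block_size = 4):
inputs, targets = split_input_target([10, 11, 12, 13, 14])
# inputs  -> [10, 11, 12, 13]
# targets -> [11, 12, 13, 14]
```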
## Intended Use & Applications
- **Language Modeling**: Suitable for GPT-style or other auto-regressive models, focusing on educational text.
- **Fine-Tuning**: Can be used to fine-tune existing models on educational text.
- **Research**: Useful for experimentation in NLP tasks such as text generation.
## How to Load
You can load this dataset directly from Hugging Face using the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("PursuitOfDataScience/processed-fineweb-edu")
print(dataset)
```
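Because every retained sample has the same fixed length, batching reduces to grouping rows. The sketch below uses plain Python and toy data purely for illustration; in practice you would more likely use `ds.with_format("torch")` or a PyTorch `DataLoader`:

```python
def make_batches(sequences, batch_size):
    """Group fixed-length token sequences into batches of batch_size,
    dropping any incomplete final batch."""
    batches = []
    for i in range(0, len(sequences) - batch_size + 1, batch_size):
        batches.append(sequences[i : i + batch_size])
    return batches

# Toy data: five 3-token sequences, batch_size = 2 -> two full batches,
# with the final incomplete batch dropped.
toy = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15]]
batches = make_batches(toy, 2)
# batches -> [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]
```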