Update README.md

Added introduction to the dataset.

README.md CHANGED
@@ -34,4 +34,110 @@ configs:
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- en
splits:
- name: train
  num_bytes: 41746813043
  num_examples: 2494618
download_size: 9359508369
dataset_size: 41746813043
---

# Processed FineWeb-Edu Dataset

**Dataset Name on Hugging Face**: [PursuitOfDataScience/processed-fineweb-edu](https://huggingface.co/datasets/PursuitOfDataScience/processed-fineweb-edu)

## Overview

This dataset is a processed version of the FineWeb-Edu dataset, intended for language-model training and NLP research. It has been tokenized and truncated to a fixed block size (2048 tokens), preparing it for pre-training or evaluation with transformer-based language models.
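Truncating every sample to `block_size + 1` tokens means each row can be split into aligned input and target sequences for next-token prediction. A minimal sketch (the `split_input_target` helper is hypothetical, not part of this dataset's tooling):

```python
block_size = 2048

def split_input_target(token_ids):
    # A row of block_size + 1 tokens yields block_size aligned
    # (input, target) positions for next-token prediction.
    assert len(token_ids) == block_size + 1
    inputs = token_ids[:-1]   # tokens 0 .. block_size - 1
    targets = token_ids[1:]   # tokens 1 .. block_size
    return inputs, targets

sample = list(range(block_size + 1))
x, y = split_input_target(sample)
# x and y each have block_size elements, offset by one position
```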

## Source Dataset

- **Name**: FineWeb-Edu
- **Description**: a dataset of educational text extracted from the web, designed for language modeling and educational NLP tasks.
- **Link**: [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
- **Version**: CC-MAIN-2024-10
## Processing Steps

The dataset was processed using the [Hugging Face Datasets library](https://github.com/huggingface/datasets) and a Hugging Face tokenizer. The primary steps are:

1. **Tokenization**: each `text` sample is encoded with the tokenizer's `.encode()` method.
2. **Truncation**: token sequences are truncated to `block_size + 1` tokens.
3. **Filtering**: any sample with fewer than `block_size + 1` tokens is removed.
4. **Saving**: the processed data is saved to disk with `ds.save_to_disk(processed_dir)`.

Below is the code excerpt used to perform these steps:

```python
import os

from datasets import load_dataset, load_from_disk


def load_nonstream_data(data_files, hf_tokenizer, block_size, num_proc=128):
    """
    Loads the entire dataset in memory, either from a cached processed
    directory or by processing it in parallel if not yet cached.
    Returns a list of token ID sequences.
    """
    processed_dir = "processed_data/tokenized_data"
    if os.path.exists(processed_dir):
        print(f"Loading cached dataset from '{processed_dir}'...")
        ds = load_from_disk(processed_dir)
        return ds["token_ids"]

    print("No cached dataset found. Processing in parallel...")

    ds_dict = load_dataset("arrow", data_files=data_files, streaming=False)
    ds = ds_dict["train"] if "train" in ds_dict else ds_dict

    def tokenize_and_truncate(example):
        text = example["text"] if "text" in example else ""
        token_ids = hf_tokenizer.encode(text)
        if len(token_ids) < block_size + 1:
            # Mark short samples; they are dropped by the filter below.
            return {"token_ids": None}
        return {"token_ids": token_ids[:block_size + 1]}

    ds = ds.map(tokenize_and_truncate, batched=False, num_proc=num_proc)
    ds = ds.filter(lambda ex: ex["token_ids"] is not None, num_proc=num_proc)

    if "text" in ds.column_names:
        ds = ds.remove_columns(["text"])

    os.makedirs(os.path.dirname(processed_dir), exist_ok=True)
    ds.save_to_disk(processed_dir)
    print(f"Processed dataset saved to '{processed_dir}'.")

    return ds["token_ids"]
```
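The map-then-filter pattern above can be demonstrated with a toy tokenizer. Everything here (the whitespace-based `toy_encode`, the tiny `rows` list) is hypothetical and only illustrates why short samples are tagged with `None` before filtering:

```python
block_size = 4

def toy_encode(text):
    # Stand-in for hf_tokenizer.encode(): one fake ID per whitespace token.
    return [len(tok) for tok in text.split()]

def tokenize_and_truncate(example):
    token_ids = toy_encode(example.get("text", ""))
    if len(token_ids) < block_size + 1:
        return {"token_ids": None}   # too short: dropped by the filter step
    return {"token_ids": token_ids[:block_size + 1]}

rows = [{"text": "one two three four five six"}, {"text": "too short"}]
mapped = [tokenize_and_truncate(r) for r in rows]
kept = [r for r in mapped if r["token_ids"] is not None]
# Only the first row survives, truncated to block_size + 1 tokens
```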

## Dataset Structure

- **Columns**:
  - `token_ids`: a list of token IDs representing a truncated text segment.
- **Splits**:
  - The dataset is provided as a single split named `train`.

## Intended Use & Applications

- **Language Modeling**: suitable for pre-training GPT-style or other autoregressive models on educational text.
- **Fine-Tuning**: can be used to fine-tune existing models on educational text.
- **Research**: useful for experiments in NLP tasks such as text generation.

## How to Load

You can load this dataset directly from Hugging Face using the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("PursuitOfDataScience/processed-fineweb-edu")
print(dataset)
```
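Because every row holds exactly `block_size + 1` token IDs, the train split can be grouped into fixed-shape batches with no padding. A minimal sketch in plain Python (the `batched` helper is hypothetical, not part of this dataset's tooling):

```python
block_size = 2048

def batched(token_rows, batch_size):
    # Group fixed-length rows into full batches of shape
    # (batch_size, block_size + 1); a trailing partial batch is dropped.
    batch = []
    for row in token_rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []

rows = [[i] * (block_size + 1) for i in range(5)]
batches = list(batched(rows, batch_size=2))
# Five rows at batch_size=2 give two full batches; the last row is dropped
```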