---
task_categories:
  - text-generation
  - summarization
language:
  - en
pretty_name: FineWeb-Edu 10B tokenized
size_categories:
  - 1M<n<10M
---

# Dataset card for the FineWeb-Edu 10B tokenized dataset

This dataset contains tokenized texts from the FineWeb-Edu sample-10BT subset (`HuggingFaceFW/fineweb-edu`). The data was tokenized with OpenAI's tiktoken tokenizer and structured for efficient streaming and distributed (DDP) training.

## Structure

The dataset follows Hugging Face's recommended structure for efficient streaming in multi-GPU environments. It consists of two splits; each split contains a number of shards that is a multiple of 8, allowing the shards to be distributed evenly across GPU nodes via `datasets.distributed.split_dataset_by_node()`. Dataset configuration:

- train:
  - num shards: 128
  - num rows per shard: 295
  - num tokens per row: 262,145 (2048 × 128 + 1)
- validation:
  - num shards: 8
  - num rows per shard: 26
  - num tokens per row: same as train
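The figures above can be sanity-checked with a bit of arithmetic (shards × rows per shard × tokens per row):

```python
# sanity-check the split sizes stated above
tokens_per_row = 2048 * 128 + 1            # 262,145
train_tokens = 128 * 295 * tokens_per_row  # shards x rows x tokens
val_tokens = 8 * 26 * tokens_per_row

print(f"train: {train_tokens:,}")  # 9,898,595,200 (~9.9B)
print(f"valid: {val_tokens:,}")    # 54,526,160
```

This confirms the train split holds roughly 9.9B tokens, matching the "10B" in the dataset name.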

## How to use

Streaming on single GPU:

```python
from datasets import load_dataset

# stream the train split without downloading the whole dataset first
dataset = load_dataset("nikolina-p/fineweb_10BT_tokenized", split="train", streaming=True)
stream = iter(dataset)
tokens = next(stream)["tokens"]
```
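Since each row packs 2048 × 128 + 1 tokens, one row can be split into 128 input/target sequences of length 2048 for next-token prediction. A minimal sketch in pure Python (a real training loop would convert these lists to tensors):

```python
def row_to_batches(tokens, B=128, T=2048):
    """Split one dataset row into B input sequences of length T and
    their next-token targets. The extra trailing token is what makes
    the one-position shift possible."""
    assert len(tokens) == B * T + 1
    inputs  = [tokens[i * T     : i * T + T]     for i in range(B)]
    targets = [tokens[i * T + 1 : i * T + T + 1] for i in range(B)]
    return inputs, targets
```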

Streaming in a multi-GPU environment:

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

dataset = load_dataset("nikolina-p/fineweb_10BT_tokenized", split="train", streaming=True)

# give each process (rank) its own disjoint subset of shards
dataset = split_dataset_by_node(
    dataset,
    rank=int(os.environ["RANK"]),
    world_size=int(os.environ["WORLD_SIZE"])
)
stream = iter(dataset)
tokens = next(stream)["tokens"]
```
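For intuition: because the shard count is a multiple of 8, the shards divide evenly across ranks, so no rank has to split a shard or skip examples. The simplified sketch below illustrates one such disjoint assignment (a strided one); it is not the actual `split_dataset_by_node` implementation, whose exact assignment strategy may differ.

```python
def shards_for_rank(num_shards: int, rank: int, world_size: int) -> list[int]:
    """Illustrative only: give each rank every world_size-th shard,
    e.g. the 128 train shards split across 8 GPUs as 16 shards each."""
    assert num_shards % world_size == 0, "even split requires divisibility"
    return list(range(rank, num_shards, world_size))

# across 8 ranks, every one of the 128 train shards is covered exactly once
assignment = [shards_for_rank(128, r, 8) for r in range(8)]
```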