# FineWeb-6B: First 6 Billion Tokens
A curated subset of the FineWeb dataset containing the first 6 billion tokens, designed for efficient language model pre-training experiments.
## Dataset Description
This dataset contains high-quality web text suitable for pre-training small to medium-sized language models. It is particularly useful for researchers and practitioners who want to experiment with LLM pre-training without massive computational resources.
## Dataset Statistics
| Metric | Value |
|---|---|
| Total Tokens | ~6 billion |
| Raw Data Size | 16.1 GB (parquet) |
| Tokenized Size | 11.3 GB (train) + 57 MB (val) |
| Vocabulary Size | 49,152 |
| Tokenizer | Byte-level BPE |
| Context Length | 2048 tokens |
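
As a quick consistency check, each token id is stored as a uint16 (2 bytes), so ~6 billion tokens should occupy about 12 GB, i.e. ~11.2 GiB, in line with the tokenized train size above:

```python
# ~6e9 tokens * 2 bytes per uint16 id ≈ 11.2 GiB (reported as 11.3 GB above)
n_tokens = 6_000_000_000
print(f"{n_tokens * 2 / 2**30:.1f} GiB")  # -> 11.2 GiB
```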
## Usage

### Loading the Raw Dataset
```python
from datasets import load_dataset

# Load the parquet file
dataset = load_dataset("ifkash/fineweb-6b")
```
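
If you would rather not download the full 16.1 GB parquet file up front, `datasets` also supports streaming; a minimal sketch, assuming the default `train` split:

```python
from datasets import load_dataset

# Stream examples lazily instead of downloading the whole parquet file
dataset = load_dataset("ifkash/fineweb-6b", split="train", streaming=True)
for example in dataset:
    print(example["text"][:200])  # each record has a single "text" field
    break
```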
### Loading Pre-tokenized Data

For training, you can use the pre-tokenized binary files, which are much faster to load:
```python
import numpy as np

# Load pre-tokenized training and validation data as memory-mapped arrays
train_data = np.memmap('tokenized/train.bin', dtype=np.uint16, mode='r')
val_data = np.memmap('tokenized/val.bin', dtype=np.uint16, mode='r')

print(f"Training tokens: {len(train_data):,}")
print(f"Validation tokens: {len(val_data):,}")
```
### Loading the Tokenizer
```python
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "ifkash/fineweb-6b",
    subfolder="tokenized",
)

# Example usage
text = "The quick brown fox jumps over the lazy dog"
tokens = tokenizer.encode(text)
print(f"Tokens: {tokens}")
print(f"Decoded: {tokenizer.decode(tokens)}")
```
## Dataset Structure

### Files
- `fineweb-6b.parquet`: Raw text data in parquet format (default download)
- `tokenized/train.bin`: Pre-tokenized training data (uint16 format)
- `tokenized/val.bin`: Pre-tokenized validation data (uint16 format)
- `tokenized/tokenizer.json`: Tokenizer vocabulary and merges
- `tokenized/tokenizer_config.json`: Tokenizer configuration
- `tokenized/special_tokens_map.json`: Special tokens mapping
- `distillation/`: Knowledge distillation data (see below)
### Distillation Data
The `distillation/` directory contains precomputed teacher predictions from SmolLM2-360M for knowledge distillation (top-128 token indices and probabilities per position, rather than full logits):
| File | Description | Size (6B tokens) |
|---|---|---|
| `metadata.json` | Configuration and vocab info | ~1 KB |
| `train_tokens.bin` | Token IDs (uint16) | ~11.2 GB |
| `train_topk_ids.bin` | Top-128 token indices | ~1.4 GB |
| `train_topk_probs.bin` | Top-128 probabilities (float16) | ~1.4 GB |
| `val_tokens.bin` | Validation token IDs | ~56 MB |
| `val_topk_ids.bin` | Validation top-128 indices | ~7 MB |
| `val_topk_probs.bin` | Validation top-128 probs | ~7 MB |
Loading distillation data:
```python
import numpy as np
import json

# Load metadata (teacher model, vocab info, etc.)
with open("distillation/metadata.json") as f:
    metadata = json.load(f)

# Memory-map the token stream and per-position top-128 teacher predictions
tokens = np.memmap("distillation/train_tokens.bin", dtype=np.uint16, mode="r")
topk_ids = np.memmap("distillation/train_topk_ids.bin", dtype=np.uint16, mode="r").reshape(-1, 128)
topk_probs = np.memmap("distillation/train_topk_probs.bin", dtype=np.float16, mode="r").reshape(-1, 128)

print(f"Tokens: {len(tokens):,}")
print(f"Teacher model: {metadata['teacher_model']}")
```
### Data Fields
The parquet file contains:
- `text`: The raw text content
The binary files contain:
- Token IDs as uint16 values (0-49151); uint16 suffices because the 49,152-token vocabulary fits below 2^16 = 65,536
## Training a Model
This dataset was used to train ifkash/smol-llama, a 360M parameter LLaMA-style model. See that repository for training code and details.
### Example Training Loop
```python
import numpy as np
import torch

def get_batch(split='train', batch_size=64, block_size=2048):
    # Re-open the memmap on each call; slicing it stays cheap and lazy
    data = np.memmap(f'tokenized/{split}.bin', dtype=np.uint16, mode='r')
    # Sample random window starts, then build inputs (x) and next-token targets (y)
    ix = torch.randint(len(data) - block_size, (batch_size,))
    x = torch.stack([torch.from_numpy(data[i:i+block_size].astype(np.int64)) for i in ix])
    y = torch.stack([torch.from_numpy(data[i+1:i+1+block_size].astype(np.int64)) for i in ix])
    return x.cuda(), y.cuda()

# Training loop (model, optimizer, and num_steps are assumed to be defined elsewhere)
for step in range(num_steps):
    x, y = get_batch('train')
    logits, loss = model(x, y)
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()
```
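
With the defaults above, each step consumes 64 × 2048 = 131,072 tokens, so one pass over the ~6B training tokens is roughly 45,800 steps. That is a reasonable back-of-the-envelope figure for choosing `num_steps`, keeping in mind that random windows sample with replacement rather than sweeping the data in order.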
## Tokenizer Details
The tokenizer is a byte-level BPE (Byte Pair Encoding) tokenizer with:
- Vocabulary size: 49,152 tokens
- Special tokens: `<|endoftext|>` (end-of-text marker)
- Encoding: UTF-8 byte-level
- Trained on: a sample of the FineWeb dataset
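
For reference, here is a minimal sketch of how documents are typically packed into a flat uint16 stream with `<|endoftext|>` separators. The exact packing used to build the `.bin` files is not documented here, so treat this as illustrative rather than the dataset's actual preprocessing:

```python
import numpy as np
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "ifkash/fineweb-6b", subfolder="tokenized"
)
eot_id = tokenizer.convert_tokens_to_ids("<|endoftext|>")

# Concatenate documents, marking each boundary with <|endoftext|>
ids = []
for doc in ["First document.", "Second document."]:
    ids.extend(tokenizer.encode(doc))
    ids.append(eot_id)

# uint16 is safe because every id is < 49,152 < 2**16
np.array(ids, dtype=np.uint16).tofile("packed.bin")
```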
## Citation
If you use this dataset, please cite the original FineWeb dataset:
```bibtex
@software{penedo2024fineweb,
  author = {Penedo, Guilherme and Kydlíček, Hynek and Lozhkov, Anton and Mitchell, Margaret and Raffel, Colin and Von Werra, Leandro and Wolf, Thomas},
  title = {FineWeb: decanting the web for the finest text data at scale},
  month = apr,
  year = 2024,
  url = {https://huggingface.co/datasets/HuggingFaceFW/fineweb}
}
```
## License
This dataset is released under the ODC-BY license, following the original FineWeb dataset.
## Acknowledgments
- Original dataset: HuggingFaceFW/fineweb
- Pre-training project: ifkash/smol-llama