---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- fineweb
- fineweb-edu
- pretraining
size_categories:
- 1B<n<10B
---

# FineWeb-Sample-5.97B-512

## Dataset Description

This dataset contains approximately **5.97 billion tokens** (5,968,954,880 tokens) sampled from the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset. Each text sample is capped at **512 tokens**.

### Dataset Statistics

- **Total Tokens**: ~5.97B (5,968,954,880)
- **Max Tokens per Sample**: 512
- **Max Characters per Sample**: 5,120 (based on an estimate of 10 characters per token)
- **Source Dataset**: FineWeb-Edu 350BT
- **Random Seed**: 42
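As a rough illustration of the per-sample cap described above, the sketch below truncates a text to at most 512 tokens and 5,120 characters. This is not the actual sampling pipeline: the tokenizer used to produce `token_count` is not specified in this card, so a whitespace split is used here as a stand-in.

```python
MAX_TOKENS = 512
MAX_CHARS = 5_120  # 512 tokens * ~10 chars/token estimate

def cap_sample(text: str) -> dict:
    """Illustrative cap: truncate to MAX_CHARS characters, then to
    MAX_TOKENS whitespace tokens (a stand-in for the real tokenizer)."""
    text = text[:MAX_CHARS]
    tokens = text.split()
    if len(tokens) > MAX_TOKENS:
        text = " ".join(tokens[:MAX_TOKENS])
    return {"text": text, "token_count": min(len(tokens), MAX_TOKENS)}
```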

### Dataset Structure

The dataset is stored in chunked Parquet files with the following columns:

- `text`: The text content (string, max 5,120 characters)
- `token_count`: Number of tokens in the text (integer, max 512)
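A minimal check that a loaded record matches the schema above (column names and bounds are taken from this card; how records are grouped into chunk files is not specified here):

```python
def valid_record(rec: dict) -> bool:
    """Return True if a record matches the card's schema:
    `text` is a string of at most 5,120 characters and
    `token_count` is an integer in (0, 512]."""
    return (
        isinstance(rec.get("text"), str)
        and len(rec["text"]) <= 5_120
        and isinstance(rec.get("token_count"), int)
        and 0 < rec["token_count"] <= 512
    )
```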

### Intended Use

This dataset is designed for:
- Language model pretraining experiments
- Chinchilla-optimal scaling experiments

### Source

Sampled from the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset, which is a filtered subset of FineWeb focusing on educational content.

### License

This dataset inherits the ODC-By license from FineWeb-Edu.