---
task_categories:
- text-generation
language:
- en
size_categories:
- 1B<n<10B
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63cb46191b705cc951e88e6c/B2CigjwWXk6wPt6rAXu35.png)

**Dataset:** LLaDA-Sample-10BT  
**Base:** `HuggingFaceFW/fineweb` (subset `sample-10BT`)  
**Purpose:** Training LLaDA (Large Language Diffusion Models)

## Preprocessing
- **Tokenizer:** `GSAI-ML/LLaDA-8B-Instruct`  
- **Chunking:** Up to **4,096 tokens** per chunk (1% of chunks randomly sized between 1–4,096 tokens)  
- **Noisy masking:** Applied with noise factor ε = 1×10⁻³  
- **Fields per chunk (PyTorch tensors):**  
  - `input_ids`  
  - `noisy_input_ids`  
  - `mask`  
  - `t` (time scalar)
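The masking step above can be sketched as follows. This is a minimal illustration of LLaDA-style forward masking, not the repository's actual pipeline code: the function name and the `[MASK]` token id are assumptions (check the `GSAI-ML/LLaDA-8B-Instruct` tokenizer for the real id), and ε bounds the sampled time scalar away from zero.

```python
import torch

MASK_TOKEN_ID = 126336  # assumed [MASK] id -- verify against the tokenizer


def noisy_mask(input_ids: torch.Tensor, eps: float = 1e-3):
    """Sketch of forward masking: sample a time scalar t in [eps, 1]
    and mask each token independently with probability t."""
    t = (1.0 - eps) * torch.rand(1) + eps          # t ~ U(eps, 1)
    mask = torch.rand(input_ids.shape) < t         # True where token is masked
    noisy = torch.where(
        mask, torch.full_like(input_ids, MASK_TOKEN_ID), input_ids
    )
    return noisy, mask, t
```

Each stored chunk would then correspond to one `(input_ids, noisy_input_ids, mask, t)` tuple produced this way.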

## Statistics
- **Total chunks:** ~2,520,000  
- **Shards:** 252 `.pt` files  
- **Chunks per file:** 10,000  
- **Average file size:** ~702–708 MB  
- **Total size:** ~166 GB
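A shard can be read back with `torch.load`. The snippet below builds a tiny synthetic shard in the same per-chunk layout (the field names come from the list above; the shard filename, chunk count, and the list-of-dicts container are assumptions about the on-disk format):

```python
import torch

# Synthetic chunk mimicking the documented fields (not real data).
chunk = {
    "input_ids": torch.randint(0, 126000, (4096,)),
    "noisy_input_ids": torch.randint(0, 126464, (4096,)),
    "mask": torch.zeros(4096, dtype=torch.bool),
    "t": torch.rand(1),
}
torch.save([chunk] * 3, "demo_shard.pt")  # real shards hold 10,000 chunks

chunks = torch.load("demo_shard.pt")
print(len(chunks), chunks[0]["input_ids"].shape)
```

Tensors nested in lists and dicts load fine under PyTorch's default `weights_only` behavior, so no custom unpickling is needed for this layout.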

## Usage
This dataset is used for training in the [LLaDA-from-scratch](https://github.com/F4k3r22/LLaDA-from-scratch) GitHub repository, where you’ll find the full data pipeline and training scripts.