Update README.md
README.md
---
language:
- es
size_categories:
- 100M<n<1B
---

**Dataset:** LLaDA-Sample-ES
**Base:** `crscardellino/spanish_billion_words`
**Purpose:** Training LLaDA (Large Language Diffusion Models)

## Preprocessing
- **Tokenizer:** `GSAI-ML/LLaDA-8B-Instruct`
- **Chunking:** Up to **4,096 tokens** per chunk (1% of chunks are randomly sized between 1 and 4,096 tokens)
- **Noisy masking:** Applied with noise factor ε = 1×10⁻³ (a masking sketch follows this list)
- **Fields per chunk (PyTorch tensors):**
  - `input_ids`
  - `noisy_input_ids`
  - `mask`
  - `t` (time scalar)
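The card does not include the masking code itself, but the field names above suggest how each chunk was built. Below is a minimal sketch assuming the LLaDA forward process, where a time scalar t is drawn uniformly from [ε, 1) (so no chunk ends up fully unmasked at t ≈ 0) and every token is masked independently with probability t. The function name and the mask token id are placeholders, not taken from this card.

```python
import torch

def apply_noisy_masking(input_ids: torch.Tensor, mask_token_id: int, eps: float = 1e-3):
    # Sample the time scalar t uniformly from [eps, 1) so it never hits 0
    t = eps + (1.0 - eps) * torch.rand(())
    # Mask each token independently with probability t
    mask = torch.rand(input_ids.shape) < t
    # Replace masked positions with the mask token id
    noisy_input_ids = input_ids.masked_fill(mask, mask_token_id)
    return noisy_input_ids, mask, t

# Demo on a stand-in chunk of token ids; the mask token id is illustrative only --
# take the real id from the GSAI-ML/LLaDA-8B-Instruct tokenizer.
ids = torch.randint(0, 50_000, (4096,))
noisy, mask, t = apply_noisy_masking(ids, mask_token_id=126_336)
```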
+
|
| 24 |
+
## Statistics
|
| 25 |
+
- **Total chunks:** ~
|
| 26 |
+
- **Shards:** 8 `.pt` files
|
| 27 |
+
- **Chunks per file:** 10,000
|
| 28 |
+
- **Average file size:** ~702–708 MB
|
| 29 |
+
- **Total size:** ~1 GB
|
| 30 |
+
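The shard filenames and container layout are not stated on the card. Assuming each `.pt` shard holds a list of per-chunk dicts with the four fields listed under Preprocessing, inspecting one could look like this (the filename is a placeholder):

```python
import torch

# Filename is a placeholder -- check the repo's file listing for the real shard names.
chunks = torch.load("shard_000.pt")

sample = chunks[0]
print(sample["input_ids"].shape)  # up to 4,096 token ids per chunk
print(sample["mask"].sum())       # number of masked positions
print(sample["t"])                # time scalar used for this chunk
```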

## Usage
This dataset is used for training in the [LLaDA-from-scratch](https://github.com/F4k3r22/LLaDA-from-scratch) GitHub repository, where you’ll find the full data pipeline and training scripts.
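To pull a shard straight from the Hub before feeding it to the training pipeline, something like the following should work; the `repo_id` and `filename` below are placeholders for this dataset's actual repo id and shard names.

```python
import torch
from huggingface_hub import hf_hub_download

# Placeholders: substitute the dataset's real repo id and a real shard filename.
path = hf_hub_download(
    repo_id="<user>/LLaDA-Sample-ES",
    filename="shard_000.pt",
    repo_type="dataset",
)
chunks = torch.load(path)
print(len(chunks))  # expected: ~10,000 chunks per shard
```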