---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
tags:
- physics-filtering
- information-theory
- entropy-maximization
- clean-data
- data-curation
- pretraining
pretty_name: Palladium-1M
configs:
- config_name: default
data_files:
- split: train
path: palladium_sample_10k.jsonl
---
# Palladium-1M: High-Density Information for Efficient LLM Training
**Palladium-1M** is a curated dataset of ~1 million high-entropy, high-sophistication documents (13.5GB), mined from the open web using a novel **Physics-Based Filtration System**.
Unlike standard filters that rely on heuristics or keywords, the **Palladium Refinery** uses **Information Theory (ZSTD Compression Ratios)** and **Linguistic Density** to mathematically distinguish "Signal" from "Noise."
The result is a dataset that trains models **faster** and reaches **lower perplexity** per unit of compute than standard web corpora such as FineWeb (see the benchmark results below).
---
## Datasheet
| Metric | Value |
|---|---|
| **Documents (preview)** | 10,000 |
| **Documents (full dataset)** | ~1,000,000 |
| **Full Dataset Size** | 13.5 GB |
| **Total Tokens (preview)** | 23,665,387 (23.7M) |
| **Tokens/Doc (mean)** | 2,367 |
| **Tokens/Doc (median)** | 1,296 |
| **Tokens/Doc (range)** | 112 – 102,832 |
| **Compression Ratio (mean)** | 2.32x |
| **Reading Level (mean)** | Grade 11.1 |
| **Edu Score (mean)** | 3.76 |
| **Edu Score (median)** | 3.72 |
| **Tokenizer** | cl100k_base (BPE) |
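The token statistics above can be reproduced from the preview file with `tiktoken`'s `cl100k_base` encoding. A minimal sketch follows; the `text` field name is an assumption about the JSONL schema, not something the card confirms:

```python
import json
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Tally tokens per document in the preview JSONL.
# NOTE: the "text" key is an assumed field name, not confirmed by the card.
with open("palladium_sample_10k.jsonl", encoding="utf-8") as f:
    counts = [len(enc.encode(json.loads(line)["text"])) for line in f]

print(f"Documents: {len(counts):,}")
print(f"Total tokens: {sum(counts):,}")
print(f"Mean tokens/doc: {sum(counts) / len(counts):,.0f}")
```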
### Domain Distribution
| Domain | Docs | % |
|---|---|---|
| Biology / Medicine | 3,321 | 33.2% |
| Computer Science | 1,354 | 13.5% |
| Earth / Environmental Science | 1,245 | 12.4% |
| General / Other | 982 | 9.8% |
| Mathematics | 901 | 9.0% |
| Physics | 656 | 6.6% |
| Engineering | 588 | 5.9% |
| Law / Policy | 379 | 3.8% |
| Chemistry | 325 | 3.2% |
| Economics / Finance | 181 | 1.8% |
| Philosophy / Humanities | 68 | 0.7% |
### Data Quality Visualizations






---
## The "Palladium Advantage" (Benchmark Results)
To verify the quality of the data, we conducted a controlled "Battle Run": fine-tuning a **Qwen 2.5 (1.5B)** model on each corpus under identical settings.
* **Control Group:** Standard "FineWeb" (Dirty Web Data).
* **Experimental Group:** Palladium-1M (Physics-Filtered Data).
* **Training Duration:** 1 Epoch Equivalent (30 Steps).
### Key Result: 12.4% Lower Loss
The model trained on Palladium-1M finished with a **12.4% lower final loss** (2.26 vs. 2.58) than the control group, with significantly higher training stability (lower gradient-norm variance).
<p align="center">
<img src="palladium_demo_victory.jpg" width="70%" alt="Palladium Victory Graph">
</p>
| Metric | Dirty Web (FineWeb) | Palladium-1M (Clean) | Improvement |
| :--- | :--- | :--- | :--- |
| **Final Loss** | 2.58 | **2.26** | **-12.4%** |
| **Gradient Stability** | High Variance | Smooth Convergence | **Significant** |
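For readers who want to run a comparison of the same shape, the sketch below outlines a 30-step causal-LM fine-tune with the Hugging Face `Trainer`. It is an illustration, not the exact Battle Run configuration: the checkpoint ID, batch settings, and the `text` field name are all assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "Qwen/Qwen2.5-1.5B"  # assumed checkpoint for "Qwen 2.5 (1.5B)"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Assumes each record exposes its contents under a "text" key.
data = load_dataset("PalladiumData/Palladium-1M-Preview", split="train")
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="palladium-run",
        max_steps=30,                 # "1 Epoch Equivalent (30 Steps)"
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        logging_steps=5,
    ),
    train_dataset=data,
    # Causal-LM collator: pads each batch and copies input_ids to labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # repeat on the control corpus and compare final losses
```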
---
## Methodology: The Physics of Information
Most datasets are filtered by "Quality Classifiers": LLMs trained to spot bad text. This approach is circular (models judging data for other models) and expensive.
**Project Palladium** takes a first-principles approach:
1. **Entropy Analysis:** We measure the compressibility of every document using ZSTD compression ratios. Low entropy (highly compressible) text indicates repetition, boilerplate, or SEO spam.
2. **Sophistication Scoring:** We map the linguistic complexity using grade-level heuristics and vocabulary density.
3. **The "Goldilocks" Zone:** We discard the bottom ~90% of the web that falls below our Signal-to-Noise Threshold.
The remaining ~10% is **Palladium**: pure, dense information.
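A minimal sketch of this two-stage filter, using the `zstandard` and `textstat` packages as stand-ins for the Refinery's entropy and grade-level machinery; the thresholds are illustrative assumptions, since the actual cut-offs are not published:

```python
import zstandard as zstd
import textstat  # assumed stand-in for the card's grade-level heuristics

def zstd_ratio(text: str, level: int = 3) -> float:
    """Uncompressed-to-compressed size ratio; higher = more redundant."""
    raw = text.encode("utf-8")
    compressed = zstd.ZstdCompressor(level=level).compress(raw)
    return len(raw) / max(len(compressed), 1)

def in_goldilocks_zone(text: str) -> bool:
    """Keep documents that are both information-dense and sophisticated."""
    ratio = zstd_ratio(text)
    if ratio > 3.5:   # too compressible: repetition, boilerplate, SEO spam
        return False
    if ratio < 1.5:   # barely compressible: likely encoded junk or noise
        return False
    # Sophistication floor (illustrative; the datasheet mean is Grade 11.1).
    return textstat.flesch_kincaid_grade(text) >= 8.0
```

Against the datasheet above, a kept document would typically land near the 2.32x mean compression ratio and Grade 11 reading level.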
---
## Usage
This dataset is compatible with the Hugging Face `datasets` library.
```python
from datasets import load_dataset
# Load the Preview (10K Samples)
dataset = load_dataset("PalladiumData/Palladium-1M-Preview", split="train")
print(f"Documents: {len(dataset)}")
print(dataset[0])
```
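For larger-than-memory workflows, the same loader also supports streaming, which yields records lazily instead of materializing the whole split:

```python
from datasets import load_dataset

# Stream records without downloading the full split up front.
stream = load_dataset("PalladiumData/Palladium-1M-Preview",
                      split="train", streaming=True)

for i, doc in enumerate(stream):
    print(doc)  # inspect the first few records
    if i == 2:
        break
```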
---
## Access & Licensing
This repository contains a **10,000-document preview** of the full dataset.
The full **13.5GB Industrial Dataset (1M+ Docs)** is available for commercial licensing. It is designed for:
* **Pre-training** small language models (1B–7B) that need to be data-efficient.
* **Fine-tuning** specialized models for finance, law, science, or engineering.
* **RAG systems** that need high-quality knowledge bases without boilerplate.
**For full access, commercial licensing, or custom Refinery curation services:**
* **Email:** [scott@palladiumtrain.com](mailto:scott@palladiumtrain.com)
* **Web:** [palladiumtrain.com](https://www.palladiumtrain.com)
* **Organization:** Palladium Data