---
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
tags:
- physics-filtering
- information-theory
- entropy-maximization
- clean-data
- data-curation
- pretraining
pretty_name: Palladium-1M
configs:
- config_name: default
---
**Palladium-1M** is a curated dataset of ~1 million high-entropy, high-sophistication documents (13.5GB), mined from the open web using a novel **Physics-Based Filtration System**.

Unlike standard filters that rely on heuristics or keywords, the **Palladium Refinery** uses **Information Theory (ZSTD Compression Ratios)** and **Linguistic Density** to mathematically distinguish "Signal" from "Noise."

The result is a dataset that trains models **significantly faster** and achieves **lower perplexity** per compute unit compared to standard web corpora (e.g., FineWeb).

---
## 📋 Datasheet

| Metric | Value |
|---|---|
| **Documents (preview)** | 10,000 |
| **Documents (full dataset)** | ~1,000,000 |
| **Full Dataset Size** | 13.5 GB |
| **Total Tokens (preview)** | 23,665,387 (23.7M) |
| **Tokens/Doc (mean)** | 2,367 |
| **Tokens/Doc (median)** | 1,296 |
| **Tokens/Doc (range)** | 112 – 102,832 |
| **Compression Ratio (mean)** | 2.32x |
| **Reading Level (mean)** | Grade 11.1 |
| **Edu Score (mean)** | 3.76 |
| **Edu Score (median)** | 3.72 |
| **Tokenizer** | cl100k_base (BPE) |
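The per-document token rows above can be reproduced with a few lines of stdlib Python. This is a sketch of the aggregation only: the `count_tokens` stand-in below splits on whitespace, whereas the datasheet numbers come from the cl100k_base BPE tokenizer (e.g. via `tiktoken.get_encoding("cl100k_base")`), so absolute counts will differ.

```python
from statistics import mean, median

def count_tokens(text: str) -> int:
    # Whitespace stand-in; the datasheet uses cl100k_base BPE, e.g.
    # len(tiktoken.get_encoding("cl100k_base").encode(text))
    return len(text.split())

def token_stats(docs: list[str]) -> dict:
    """Aggregate per-document token counts into datasheet-style metrics."""
    counts = [count_tokens(d) for d in docs]
    return {
        "total": sum(counts),
        "mean": round(mean(counts)),
        "median": round(median(counts)),
        "range": (min(counts), max(counts)),
    }
```

Running this over the 10,000 preview documents (with the real tokenizer swapped in) should recover the total, mean, median, and range figures in the table.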
### Domain Distribution

| Domain | Docs | % |
|---|---|---|
| Biology / Medicine | 3,321 | 33.2% |
| Computer Science | 1,354 | 13.5% |
| Earth / Environmental Science | 1,245 | 12.4% |
| General / Other | 982 | 9.8% |
| Mathematics | 901 | 9.0% |
| Physics | 656 | 6.6% |
| Engineering | 588 | 5.9% |
| Law / Policy | 379 | 3.8% |
| Chemistry | 325 | 3.2% |
| Economics / Finance | 181 | 1.8% |
| Philosophy / Humanities | 68 | 0.7% |
### Data Quality Visualizations

![Token Distribution](./token_distribution.png)

![Compression Ratio Distribution](./compression_distribution.png)

![Reading Level Distribution](./reading_level_distribution.png)

![Domain Distribution](./domain_distribution.png)

![Quality Scatter](./quality_scatter.png)

![Edu Score Distribution](./edu_score_distribution.png)

---
## 📊 The "Palladium Advantage" (Benchmark Results)

To verify the quality of the data, we conducted a controlled "Battle Run" fine-tuning a **Qwen 2.5 (1.5B)** model.

* **Experimental Group:** Palladium-1M (Physics-Filtered Data).
* **Training Duration:** 1 Epoch Equivalent (30 Steps).

### Key Result: 12.5% Lower Loss

The model trained on Palladium-1M achieved a **12.5% lower final loss** than the control group, with significantly higher training stability (lower gradient norm variance).

<p align="center">
  <!-- loss-curve figure -->
</p>

| Metric | Dirty Web (FineWeb) | Palladium-1M (Clean) | Improvement |
| :--- | :--- | :--- | :--- |
| **Final Loss** | 2.58 | **2.26** | **-12.5%** |
| **Gradient Stability** | High Variance | Smooth Convergence | **Significant** |

---
## 🔬 Methodology: The Physics of Information

Most datasets are filtered by "Quality Classifiers" (LLMs trained to spot bad text). This is circular and expensive.

**Project Palladium** takes a first-principles approach:

1. **Entropy Analysis:** We measure the compressibility of every document using ZSTD compression ratios. Low-entropy (highly compressible) text indicates repetition, boilerplate, or SEO spam.
2. **Sophistication Scoring:** We map linguistic complexity using grade-level heuristics and vocabulary density.
3. **The "Goldilocks" Zone:** We discard the bottom ~90% of the web that falls below our Signal-to-Noise Threshold.

The remaining ~10% is **Palladium**: pure, dense information.

---
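The three steps above can be sketched in a few functions. This is a minimal illustration, not the production Refinery: it uses stdlib `zlib` in place of ZSTD, a crude Flesch-Kincaid-style estimate (with a vowel-count syllable proxy) for sophistication, and invented threshold values (`ratio_band`, `min_grade`), since the actual Signal-to-Noise Threshold is proprietary.

```python
import zlib

def compression_ratio(text: str) -> float:
    """Step 1: entropy analysis. Original bytes / compressed bytes;
    repetitive, low-entropy text compresses well and scores high."""
    raw = text.encode("utf-8")
    return len(raw) / len(zlib.compress(raw, 9))

def grade_level(text: str) -> float:
    """Step 2: sophistication. Rough Flesch-Kincaid grade estimate
    from sentence length and a vowel-count syllable proxy."""
    words = text.split()
    n_words = max(1, len(words))
    n_sents = max(1, sum(text.count(c) for c in ".!?"))
    n_syll = sum(max(1, sum(ch in "aeiouy" for ch in w.lower())) for w in words)
    return 0.39 * (n_words / n_sents) + 11.8 * (n_syll / n_words) - 15.59

def keep(doc: str, ratio_band=(1.2, 3.5), min_grade=8.0) -> bool:
    """Step 3: the Goldilocks zone. Reject boilerplate (too compressible)
    and noise (incompressible); keep sophisticated prose."""
    return (ratio_band[0] <= compression_ratio(doc) <= ratio_band[1]
            and grade_level(doc) >= min_grade)
```

Under this sketch, a wall of repeated SEO spam fails the entropy check (it compresses far beyond the band) while dense technical prose clears the grade filter; a real pipeline would tune both thresholds empirically against a labeled sample.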
## 🛠️ Usage

This dataset is compatible with the Hugging Face `datasets` library.

```python
from datasets import load_dataset

# Load the Preview (10K Samples)
dataset = load_dataset("PalladiumData/Palladium-1M-Preview", split="train")

print(f"Documents: {len(dataset)}")
print(dataset[0])
```
---

## 🔐 Access & Licensing

This repository contains a **10,000-document preview** of the full dataset.

The full **13.5GB Industrial Dataset (1M+ Docs)** is available for commercial licensing. It is designed for:

* **Pre-training** small language models (1B–7B) that need to be data-efficient.
* **Fine-tuning** specialized models for finance, law, science, or engineering.
* **RAG systems** that need high-quality knowledge bases without boilerplate.

**For full access, commercial licensing, or custom Refinery curation services:**

* **Email:** [scott@palladiumtrain.com](mailto:scott@palladiumtrain.com)
* **Web:** [palladiumtrain.com](https://www.palladiumtrain.com)
* **Organization:** Palladium Data
|