Upload README.md with huggingface_hub
README.md (CHANGED)

@@ -7,21 +7,21 @@ language:
size_categories:
- 1M<n<10M
tags:
- diverse
- curated
- deduplication
- multi-domain
- stem
- legal
- scientific
- encyclopedic
- source-text
configs:
- config_name: default
  data_files:
  - split: train
    path: cot_diverse_2.5m.parquet
pretty_name: Diverse Source Text Dataset (2.5M)
dataset_info:
  features:
  - name: text
@@ -39,33 +39,34 @@ dataset_info:
    num_examples: 2500000
---

# Diverse Source Text Dataset (2.5M)

A curated, deduplicated, multi-domain English text dataset blending seven sources across STEM, legal, scientific, encyclopedic, Q&A, and general-knowledge domains. Designed as high-quality, diverse source material for downstream NLP tasks such as synthetic data generation, fine-tuning, and text analysis.
|
## Dataset Summary

| Statistic | Value |
|---|---|
| **Total samples** | 2,500,000 |
| **Estimated tokens** | ~2.8B (GPT-2) / ~2.4B (modern tokenizers) |
| **Language** | English |
| **Format** | Parquet (ZSTD compressed) |
| **File size** | 4.28 GB |
| **Text length** | 200-50,000 characters |
| **Mean length** | 4,656 characters (~1,107 tokens) |
| **Median length** | 2,439 characters |
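The token figures in the table follow from the character statistics; a quick sketch, assuming the roughly 4.2 chars-per-token ratio implied by the table's own pairing of 4,656 characters with ~1,107 GPT-2 tokens (the ratio is inferred, not measured here):

```python
# Back-of-the-envelope check of the "~2.8B (GPT-2)" estimate above.
# The chars-per-token ratio is inferred from the summary table, not measured.
TOTAL_SAMPLES = 2_500_000
MEAN_CHARS = 4_656
CHARS_PER_GPT2_TOKEN = 4_656 / 1_107  # ≈ 4.21 chars per GPT-2 token

total_chars = TOTAL_SAMPLES * MEAN_CHARS  # 11.64B characters
est_gpt2_tokens = total_chars / CHARS_PER_GPT2_TOKEN

print(f"{est_gpt2_tokens / 1e9:.2f}B tokens")  # 2.77B tokens
```

Modern tokenizers pack more characters per token, which is why the same text lands nearer ~2.4B.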
|
## Source Breakdown

| Source | Samples | Share | Avg Chars | Avg Tok/Doc | Quality Score | Domain |
|--------|--------:|------:|----------:|------------:|--------------:|--------|
| FineWeb EDU (broad, 3.0-4.0) | 750,000 | 30% | 4,997 | 1,063 | 3.39 | General educational |
| DCLM-baseline | 500,000 | 20% | 2,295 | 572 | 0.89 | Commonsense / explanatory |
| FineWeb EDU (high, >= 4.0) | 375,000 | 15% | 4,923 | 1,023 | 4.18 | STEM / high-quality educational |
| Pile - FreeLaw | 250,000 | 10% | 14,458 | 3,781 | N/A | Legal (court opinions, filings) |
| Pile - PubMed Abstracts | 250,000 | 10% | 1,335 | 292 | N/A | Biomedical / scientific |
| Pile - StackExchange | 200,000 | 8% | 2,190 | 761 | N/A | Technical Q&A |
| Pile - Wikipedia (en) | 175,000 | 7% | 2,923 | 685 | N/A | Encyclopedic |
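The mixture is internally consistent: the sample counts sum to 2,500,000 and the sample-weighted average of the Avg Chars column recovers the 4,656-character mean from the summary. A quick cross-check with the values copied from the table:

```python
# Cross-check the Source Breakdown table against the Dataset Summary.
sources = {
    "FineWeb EDU (broad)":     (750_000, 4_997),
    "DCLM-baseline":           (500_000, 2_295),
    "FineWeb EDU (high)":      (375_000, 4_923),
    "Pile - FreeLaw":          (250_000, 14_458),
    "Pile - PubMed Abstracts": (250_000, 1_335),
    "Pile - StackExchange":    (200_000, 2_190),
    "Pile - Wikipedia (en)":   (175_000, 2_923),
}

total = sum(n for n, _ in sources.values())
weighted_mean = sum(n * c for n, c in sources.values()) / total

print(total)                 # 2500000
print(round(weighted_mean))  # 4656
```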
|
## Schema

@@ -105,7 +106,7 @@ Total removed: 93,069 / 3,000,000 (3.1%)
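The 3.1% removal figure in the hunk context above comes from the dataset's deduplication pass, whose details are elided in this diff. The general exact-dedup technique can be sketched as hashing normalized text; this is a minimal illustration, not the dataset's actual pipeline:

```python
# Illustrative exact deduplication by hashing normalized text.
# Not the dataset's actual pipeline (that section is elided above).
import hashlib

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial variants collide.
    return " ".join(text.lower().split())

def dedup(texts):
    seen, kept = set(), []
    for t in texts:
        h = hashlib.md5(normalize(t).encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(t)
    return kept

docs = ["Hello  World", "hello world", "A different document"]
print(len(dedup(docs)))  # 2
```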
```python
from datasets import load_dataset

ds = load_dataset("blythet/diverse-2.5m", split="train")
print(ds)
# Dataset({
#     features: ['text', 'id', 'url', 'source', 'quality_score'],
#     num_rows: 2500000
# })
```

@@ -121,14 +122,14 @@ high_quality = ds.filter(lambda x: x["quality_score"] is not None and x["quality
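The `high_quality` filter referenced in the hunk above keeps rows whose `quality_score` clears a threshold while skipping the Pile-derived rows where the score is null. The predicate can be sketched on plain dicts without downloading anything; the 3.5 threshold and the `source` values are illustrative assumptions:

```python
# Hypothetical records mimicking the dataset schema; Pile-derived rows
# carry quality_score=None, so the predicate must check for None first.
rows = [
    {"source": "fineweb_edu_high", "quality_score": 4.18},
    {"source": "dclm_baseline", "quality_score": 0.89},
    {"source": "pile_freelaw", "quality_score": None},
]

def is_high_quality(row, threshold=3.5):  # threshold is an assumption
    score = row["quality_score"]
    return score is not None and score >= threshold

high_quality = [r for r in rows if is_high_quality(r)]
print([r["source"] for r in high_quality])  # ['fineweb_edu_high']
```

The same predicate drops into `ds.filter(...)` unchanged.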
|
## Intended Use

This dataset provides high-quality, diverse English text suitable for:

- Synthetic data generation (e.g., chain-of-thought, instruction tuning)
- Fine-tuning language models across multiple domains
- Text analysis and NLP research
- Domain-specific data extraction (legal, scientific, educational, technical)

The domain diversity covers STEM, legal reasoning, scientific literature, technical Q&A, encyclopedic knowledge, and general commonsense explanations.
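For the uses above, a smaller domain-balanced subset is often enough. A sketch of sizing one while preserving the published mixture; the source keys are illustrative, and the shares come from the Source Breakdown table:

```python
# Per-source sample counts for a subset that preserves the mixture.
# Share values are from the Source Breakdown table; source keys and the
# 100k subset size are illustrative assumptions.
SHARES = {
    "fineweb_edu_broad": 0.30,
    "dclm_baseline": 0.20,
    "fineweb_edu_high": 0.15,
    "pile_freelaw": 0.10,
    "pile_pubmed_abstracts": 0.10,
    "pile_stackexchange": 0.08,
    "pile_wikipedia": 0.07,
}

def subset_sizes(total: int) -> dict:
    """Sample counts per source for a subset of `total` documents."""
    return {name: round(total * share) for name, share in SHARES.items()}

sizes = subset_sizes(100_000)
print(sizes["fineweb_edu_broad"])  # 30000
print(sum(sizes.values()))         # 100000
```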
|
## Limitations

@@ -147,11 +148,11 @@ This dataset is released under **ODC-By** (Open Data Commons Attribution License
## Citation

```bibtex
@dataset{diverse_2.5m,
  title={Diverse Source Text Dataset},
  author={blythet},
  year={2025},
  url={https://huggingface.co/datasets/blythet/diverse-2.5m},
  note={2.5M curated, deduplicated multi-domain English texts}
}
```