blythet committed on
Commit da650e6 · verified · 1 Parent(s): fae7fa2

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +28 -27
README.md CHANGED
@@ -7,21 +7,21 @@ language:
 size_categories:
 - 1M<n<10M
 tags:
-- chain-of-thought
-- reasoning
 - diverse
 - curated
 - deduplication
+- multi-domain
 - stem
 - legal
 - scientific
 - encyclopedic
+- source-text
 configs:
 - config_name: default
   data_files:
   - split: train
     path: cot_diverse_2.5m.parquet
-pretty_name: Diverse CoT Source Dataset
+pretty_name: Diverse Source Text Dataset (2.5M)
 dataset_info:
   features:
   - name: text
@@ -39,33 +39,34 @@ dataset_info:
     num_examples: 2500000
 ---
 
-# Diverse CoT Source Dataset (2.5M)
+# Diverse Source Text Dataset (2.5M)
 
-A curated, deduplicated, multi-domain English text dataset designed as source material for synthetic chain-of-thought (CoT) generation. The dataset blends 7 sources across STEM, legal, scientific, encyclopedic, Q&A, and general knowledge domains to maximize reasoning diversity.
+A curated, deduplicated, multi-domain English text dataset blending 7 sources across STEM, legal, scientific, encyclopedic, Q&A, and general knowledge domains. Designed as high-quality, diverse source material for downstream NLP tasks such as synthetic data generation, fine-tuning, and text analysis.
 
 ## Dataset Summary
 
 | | |
 |---|---|
 | **Total samples** | 2,500,000 |
+| **Estimated tokens** | ~2.8B (GPT-2) / ~2.4B (modern tokenizers) |
 | **Language** | English |
 | **Format** | Parquet (ZSTD compressed) |
 | **File size** | 4.28 GB |
 | **Text length** | 200 - 50,000 characters |
-| **Mean length** | 4,656 characters |
+| **Mean length** | 4,656 characters (~1,107 tokens) |
 | **Median length** | 2,439 characters |
 
 ## Source Breakdown
 
-| Source | Samples | Share | Avg Chars | Quality Score | Domain |
-|--------|--------:|------:|----------:|--------------:|--------|
-| FineWeb EDU (broad, 3.0-4.0) | 750,000 | 30% | 4,997 | 3.39 | General educational |
-| DCLM-baseline | 500,000 | 20% | 2,295 | 0.89 | Commonsense / explanatory |
-| FineWeb EDU (high, >= 4.0) | 375,000 | 15% | 4,923 | 4.18 | STEM / high-quality educational |
-| Pile - FreeLaw | 250,000 | 10% | 14,458 | N/A | Legal (court opinions, filings) |
-| Pile - PubMed Abstracts | 250,000 | 10% | 1,335 | N/A | Biomedical / scientific |
-| Pile - StackExchange | 200,000 | 8% | 2,190 | N/A | Technical Q&A |
-| Pile - Wikipedia (en) | 175,000 | 7% | 2,923 | N/A | Encyclopedic |
+| Source | Samples | Share | Avg Chars | Avg Tok/Doc | Quality Score | Domain |
+|--------|--------:|------:|----------:|------------:|--------------:|--------|
+| FineWeb EDU (broad, 3.0-4.0) | 750,000 | 30% | 4,997 | 1,063 | 3.39 | General educational |
+| DCLM-baseline | 500,000 | 20% | 2,295 | 572 | 0.89 | Commonsense / explanatory |
+| FineWeb EDU (high, >= 4.0) | 375,000 | 15% | 4,923 | 1,023 | 4.18 | STEM / high-quality educational |
+| Pile - FreeLaw | 250,000 | 10% | 14,458 | 3,781 | N/A | Legal (court opinions, filings) |
+| Pile - PubMed Abstracts | 250,000 | 10% | 1,335 | 292 | N/A | Biomedical / scientific |
+| Pile - StackExchange | 200,000 | 8% | 2,190 | 761 | N/A | Technical Q&A |
+| Pile - Wikipedia (en) | 175,000 | 7% | 2,923 | 685 | N/A | Encyclopedic |
 
 ## Schema
@@ -105,7 +106,7 @@ Total removed: 93,069 / 3,000,000 (3.1%)
 ```python
 from datasets import load_dataset
 
-ds = load_dataset("blythet/cot-diverse-2.5m", split="train")
+ds = load_dataset("blythet/diverse-2.5m", split="train")
 print(ds)
 # Dataset({
 #     features: ['text', 'id', 'url', 'source', 'quality_score'],
@@ -121,14 +122,14 @@ high_quality = ds.filter(lambda x: x["quality_score"] is not None and x["quality
 
 ## Intended Use
 
-This dataset is designed as **input material for synthetic chain-of-thought generation** using large language models. The domain diversity ensures the resulting CoT data covers:
+This dataset provides high-quality, diverse English text suitable for:
 
-- STEM reasoning and mathematical explanations
-- Legal analysis and case law interpretation
-- Scientific literature comprehension
-- Technical problem-solving (StackExchange)
-- General knowledge and encyclopedic reasoning
-- Everyday commonsense explanations
+- Synthetic data generation (e.g., chain-of-thought, instruction tuning)
+- Fine-tuning language models across multiple domains
+- Text analysis and NLP research
+- Domain-specific data extraction (legal, scientific, educational, technical)
+
+The domain diversity covers STEM, legal reasoning, scientific literature, technical Q&A, encyclopedic knowledge, and general commonsense explanations.
 
 ## Limitations
 
@@ -147,11 +148,11 @@ This dataset is released under **ODC-By** (Open Data Commons Attribution License
 ## Citation
 
 ```bibtex
-@dataset{cot_diverse_2.5m,
-  title={Diverse CoT Source Dataset},
+@dataset{diverse_2.5m,
+  title={Diverse Source Text Dataset},
   author={blythet},
   year={2025},
-  url={https://huggingface.co/datasets/blythet/cot-diverse-2.5m},
-  note={2.5M curated, deduplicated multi-domain English texts for chain-of-thought generation}
+  url={https://huggingface.co/datasets/blythet/diverse-2.5m},
+  note={2.5M curated, deduplicated multi-domain English texts}
 }
 ```
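The new summary figures in this diff can be cross-checked against the Source Breakdown table. A minimal sanity-check sketch: the `(samples, avg_chars)` pairs are copied from the table, and the chars-per-token ratio is the one implied by the README's own "4,656 characters (~1,107 tokens)" row — an assumption for illustration, not an independent measurement.

```python
# Sanity check for the Source Breakdown and token-estimate figures.
# (samples, avg_chars) pairs copied from the table in the diff above.
sources = {
    "FineWeb EDU (broad)":     (750_000,  4_997),
    "DCLM-baseline":           (500_000,  2_295),
    "FineWeb EDU (high)":      (375_000,  4_923),
    "Pile - FreeLaw":          (250_000, 14_458),
    "Pile - PubMed Abstracts": (250_000,  1_335),
    "Pile - StackExchange":    (200_000,  2_190),
    "Pile - Wikipedia (en)":   (175_000,  2_923),
}

total_samples = sum(n for n, _ in sources.values())
mean_chars = sum(n * c for n, c in sources.values()) / total_samples

# Chars-per-token ratio implied by the README's own summary row
# "4,656 characters (~1,107 tokens)" -- an assumption, not a measurement.
chars_per_token = 4_656 / 1_107
est_tokens_total = total_samples * mean_chars / chars_per_token

print(total_samples)                     # 2500000
print(round(mean_chars))                 # 4656
print(round(est_tokens_total / 1e9, 1))  # 2.8
```

The weighted mean reproduces the summary table's 4,656 characters, and the rough total lands on the "~2.8B (GPT-2)" estimate; the ~2.4B figure for modern tokenizers is the README's own number and is not re-derived here.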