# Qwen3-Inspired Pre-training Dataset

## Overview

This dataset is a curated mixture of high-quality text data designed for large language model pre-training, inspired by the Qwen3 methodology. The dataset includes both training and validation splits.

## Dataset Statistics

**Total Size:** 10.42 billion tokens
- **Training Split:** 9.89 billion tokens (94.9%)
- **Validation Split:** 0.53 billion tokens (5.1%)

### Data Sources (Combined)

- **dclm_baseline**: 5.06B tokens (48.56%) - 4,088,916 documents
- **the_stack**: 1.65B tokens (15.79%) - 383,490 documents
- **common_corpus**: 1.50B tokens (14.36%) - 381,841 documents
- **mini_pile**: 1.43B tokens (13.73%) - 999,858 documents
- **math_pile**: 0.79B tokens (7.55%) - 72,936 documents


### Training Split Statistics

- **dclm_baseline**: 4.81B tokens (48.61%) - 3,884,088 documents
- **the_stack**: 1.58B tokens (15.97%) - 363,502 documents
- **common_corpus**: 1.42B tokens (14.37%) - 361,913 documents
- **mini_pile**: 1.36B tokens (13.78%) - 949,859 documents
- **math_pile**: 0.72B tokens (7.26%) - 68,947 documents


### Validation Split Statistics

- **dclm_baseline**: 0.25B tokens (47.69%) - 204,828 documents
- **common_corpus**: 0.08B tokens (14.22%) - 19,928 documents
- **math_pile**: 0.07B tokens (12.89%) - 3,989 documents
- **mini_pile**: 0.07B tokens (12.86%) - 49,999 documents
- **the_stack**: 0.07B tokens (12.33%) - 19,988 documents


## Data Processing Pipeline

1. **Data Collection**: Sourced from multiple high-quality datasets
2. **Standardization**: Transformed all data into a consistent format with `text`, `info`, and `source_data` fields
3. **Train/Validation Split**: Created 95%/5% splits within each source dataset
4. **Exact Deduplication**: Removed identical documents within each split
5. **Near Deduplication**: Removed near-duplicates with MinHashLSH at a Jaccard similarity threshold of 0.85 (see the sketch after this list)
6. **Quality Filtering**: Applied content-based quality filtering during processing
7. **Shuffling**: Shuffled documents within each large shard for better data distribution
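
The two deduplication stages can be sketched in a few lines of Python. The exact stage can use a content hash; for the near stage, one common implementation of MinHashLSH is the `datasketch` library. This is a minimal sketch, not the production pipeline: the 0.85 threshold comes from the list above, while `num_perm=128` and word-level shingling are illustrative assumptions.

```python
import hashlib
from datasketch import MinHash, MinHashLSH

def exact_dedup(docs):
    """Drop documents whose text is byte-identical (step 4)."""
    seen, kept = set(), []
    for doc in docs:
        h = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(doc)
    return kept

def near_dedup(docs, threshold=0.85, num_perm=128):
    """Drop near-duplicates with MinHashLSH (step 5).

    num_perm and word-level shingling are illustrative choices,
    not documented parameters of this dataset's pipeline.
    """
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for i, doc in enumerate(docs):
        m = MinHash(num_perm=num_perm)
        for token in set(doc["text"].split()):
            m.update(token.encode("utf-8"))
        if not lsh.query(m):  # no earlier document above the threshold
            lsh.insert(str(i), m)
            kept.append(doc)
    return kept
```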

## Data Format

Each example contains:
- `text`: The main text content
- `info`: Metadata from the original dataset (serialized as a string)
- `source_data`: Source dataset identifier
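
For illustration, a single example might look like this (the field values below are hypothetical):

```python
example = {
    "text": "Theorem 1. Every finite group of prime order is cyclic. Proof. ...",
    "info": '{"original_id": "mp-00042", "subset": "textbooks"}',  # original metadata, kept as a string
    "source_data": "math_pile",
}
```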

## Splits

The dataset contains two splits:
- `train`: Training data (95% of each source dataset)
- `validation`: Validation data (5% of each source dataset)
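
To reproduce a comparable split on your own data, the `datasets` library's `train_test_split` method yields the same 95%/5% partition. The file name and seed below are placeholders, not the ones used to build this dataset.

```python
from datasets import load_dataset

# Hypothetical per-source split; the actual seed for this dataset is not published.
source = load_dataset("json", data_files="math_pile.jsonl", split="train")
parts = source.train_test_split(test_size=0.05, seed=42)
train_split, validation_split = parts["train"], parts["test"]
```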

## Tokenization

Token counts were computed using the Llama 3 tokenizer (`meta-llama/Meta-Llama-3-8B`).
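
The counting can be reproduced roughly as below. The gated `meta-llama/Meta-Llama-3-8B` repository requires access approval, and whether special tokens were included in the reported counts is not documented, so `add_special_tokens=False` is an assumption.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def count_tokens(example):
    # Counts only content tokens; including BOS/EOS would shift totals slightly.
    return {"num_tokens": len(tokenizer.encode(example["text"], add_special_tokens=False))}

# e.g. dataset.map(count_tokens), then sum the "num_tokens" column
```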

## Usage

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data")

# Load specific splits
train_dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data", split="train")
val_dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data", split="validation")
```
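
Because every example carries a `source_data` field, you can also stream the data and restrict it to a single source without downloading the full dataset first:

```python
# Stream the training split and keep only code documents from the_stack
stream = load_dataset(
    "bluelightai-dev/qwen_clt_pretrain_data", split="train", streaming=True
)
stack_only = stream.filter(lambda ex: ex["source_data"] == "the_stack")

for example in stack_only.take(3):
    print(example["text"][:80])
```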

## Dataset Sources

The dataset combines data from the following sources:
- **DCLM Baseline**: High-quality web text from DataComp-LM
- **Common Corpus**: Multilingual web text corpus
- **The Stack**: Deduplicated source code
- **Mini Pile**: Academic and reference texts
- **Math Pile**: Mathematical content and reasoning datasets

## License

Please refer to the individual source dataset licenses. This mixture is provided for research purposes.

## Citation

If you use this dataset, please cite the original source datasets and this work.