# Qwen3-Inspired Pre-training Dataset

## Overview

This dataset is a curated mixture of high-quality text data designed for large language model pre-training, inspired by the Qwen3 methodology.

**Note:** This is a demo version containing data sampled from each source.

## Dataset Statistics

**Total Size:** 6.78 billion tokens

### Data Sources

- **dclm_baseline**: 3.71B tokens (54.78%) - 2,981,871 documents
- **mini_pile**: 1.43B tokens (21.09%) - 998,100 documents
- **the_stack**: 0.82B tokens (12.09%) - 148,195 documents
- **common_corpus**: 0.82B tokens (12.04%) - 194,877 documents

## Data Processing Pipeline

1. **Data Collection**: Sourced from multiple high-quality datasets
2. **Standardization**: Transformed all data into a consistent format with `text`, `info`, and `source_data` fields
3. **Exact Deduplication**: Removed exact-duplicate documents
4. **Near Deduplication**: Applied MinHashLSH with a Jaccard similarity threshold of 0.85 (see the sketch after this list)
5. **Quality Filtering**: Applied content-based filtering during processing
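
As an illustration of step 4, here is a minimal near-deduplication sketch using the `datasketch` library. The actual tooling, shingle size, and number of hash permutations are not documented here, so those choices are assumptions; only the 0.85 Jaccard threshold comes from the pipeline above.

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128    # assumed number of hash permutations
THRESHOLD = 0.85  # Jaccard similarity threshold from step 4

def minhash_of(text: str, shingle_size: int = 5) -> MinHash:
    """Build a MinHash signature from word shingles of a document."""
    words = text.split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(1, len(words) - shingle_size + 1)):
        m.update(" ".join(words[i:i + shingle_size]).encode("utf-8"))
    return m

def near_dedup(docs: dict[str, str]) -> list[str]:
    """Return the ids of documents kept after near-duplicate removal."""
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for doc_id, text in docs.items():
        sig = minhash_of(text)
        if not lsh.query(sig):  # no sufficiently similar document kept yet
            lsh.insert(doc_id, sig)
            kept.append(doc_id)
    return kept
```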

## Data Format

Each example contains:
- `text`: The main text content
- `info`: Metadata from the original dataset, serialized as a string
- `source_data`: Source dataset identifier
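
For concreteness, a single record might look like the following; the field values below are invented for illustration:

```python
example = {
    # Main text content of the document
    "text": "The quick brown fox jumps over the lazy dog.",
    # Metadata from the original dataset, serialized as a string (hypothetical value)
    "info": '{"url": "https://example.com/article"}',
    # Source dataset identifier, one of the four sources listed above
    "source_data": "dclm_baseline",
}
```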

## Tokenization

Token counts were computed using the Llama 3 tokenizer (`meta-llama/Meta-Llama-3-8B`).
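
A minimal sketch of how such counts can be reproduced with the Hugging Face `transformers` tokenizer; the original counting script is not included here, and whether special tokens were included in the counts is an assumption:

```python
from transformers import AutoTokenizer

# Requires access to the gated meta-llama repository on the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def count_tokens(text: str) -> int:
    # Counts content tokens only; drops special tokens such as <|begin_of_text|>.
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])

print(count_tokens("Hello, world!"))
```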

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data")
```
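
For a mixture of this size, streaming avoids downloading the full dataset up front; the snippet below also filters to a single source. The `"train"` split name is an assumption:

```python
from datasets import load_dataset

stream = load_dataset(
    "bluelightai-dev/qwen_clt_pretrain_data",
    split="train",  # assumed split name
    streaming=True,
)

# Keep only code documents from The Stack, using the `source_data` field.
code_only = stream.filter(lambda ex: ex["source_data"] == "the_stack")
```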

## Dataset Sources

The dataset combines data from the following sources:
- **DCLM Baseline**: High-quality web text from DataComp-LM
- **Common Corpus**: Curated Common Crawl data
- **The Stack**: Deduplicated source code
- **Mini Pile**: Academic and reference texts

## License

Please refer to the individual source dataset licenses. This mixture is provided for research purposes.