SultanR committed "Update README.md" (commit 014d17b, verified, parent: 49a0eb7)

Files changed (1): README.md (+43, -60)
README.md CHANGED
@@ -1,83 +1,66 @@
---
configs:
- - config_name: minhash_deduped
-   data_files:
-   - split: train
-     path: data/minhash_deduped/*
- - config_name: consensus
-   data_files:
-   - split: train
-     path: data/consensus/*
- - config_name: sentence_deduped
-   data_files:
-   - split: train
-     path: data/sentence_deduped/*
default: minhash_deduped
- license: apache-2.0
language:
- - ar
---

# AraMix

- Arabic pretraining dataset with three subsets:

## Subsets

- ### minhash_deduped
- - **Documents:** 178.9M
- - **Words:** 78.5B
- - **Schema:** `text`, `id`, `source`
- - MinHash-deduplicated Arabic text from 7 source datasets
-
- ### consensus
- - **Documents:** 47.9M
- - **Schema:** `text`, `duplicated`, `source`
- - Documents appearing in 2+ independent datasets (high-quality consensus)
- - `duplicated`: number of datasets the document appears in (2-7)
- - `source`: list of dataset names where the document was found
-
- ### sentence_deduped
- - **Documents:** 167.6M
- - **Words:** 71.8B
- - **Schema:** `text`, `id`, `source`
- - Sentence-level deduplicated version of minhash_deduped
- - Removes duplicate 3-sentence spans appearing 3+ times
- - 8.1% word reduction from minhash_deduped

## Usage

```python
from datasets import load_dataset

- # Load minhash_deduped subset (default)
ds = load_dataset("SultanR/AraMix", "minhash_deduped")
-
- # Load consensus subset
ds = load_dataset("SultanR/AraMix", "consensus")
-
- # Load sentence_deduped subset
- ds = load_dataset("SultanR/AraMix", "sentence_deduped")
```

- ## Source Datasets
- - HPLT 2.0 (Arabic)
- - CulturaX (Arabic)
- - ArabicWeb24
- - ClusterLab 101B
- - C4 (Arabic)
- - FineWeb-2 (Arabic)
- - FinePDFs (Arabic)

- ## Sentence Deduplication Stats

- | Source | Input Docs | Output Docs | Docs Removed | Words Removed |
- |--------|------------|-------------|--------------|---------------|
- | ArabicWeb24 | 34.5M | 33.6M | 956K | 1.28B |
- | C4 | 26.9M | 23.0M | 3.9M | 510M |
- | ClusterLab 101B | 6.5M | 5.9M | 675K | 668M |
- | CulturaX | 44.2M | 40.8M | 3.4M | 1.15B |
- | FinePDFs | 670K | 648K | 21K | 11M |
- | FineWeb-2 | 32.0M | 30.6M | 1.4M | 1.16B |
- | HPLT 2.0 | 34.0M | 33.1M | 947K | 1.57B |
- | **Total** | **178.9M** | **167.6M** | **11.3M** | **6.34B** |
---
configs:
+ - config_name: minhash_deduped
+   data_files:
+   - split: train
+     path: data/minhash_deduped/*
+ - config_name: consensus
+   data_files:
+   - split: train
+     path: data/consensus/*
+ - config_name: sentence_deduped
+   data_files:
+   - split: train
+     path: data/sentence_deduped/*
default: minhash_deduped
language:
+ - ar
---

# AraMix

+ AraMix is a deduplicated Arabic pretraining corpus containing 158.8 billion tokens across 167.6 million documents. Rather than scraping the web again, AraMix combines seven publicly available Arabic datasets, applies Arabic-specific quality filtering, and performs cross-dataset deduplication.

## Subsets

+ | Subset | Documents | Tokens | Description |
+ |--------|-----------|--------|-------------|
+ | `sentence_deduped` | 167.6M | 158.8B | MinHash + sentence-level deduplication |
+ | `minhash_deduped` | 178.9M | 177.8B | Document-level MinHash deduplication only |
+ | `consensus` | 47.9M | 54.1B | Documents appearing in 2+ source datasets |
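
The subset table gives sizes but not schemas; the previous revision (the removed half of this diff) lists `text`, `id`, `source` for the two deduped subsets and `text`, `duplicated`, `source` for `consensus`. A minimal streaming check of those columns, assuming the old schema still applies:

```python
from datasets import load_dataset

# Stream one record per subset to inspect its columns without a full download.
# Expected columns (per the previous card revision): text, id, source for the
# deduped subsets; text, duplicated, source for consensus.
for subset in ("sentence_deduped", "minhash_deduped", "consensus"):
    ds = load_dataset("SultanR/AraMix", subset, split="train", streaming=True)
    row = next(iter(ds))
    print(subset, sorted(row))
```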
## Usage

```python
from datasets import load_dataset

+ ds = load_dataset("SultanR/AraMix", "sentence_deduped")
ds = load_dataset("SultanR/AraMix", "minhash_deduped")
ds = load_dataset("SultanR/AraMix", "consensus")
```
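
Beyond the bare loaders, the `consensus` subset can be filtered on its `duplicated` column (documented as 2-7 in the previous card revision) to keep only documents that several sources agree on. A minimal sketch, assuming that column still exists:

```python
from datasets import load_dataset

# Keep consensus documents found in 3 or more source datasets.
# `duplicated` and `source` are documented in the previous card revision;
# adjust the column names if the schema has changed.
consensus = load_dataset("SultanR/AraMix", "consensus", split="train", streaming=True)
high_agreement = consensus.filter(lambda row: row["duplicated"] >= 3)

for row in high_agreement.take(5):
    print(row["duplicated"], row["source"])
```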

+ ## Sources
+
+ | Source | Tokens | Documents |
+ |--------|--------|-----------|
+ | CulturaX | 38.4B | 40.8M |
+ | ArabicWeb24 | 31.6B | 33.6M |
+ | HPLT 2.0 | 30.4B | 33.1M |
+ | FineWeb-2 | 24.2B | 30.6M |
+ | C4 | 20.4B | 23.0M |
+ | ClusterLab 101B | 7.7B | 5.9M |
+ | FinePDFs | 6.1B | 648K |
+
+ ## Pipeline
+
+ 1. Quality filtering with Arabic-specific thresholds (terminal punctuation, repetition patterns, script ratio); see the first sketch after this list
+ 2. Document-level MinHash deduplication (5-gram shingles, 14 bands, 8 hashes per bucket); see the second sketch
+ 3. Sentence-level deduplication (3-sentence spans, minimum 3 occurrences); see the third sketch
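
The card names the step 1 signals but publishes neither thresholds nor code. The first sketch below is a stdlib-only illustration; every numeric threshold in it (0.7, 0.5, 0.2) is an invented placeholder, not the pipeline's actual setting:

```python
import re
from collections import Counter

ARABIC_CHAR = re.compile(r"[\u0600-\u06FF]")  # basic Arabic Unicode block

def keep_document(text: str) -> bool:
    """Toy version of step 1; all thresholds are placeholder guesses."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return False
    # Script ratio: share of non-whitespace characters in the Arabic block.
    if sum(bool(ARABIC_CHAR.match(c)) for c in chars) / len(chars) < 0.7:
        return False
    # Terminal punctuation: most lines should end like sentences.
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if sum(ln.endswith((".", "!", "?", "؟")) for ln in lines) / len(lines) < 0.5:
        return False
    # Repetition: no single token may dominate the document.
    counts = Counter(re.findall(r"\S+", text))
    if counts.most_common(1)[0][1] / sum(counts.values()) > 0.2:
        return False
    return True
```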
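
Step 2's stated parameters imply 14 × 8 = 112 hash functions, with documents that agree on all 8 hashes in any band becoming duplicate candidates, which corresponds to a Jaccard threshold of roughly (1/14)^(1/8) ≈ 0.72. The second sketch illustrates that scheme; the actual implementation is unpublished, and a production run would more likely use a library such as datasketch or datatrove:

```python
import hashlib
import random
import re
from collections import defaultdict

NUM_BANDS, HASHES_PER_BAND = 14, 8        # banding shape from the card
NUM_HASHES = NUM_BANDS * HASHES_PER_BAND  # 112 MinHash functions
SHINGLE = 5                               # 5-gram word shingles
PRIME = (1 << 61) - 1                     # modulus for the hash family

rng = random.Random(0)
PARAMS = [(rng.randrange(1, PRIME), rng.randrange(PRIME)) for _ in range(NUM_HASHES)]

def signature(text: str) -> list:
    """112-value MinHash signature over 5-word shingles."""
    words = re.findall(r"\S+", text)
    shingles = {" ".join(words[i:i + SHINGLE])
                for i in range(max(1, len(words) - SHINGLE + 1))}
    sig = [PRIME] * NUM_HASHES
    for sh in shingles:
        h = int.from_bytes(hashlib.md5(sh.encode()).digest()[:8], "big")
        for i, (a, b) in enumerate(PARAMS):
            sig[i] = min(sig[i], (a * h + b) % PRIME)
    return sig

def candidate_pairs(docs):
    """Bucket each signature band; docs sharing any bucket become candidates."""
    buckets = defaultdict(list)
    for doc_id, text in docs.items():
        sig = signature(text)
        for band in range(NUM_BANDS):
            key = (band, tuple(sig[band * HASHES_PER_BAND:(band + 1) * HASHES_PER_BAND]))
            buckets[key].append(doc_id)
    return {(a, b) for ids in buckets.values()
            for i, a in enumerate(ids) for b in ids[i + 1:]}
```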
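
The third sketch covers step 3 as two passes over the corpus; the sentence splitter here is an assumption, since the card does not say how sentences are segmented:

```python
import re
from collections import Counter

SPAN, MIN_COUNT = 3, 3  # 3-sentence spans occurring 3+ times, per the card

def sentences(text: str) -> list:
    # Naive split on Latin/Arabic sentence terminators; an assumption.
    return [s.strip() for s in re.split(r"(?<=[.!?؟])\s+", text) if s.strip()]

def count_spans(corpus) -> Counter:
    """Pass 1: count every 3-sentence span across the corpus."""
    counts = Counter()
    for text in corpus:
        sents = sentences(text)
        for i in range(len(sents) - SPAN + 1):
            counts[tuple(sents[i:i + SPAN])] += 1
    return counts

def drop_repeated_spans(text: str, counts: Counter) -> str:
    """Pass 2: remove sentences sitting inside any over-represented span."""
    sents = sentences(text)
    doomed = set()
    for i in range(len(sents) - SPAN + 1):
        if counts[tuple(sents[i:i + SPAN])] >= MIN_COUNT:
            doomed.update(range(i, i + SPAN))
    return " ".join(s for i, s in enumerate(sents) if i not in doomed)
```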
+ ## Citation
+
+ TBD

+ ## License

+ See individual source dataset licenses.