LeMoussel committed
Commit 2daf9da · verified · 1 Parent(s): 80256d2

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+9 −7)

README.md CHANGED
@@ -47,9 +47,10 @@ dataset_info:
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Example Usage (Python)](#example-usage-python)
- - [Intended Use](#intended-use)
+ - [Intended Use][def]
  - [Dataset Statistics](#dataset-statistics)
  - [Token Statistics](#token-statistics)
+ - [Quality Signal: CCNet Perplexity](#quality-signal-ccnet-perplexity)
  - [License](#license)
  - [Aknowledgements](#aknowledgements)
  - [Citation](#citation)
@@ -130,9 +131,7 @@ Estimated using [sentencepiece](https://github.com/google/sentencepiece) token
  - Estimated total tokens: ~1.9B tokens
  - Average number of tokens per document: ~4 120 tokens

-## Quality Signal: CCNet Perplexity
-
-### Overview
+### Quality Signal: CCNet Perplexity

  `CCNet Perplexity` is a linguistic quality indicator used to measure how close a given text is to a high-quality reference corpus, typically Wikipedia.
  It is commonly employed in large-scale dataset filtering pipelines for language model training, in order to identify noisy, malformed, or out-of-domain content.
@@ -141,7 +140,7 @@ In this dataset, the score is computed using the open-source implementation prov
 
  The value is exposed in the `quality_signals` field under the key `ccnet_perplexity`.

-### Score Interpretation
+#### Score Interpretation

  - **Low perplexity (~100–300)**
    Text is linguistically close to Wikipedia:
@@ -163,11 +162,11 @@ Lower perplexity indicates that the text is more likely under the Wikipedia-trai
 
  ⚠️ A high perplexity score does not necessarily imply low semantic value, but rather a **linguistic distance** from the Wikipedia domain.

-### Observations for the Wikisource FR Dataset
+#### Observations for the Wikisource FR Dataset

  For this dataset:

-![CCNet Perplexity Histogram](wikisource_fr_ccnet_perplexity_histogram.png)
+![CCNet Perplexity Histogram](images/wikisource_fr_ccnet_perplexity_histogram.png)

  - **Median CCNet Perplexity: 298.84**

@@ -210,3 +209,6 @@ Many thanks to the [Wikimedia Foundation](https://wikimediafoundation.org/) for
  url = "https://huggingface.co/datasets/LeMoussel/wikisource_fr"
  }
  ```
+
+
+[def]: #intended-use
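
The README changes above document a `ccnet_perplexity` score exposed under each record's `quality_signals` field, with values around 100–300 indicating text close to the Wikipedia reference corpus. A minimal sketch of how such a signal could drive filtering is shown below; it assumes only the record layout described in the README (a `quality_signals` dict holding `ccnet_perplexity`), and the sample records and threshold are invented for illustration:

```python
# Sketch: filtering documents by the CCNet perplexity quality signal.
# The record layout (a `quality_signals` dict with a `ccnet_perplexity`
# key) follows the README; the sample records below are invented.

def keep_low_perplexity(records, max_perplexity=300.0):
    """Keep records whose CCNet perplexity is at or below the threshold.

    Per the README, scores around ~100-300 indicate text linguistically
    close to the Wikipedia reference corpus used by CCNet; higher scores
    mark linguistic distance, not necessarily low semantic value.
    """
    kept = []
    for rec in records:
        ppl = rec.get("quality_signals", {}).get("ccnet_perplexity")
        if ppl is not None and ppl <= max_perplexity:
            kept.append(rec)
    return kept


sample = [
    {"text": "Texte proche du style encyclopédique.",
     "quality_signals": {"ccnet_perplexity": 150.2}},
    {"text": "Contenu bruité ou hors domaine.",
     "quality_signals": {"ccnet_perplexity": 1250.7}},
]
filtered = keep_low_perplexity(sample)
print(len(filtered))  # only the first record passes the <= 300 cutoff
```

In practice the same predicate could be applied via `datasets.Dataset.filter` after loading the dataset from the Hub; the threshold of 300 here simply mirrors the upper end of the "low perplexity" band described in the README, not a value the dataset authors prescribe.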