Sudehsna committed · verified
Commit 5259cd0 · 1 Parent(s): 3c234e8

Upload README.md with huggingface_hub

Files changed (1): README.md (+27 −5)

README.md CHANGED
```diff
@@ -5,7 +5,7 @@ colorTo: red
 sdk: static
 configs:
 - config_name: default
-  data_files: "parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl"
+  data_files: parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl
 default: true
 ---
 
@@ -38,6 +38,9 @@ In total there were three different datasets used for alignment:
 **Original Dataset:**
 - Best aligned data: `matched_0.60_0.005_relative.jsonl`
 - Total aligned document pairs: 54350
+- Romansh tokens: 42,753,157
+- German tokens: 117,237,879
+- Total tokens: 159,991,036
 - Total entries in embeddings:
   - Romansh: 208321
   - German: 106559
@@ -56,6 +59,9 @@ In total there were three different datasets used for alignment:
 **Filtered:**
 - Best aligned data: `matched_0.60_0.005_relative.jsonl`
 - Total aligned document pairs: 53486
+- Romansh tokens: 41,097,668
+- German tokens: 114,789,489
+- Total tokens: 155,887,157
 - Total entries in embeddings:
   - Romansh: 200226
   - German: 106559
@@ -75,6 +81,9 @@ In total there were three different datasets used for alignment:
 **More Filtered:**
 - Best aligned data: `matched_0.60_0.005_relative.jsonl`
 - Total aligned document pairs: 51958
+- Romansh tokens: 39,149,607
+- German tokens: 110,733,850
+- Total tokens: 149,883,457
 - Total entries in embeddings:
   - Romansh: 175195
   - German: 106559
@@ -88,6 +97,7 @@ In total there were three different datasets used for alignment:
 - Std Dev: 0.046871048829964373
 - Threshold range: 0.50 - 0.80
 
+
 ![Threshold comparison for more filtered](threshold_comparison_tld.png)
 
 
@@ -102,8 +112,7 @@ In total there were three different datasets used for alignment:
 `penalized_sim = cos_sim - α * (abs(len_rm - len_de) / max(len_rm, len_de))`
 
 Overall best aligned parallel data: `parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl`
-
-The most filtered dataset combined with a relative length penalty (ALPHA=0.005) and a similarity threshold of 0.60 is the best aligned parallel data.
+The most filtered dataset combined with a relative length penalty (ALPHA=0.005) and a similarity threshold of 0.60 yielded the best parallel data.
 The heavy filtering removes noisy sentences before alignment, the relative penalty reduces mismatches without discarding too many valid pairs, and the 0.60 threshold balances precision and coverage. This setting consistently produced the cleanest and most reliable alignments in our comparisons.
 
 
@@ -111,7 +120,7 @@ The heavy filtering removes noisy sentences before alignment, the relative penal
 
 We tested cosine similarity thresholds (0.50–0.80) with two length-penalty settings: absolute (ALPHA=0.005) and relative (ALPHA=0.005). Across all datasets, the relative penalty consistently produced more matches, as it scales with sentence length and is less restrictive.
 
-The absolute penalty sharply reduced matches, especially above 0.55, while the relative penalty maintained higher counts but declined with increasing thresholds. In all datasets, 0.50–0.60 offered the best trade-off between quantity and precision, with the relative penalty being the more inclusive option and the absolute penalty the most conservative.
+The absolute penalty sharply reduced matches, especially above 0.60, while the relative penalty maintained higher counts but declined with increasing thresholds. In all datasets, 0.50–0.60 offered the best trade-off between quantity and precision, with the relative penalty being the more inclusive option and the absolute penalty the most conservative.
 
 
 ## Results from Dataset Analysis
@@ -120,9 +129,22 @@ The absolute penalty sharply reduced matches, especially above the r
 
 The domain distribution is highly imbalanced: the vast majority of URLs come from a small number of domains. Most notably, `www.rtr.ch` dominates the dataset with nearly 80,000 URLs, followed by `m.rtr.ch` and `www.gr.ch` with significantly fewer entries. All other domains appear less frequently, with fewer than 10,000 URLs each.
 
-This indicates that the Romansh dataset is heavily influenced by a few large sources, primarily public institutions and media outlets such as Radiotelevisiun Svizra Rumantscha (RTR) and cantonal websites.While this concentration can benefit consistency and linguistic quality, it may also limit the variety of content and styles represented in the dataset. This imbalance should be considered in any downstream tasks, as it could influence model performance and generalizability.
+This indicates that the Romansh dataset is heavily influenced by a few large sources, primarily public institutions and media outlets such as Radiotelevisiun Svizra Rumantscha (RTR) and cantonal websites. While this concentration can benefit consistency and linguistic quality, it may also limit the variety of content and styles represented in the dataset. This imbalance should be considered in any downstream tasks, as it could influence model performance and generalizability.
 ![Domain Distribution](domain_dist.png)
 
+### Domain Distribution of Final Dataset
+
+The final aligned dataset (51'958 document pairs) still shows the same imbalance in domain distribution.
+As in the original corpus, `www.rtr.ch` and `m.rtr.ch` dominate the data, together accounting for
+the majority of documents. Other domains such as `www.gr.ch`, `www.engadinerpost.ch`, and
+`rm.wikipedia.org` contribute smaller proportions, with most domains appearing only rarely.
+
+This again indicates that, although the filtering step likely improved the quality of parallel content, the final dataset remains highly imbalanced.
+(See `analysis/romansh_analysis.ipynb`, section _More Filtered_, for the list of all domains.)
+
+![Domain Distribution Final](tld_domains.png)
+
+
 ### How alignable is the dataset?
 
 After manually examining the 50 randomly sampled URLs from above, I found that none of the 50 entries had a clearly alignable German counterpart.
```
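The length-penalized scoring used in the README above (`penalized_sim = cos_sim - α * (abs(len_rm - len_de) / max(len_rm, len_de))`, with ALPHA=0.005 and a 0.60 threshold) can be sketched as follows. This is an illustrative reimplementation, not code from the repository; the function names `penalized_similarity` and `keep_pair` are assumptions.

```python
# Sketch of the relative length-penalty filter described in the README.
# ALPHA and THRESHOLD match the values reported there; everything else is illustrative.

ALPHA = 0.005      # weight of the relative length penalty
THRESHOLD = 0.60   # cosine-similarity cutoff for accepting a pair

def penalized_similarity(cos_sim: float, len_rm: int, len_de: int,
                         alpha: float = ALPHA) -> float:
    """Cosine similarity minus a penalty proportional to the relative length mismatch."""
    penalty = alpha * abs(len_rm - len_de) / max(len_rm, len_de)
    return cos_sim - penalty

def keep_pair(cos_sim: float, len_rm: int, len_de: int) -> bool:
    """A Romansh-German pair is kept when its penalized similarity clears the threshold."""
    return penalized_similarity(cos_sim, len_rm, len_de) >= THRESHOLD

# Equal-length pair: no penalty, passes at 0.62.
print(keep_pair(0.62, 120, 120))   # True
# Strong mismatch: penalty 0.005 * 270/300 = 0.0045 pushes 0.601 below 0.60.
print(keep_pair(0.601, 30, 300))   # False
```

Because the penalty is normalized by the longer document's length, it stays bounded by `alpha` regardless of absolute lengths, which is why the README finds it less restrictive than an absolute-difference penalty at the same alpha.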