Sudehsna committed on
Commit
55ecbd6
·
verified ·
1 Parent(s): 746a430

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +142 -35
README.md CHANGED
@@ -3,68 +3,154 @@ title: Romansh–German Parallel Dataset (FineWeb-based)
  colorFrom: gray
  colorTo: red
  sdk: static
- configs:
- - config_name: default
-   data_files: "parallel_data/matched_0.60_0.005.jsonl"
-   default: true
  ---
 

  # Romansh–German Parallel Dataset (FineWeb-Based)

  This dataset contains automatically aligned Romansh–German document pairs, extracted from the [FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset using cosine similarity over OpenAI embeddings. It was created as part of a university programming project focused on document-level parallel data extraction.

- ## Contents

- The dataset consists of several alignment outputs based on different cosine similarity thresholds and sentence-length penalties:

- threshold_matches/
- ├── matched_{THRESHOLD}_{ALPHA}.jsonl
- ├── matched_0.60_0.005.jsonl
- ...

- with ALPHA being either `0.005` or `None`.

- ## Format

- Each `.jsonl` file contains entries with the following fields:

- ```json
- {
-   "romansh_text": "Romansh document text",
-   "german_text": "Most similar German document text or null",
-   "similarity": 0.6123,          // score after optional length penalty
-   "original_similarity": 0.6085  // raw cosine similarity
- }
- ```

  ## Methodology

- 1. Documents were embedded using OpenAI [`text-embedding-3-small`](https://platform.openai.com/docs/models/text-embedding-3-small).
- 2. Cosine similarity was computed between Romansh and German documents.
- 3. A length penalty was applied:
-    `penalized_score = cosine_similarity - alpha × |len_diff|`
- 4. For each Romansh document, the best-scoring German document was selected.
- 5. Only matches above a given threshold were retained.

- **Thresholds tested:** `0.5 – 0.8`
- **Final dataset:** `/parallel_data/matched_0.60_0.005.jsonl` ("best" match)

56
  ## Threshold and Alpha Comparison

- We compared cosine similarity thresholds (0.50–0.80) using two alpha settings: `alpha=None` (no length regularization) and `alpha=0.005` (penalizing length mismatch). `alpha=None` produced more matches, especially at low thresholds, but with a steep quality drop as the threshold increased. `alpha=0.005` yielded fewer but more consistent matches.

- Conclusion:
- For a better balance between match quantity and alignment quality, we selected `alpha=0.005` with a threshold of `0.60`.

  ### Dataset Quality Note

  Because the dataset was aligned with a relatively low cosine similarity threshold (0.60) to increase the number of matches, alignment quality is likely reduced and noisy or weakly related document pairs may be present. Further filtering or manual inspection would be necessary to improve reliability.

  ## Evaluation

  A negative gold standard was used to ensure quality: a set of Romansh documents known to have no valid German alignment. These were checked for false positives.
@@ -87,12 +173,33 @@ dataset = load_dataset(
  )
  ```

  ## File Organization

  | File | Description |
  | --------------------- | ---------------------------------- |
- | `parallel_data/` | All alignment outputs by threshold |
  | `/parallel_data/matched_0.60_0.005.jsonl` | "best match" |
- | `embedded` | German and Romansh embeddings |
- | `data/` | Original data |

  colorFrom: gray
  colorTo: red
  sdk: static
  ---

  # Romansh–German Parallel Dataset (FineWeb-Based)

  This dataset contains automatically aligned Romansh–German document pairs, extracted from the [FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset using cosine similarity over OpenAI embeddings. It was created as part of a university programming project focused on document-level parallel data extraction.

+ ## Description

+ This project performs document-level alignment between Romansh and German web texts extracted from the [FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset. It uses [OpenAI](https://platform.openai.com/docs/models/text-embedding-3-small) embeddings and cosine similarity to identify potential parallel texts.
+ The full dataset is available on [Hugging Face](https://huggingface.co/datasets/Sudehsna/Romansh_German_Parallel_Data).

+ ## Dataset

+ This project uses the Romansh and German partitions of the [FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset. Both the original and the `removed` versions of each partition were used to improve alignment coverage.

+ Since the German partition contained significantly more entries than the Romansh partition, it was filtered by retaining only those entries whose domains also occur in the Romansh data.
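
A minimal sketch of this domain filter (the `url` field name and the helper functions are illustrative, not the project's actual code):

```python
from urllib.parse import urlparse

def domain(url: str) -> str:
    """Host part of a document URL, lower-cased."""
    return urlparse(url).netloc.lower()

def filter_by_shared_domains(german_docs, romansh_docs):
    """Keep only German documents whose domain also appears in the Romansh data."""
    romansh_domains = {domain(d["url"]) for d in romansh_docs}
    return [d for d in german_docs if domain(d["url"]) in romansh_domains]
```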

+ **Dataset Filtering**:
+ We first used a brute-force approach, computing cosine similarities between all Romansh and German embeddings without filtering (see `data/embedded/original` and the initial outputs in `data/parallel_data_unfiltered/` on Hugging Face). After evaluating the quality of these matches, we refined the Romansh embeddings by removing entries from long-tail domains (i.e., domains with fewer than 3 documents) and deduplicating (`data/embedded/filtered_notail_dedup`). We then filtered this dataset further, keeping only entries whose domains end in `.ch`, `.org`, `.gov`, or `.edu`, since these are the most likely to have both Romansh and German entries. Cosine similarities were then recomputed on this cleaner subset to improve alignment precision.
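
The two refinement stages might look roughly like this (the `domain`/`text` record layout and the function names are assumed for illustration):

```python
from collections import Counter

def remove_long_tail_and_dedup(docs, min_docs_per_domain=3):
    """Drop documents from long-tail domains (fewer than min_docs_per_domain
    documents), then deduplicate by exact text."""
    counts = Counter(d["domain"] for d in docs)
    kept, seen = [], set()
    for d in docs:
        if counts[d["domain"]] < min_docs_per_domain or d["text"] in seen:
            continue
        seen.add(d["text"])
        kept.append(d)
    return kept

def keep_selected_tlds(docs, tlds=(".ch", ".org", ".gov", ".edu")):
    """Second stage: keep only documents from the selected top-level domains."""
    return [d for d in docs if d["domain"].endswith(tlds)]
```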

+ In total, three different datasets were used for alignment:
+ - the original FineWeb dataset
+ - a filtered dataset (long-tail domains removed, deduplicated)
+ - a further filtered dataset restricted to the top-level domains `.ch`, `.org`, `.gov`, and `.edu`
 
 

+ ### Dataset statistics

+ **Original Dataset:**
+ - Total aligned document pairs: ?
+ - Total entries in embeddings:
+   - Romansh: 208,321
+   - German: 333,993
+ - Total tokens in embeddings:
+   - Romansh: 106,005,135
+   - German: ?
+ - Cosine similarity score distribution:
+   - Mean: ?
+   - Median: ?
+   - Std Dev: ?
+ - Threshold range: 0.50–0.80
+ - Top domain: www.rtr.ch (79,603 documents)

+ **Filtered (no long tail, deduplicated):**
+ - Total aligned document pairs: ?
+ - Total entries in embeddings:
+   - Romansh: 208,321
+   - German: 333,993
+ - Total tokens in embeddings:
+   - Romansh: 101,263,110
+   - German: ?
+ - Number of unique domains: 8,106
+ - Cosine similarity score distribution:
+   - Mean: ?
+   - Median: ?
+   - Std Dev: ?
+ - Threshold range: 0.50–0.80
+ - Long tail: 5,582 domains with < 3 documents (removed)

+ **More Filtered (restricted TLDs):**
+ - Total aligned document pairs: ?
+ - Total entries in embeddings:
+   - Romansh: 175,195
+   - German: 333,993 (unfiltered)
+ - Total tokens in embeddings:
+   - Romansh: 86,917,433
+   - German: ?
+ - Number of unique domains: 8,106
+ - Cosine similarity score distribution:
+   - Mean: ?
+   - Median: ?
+   - Std Dev: ?
+ - Threshold range: 0.50–0.80

+ **General:**
+ - Token length limits: embeddings were truncated at 8,192 tokens (OpenAI model constraint)
+ - Length penalties:
+   - **Absolute penalty**: `penalized_sim = cos_sim - α * abs(len_rm - len_de)`
+   - **Relative penalty**: `penalized_sim = cos_sim - α * (abs(len_rm - len_de) / max(len_rm, len_de))`
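
Both penalties can be sketched in a few lines (the function name and defaults are illustrative; `alpha = 0.005` matches the absolute-penalty setting used elsewhere in this card):

```python
def penalized_similarity(cos_sim, len_rm, len_de, alpha=0.005, mode="absolute"):
    """Apply the length penalty described above to a raw cosine similarity.

    mode is "absolute", "relative", or None (no penalty)."""
    if mode is None:
        return cos_sim
    diff = abs(len_rm - len_de)
    if mode == "absolute":
        return cos_sim - alpha * diff
    if mode == "relative":
        return cos_sim - alpha * (diff / max(len_rm, len_de))
    raise ValueError(f"unknown penalty mode: {mode}")

# e.g. a 0.65 match between a 120-token and a 100-token document:
# absolute: 0.65 - 0.005 * 20        = 0.55
# relative: 0.65 - 0.005 * (20/120) ≈ 0.6492
```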

+ ## Results from Dataset Analysis

+ ### Domain Distribution

+ The domain distribution is highly imbalanced: the vast majority of URLs come from a small number of domains. Most notably, `www.rtr.ch` dominates the dataset with nearly 80,000 URLs, followed by `m.rtr.ch` and `www.gr.ch` with significantly fewer entries. All other domains appear less frequently, with fewer than 10,000 URLs each.

+ This indicates that the Romansh dataset is heavily influenced by a few large sources, primarily public institutions and media outlets such as Radiotelevisiun Svizra Rumantscha (RTR) and cantonal websites. While this concentration can benefit consistency and linguistic quality, it may also limit the variety of content and styles represented in the dataset. This imbalance should be considered in any downstream tasks, as it could influence model performance and generalizability.

+ ### How alignable is the dataset?

+ After manually examining 50 randomly sampled URLs, I found that none of the 50 entries had a clearly alignable German counterpart.
+ To check alignments I relied on matching 1) named entities, 2) dates and numbers, and 3) translations of phrases or words (with [Supertext](https://www.supertext.com/de-CH)).

+ This suggests that parallel data in the Romansh subset is extremely rare, and that future alignment should rely on automated methods with robust filtering. This set of 50 samples now serves as a gold standard for evaluating automatic alignment thresholds.

  ## Methodology

+ 1. **Preprocessing**:
+    - German dataset filtered to contain only domains that also appear in the Romansh dataset
+ 2. **Manual Evaluation**:
+    - A sample of 50 documents was manually checked to estimate alignability (see `analysis/manual_alignment`)
+ 3. **Embedding**:
+    - Used OpenAI [`text-embedding-3-small`](https://platform.openai.com/docs/models/text-embedding-3-small) (with truncation at 8,192 tokens)
+ 4. **Similarity Calculation**:
+    - Cosine similarity computed between Romansh and German document embeddings
+    - With or without penalties for document length difference
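
The similarity calculation and best-match selection amount to a nearest-neighbour search over cosine similarity. A minimal pure-Python sketch (the real pipeline operates on OpenAI embedding vectors; names here are illustrative):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def align(romansh_embs, german_embs, threshold=0.60):
    """For each Romansh embedding, pick the highest-scoring German embedding
    and keep the pair only if it clears the similarity threshold."""
    matches = []
    for i, rm in enumerate(romansh_embs):
        j, score = max(
            ((j, cosine(rm, de)) for j, de in enumerate(german_embs)),
            key=lambda t: t[1],
        )
        if score >= threshold:
            matches.append((i, j, score))
    return matches
```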

  ## Threshold and Alpha Comparison

+ We compared cosine similarity thresholds (0.50–0.80) on the original dataset under three penalty settings:
+ - `ALPHA = None` and `LENGTH_PENALTY_TYPE = None` (no length regularization)
+ - `ALPHA = 0.005` with `LENGTH_PENALTY_TYPE = "absolute"` (absolute penalty)
+ - `ALPHA = 0.05` with `LENGTH_PENALTY_TYPE = "relative"` (relative penalty)

+ `ALPHA = None` produced more matches, especially at low thresholds, but with a steep quality drop as the threshold increased. `ALPHA = 0.005` yielded fewer but more consistent matches.

+ To determine a suitable _similarity threshold_ for aligning document pairs, we tested multiple values and observed that a threshold of **0.60** provided the best balance for this dataset (see `./analysis/threshold_analysis/threshold_comparisons.py`).

  ### Dataset Quality Note

  Because the dataset was aligned with a relatively low cosine similarity threshold (0.60) to increase the number of matches, alignment quality is likely reduced and noisy or weakly related document pairs may be present. Further filtering or manual inspection would be necessary to improve reliability.
 

+ ## Format

+ Each `.jsonl` file contains entries with the following fields:

+ ```json
+ {
+   "romansh_text": " ... ",
+   "german_text": " ... ",
+   "romansh_url": " ... ",
+   "german_url": " ... ",
+   "similarity": float,
+   "original_similarity": float
+ }
+ ```
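
Such a file can be read line by line with the standard library. A small illustrative loader (the function name and the optional re-filtering threshold are examples, not part of the released code):

```python
import json

def load_pairs(path, min_similarity=0.60):
    """Yield (romansh, german, similarity) triples from a matched_*.jsonl file,
    skipping unmatched entries and optionally re-filtering by the
    (possibly length-penalized) similarity score."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            entry = json.loads(line)
            if entry["german_text"] is not None and entry["similarity"] >= min_similarity:
                yield entry["romansh_text"], entry["german_text"], entry["similarity"]
```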

  ## Evaluation

  A negative gold standard was used to ensure quality: a set of Romansh documents known to have no valid German alignment. These were checked for false positives.
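
A sketch of this check, assuming the gold negatives are identified by their Romansh URLs and that `matches` maps each Romansh URL to its matched German URL (or `None` if unmatched); both names are illustrative:

```python
def false_positive_rate(matches, gold_negative_urls):
    """Fraction of known-unalignable Romansh documents that nevertheless
    received a German match above the threshold."""
    hits = [u for u in gold_negative_urls if matches.get(u) is not None]
    return len(hits) / len(gold_negative_urls)
```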
 
  )
  ```

+ ## Contents

+ The dataset consists of several alignment outputs based on different cosine similarity thresholds and length penalties:

+ parallel_data_unfiltered/
+ ├── matched_{THRESHOLD}_{ALPHA}_{LENGTH_PENALTY_TYPE}.jsonl
+ ├── matched_0.50_0.005_relative.jsonl
+ ├── matched_0.50_0.005_absolute.jsonl
+ ├── matched_0.50_none.jsonl
+ ...

+ with ALPHA being `0.005` or `None`, and LENGTH_PENALTY_TYPE being `"relative"`, `"absolute"`, or `None`.
+ The same structure applies to `/parallel_data_filtered`.
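
For programmatic access, the run parameters can be recovered from these file names. A small parser sketch, assuming the naming scheme above:

```python
import re
from pathlib import Path

# Matches e.g. matched_0.50_0.005_relative.jsonl or matched_0.50_none.jsonl
PATTERN = re.compile(
    r"matched_(?P<threshold>\d\.\d+)"
    r"_(?:(?P<alpha>\d\.\d+)_(?P<penalty>relative|absolute)|none)\.jsonl"
)

def parse_run(path):
    """Extract threshold, alpha, and penalty type from an output file name."""
    m = PATTERN.fullmatch(Path(path).name)
    if m is None:
        raise ValueError(f"unexpected file name: {path}")
    return {
        "threshold": float(m["threshold"]),
        "alpha": float(m["alpha"]) if m["alpha"] else None,
        "penalty": m["penalty"],  # None when no length penalty was used
    }
```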

  ## File Organization

  | File | Description |
  | --------------------- | ---------------------------------- |
+ | `parallel_data_unfiltered/` | All alignments with the unfiltered dataset |
+ | `parallel_data_filtered/no_tail_dedup` | All alignments with the filtered dataset (long tail removed, deduplicated) |
+ | `parallel_data_filtered/tld` | All alignments with the further filtered dataset (restricted TLDs) |
  | `/parallel_data/matched_0.60_0.005.jsonl` | "best match" |
+ | `embedded/original` | All German and Romansh embeddings |
+ | `embedded/filtered_notail_dedup` | German and Romansh embeddings, long tail removed and deduplicated |
+ | `embedded/filtered_notail_dedup/filtered_topleveldomain` | German and Romansh embeddings, restricted TLDs only |
+ | `original/` | Original data |