Upload README.md with huggingface_hub
sdk: static
configs:
- config_name: default
  data_files: parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl
  default: true
---

This project performs document-level alignment between Romansh and German web texts, which were extracted from the [Fineweb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset. It uses [OpenAI](https://platform.openai.com/docs/models/text-embedding-3-small) embeddings and cosine similarity to identify potential parallel texts.

The full dataset is available on [Hugging Face](https://huggingface.co/datasets/Sudehsna/Romansh_German_Parallel_Data).

## Dataset

This project uses the Romansh and German partitions of the [Fineweb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset. Both the original and the removed versions of the dataset were used to improve alignment coverage.

### Dataset statistics

**Original Dataset:**

- Best aligned data: `matched_0.60_0.005_relative.jsonl`
- Total aligned document pairs: 54,350
- Total entries in embeddings:
  - Romansh: 208,321
  - German: 106,559
- Number of total unique domains: 8,106
- Total tokens in embeddings:
  - Romansh: 106,005,135
  - German: 203,258,736
- Cosine similarity score distribution:
  - Mean: 0.6538
  - Median: 0.6411
  - Std Dev: 0.0468
- Threshold range: 0.50–0.80


**Filtered:**

- Best aligned data: `matched_0.60_0.005_relative.jsonl`
- Total aligned document pairs: 53,486
- Total entries in embeddings:
  - Romansh: 200,226
  - German: 106,559
- Total tokens in embeddings:
  - Romansh: 101,263,110
  - German: 203,258,736
- Number of total unique domains: 2,036
- Cosine similarity score distribution:
  - Mean: 0.6538
  - Median: 0.6411
  - Std Dev: 0.0468
- Threshold range: 0.50–0.80
- Long tail: 5,582 domains with < 3 documents (removed)


**More Filtered:**

- Best aligned data: `matched_0.60_0.005_relative.jsonl`
- Total aligned document pairs: 51,958
- Total entries in embeddings:
  - Romansh: 175,195
  - German: 106,559
- Total tokens in embeddings:
  - Romansh: 86,917,433
  - German: 203,258,736
- Number of total unique domains: 943
- Cosine similarity score distribution:
  - Mean: 0.6539
  - Median: 0.6411
  - Std Dev: 0.0469
- Threshold range: 0.50–0.80


**General:**

- Token length limits: embeddings were truncated at 8,192 tokens (OpenAI model constraint)
- Sentence length penalties:

  **Absolute penalty**:
  `penalized_sim = cos_sim - α * abs(len_rm - len_de)`

  **Relative penalty**:
  `penalized_sim = cos_sim - α * (abs(len_rm - len_de) / max(len_rm, len_de))`
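The two penalty formulas above can be sketched in Python as follows (a minimal illustration; the function names are ours, and we assume `len_rm`/`len_de` are document lengths and ALPHA = 0.005 as used elsewhere in this README):

```python
# Sketch of the two sentence-length penalties described above.
# cos_sim: cosine similarity of a candidate pair.
# len_rm, len_de: lengths of the Romansh and German documents.
ALPHA = 0.005

def absolute_penalty(cos_sim: float, len_rm: int, len_de: int) -> float:
    """penalized_sim = cos_sim - α * |len_rm - len_de|"""
    return cos_sim - ALPHA * abs(len_rm - len_de)

def relative_penalty(cos_sim: float, len_rm: int, len_de: int) -> float:
    """penalized_sim = cos_sim - α * |len_rm - len_de| / max(len_rm, len_de)"""
    return cos_sim - ALPHA * abs(len_rm - len_de) / max(len_rm, len_de)

# For a 200-unit length gap, the absolute penalty subtracts a full 1.0,
# while the relative penalty subtracts at most ALPHA:
sim_abs = absolute_penalty(0.65, 1200, 1000)  # 0.65 - 0.005 * 200 = -0.35
sim_rel = relative_penalty(0.65, 1200, 1000)  # 0.65 - 0.005 * (200 / 1200)
```

Note how, for document-scale lengths, the absolute penalty can exceed the similarity score itself, while the relative penalty is bounded by α; this is why the relative setting discards fewer valid pairs.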

Overall best aligned parallel data: `parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl`

The most filtered dataset combined with a relative length penalty (ALPHA = 0.005) and a similarity threshold of 0.60 yields the best-aligned parallel data: the heavy filtering removes noisy documents before alignment, the relative penalty reduces length mismatches without discarding too many valid pairs, and the 0.60 threshold balances precision and coverage. This setting consistently produced the cleanest and most reliable alignments in our comparisons.

## Threshold and Alpha Comparison

We tested cosine similarity thresholds (0.50–0.80) with two length-penalty settings, absolute and relative, both with ALPHA = 0.005. Across all datasets, the relative penalty consistently produced more matches, as it scales with sentence length and is less restrictive.

The absolute penalty sharply reduced matches, especially above 0.55, while the relative penalty maintained higher counts that declined as the threshold increased. In all datasets, 0.50–0.60 offered the best trade-off between quantity and precision, with the relative penalty being the more inclusive option and the absolute penalty the more conservative.
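The comparison logic amounts to counting, for each threshold in the sweep, how many candidate pairs survive. A minimal sketch over toy scores (illustrative only, not the project's actual comparison script):

```python
# Count how many candidate pairs clear each similarity threshold.
def matches_per_threshold(scores, thresholds):
    return {t: sum(1 for s in scores if s >= t) for t in thresholds}

scores = [0.52, 0.58, 0.61, 0.63, 0.67, 0.72, 0.81]  # toy penalized similarities
thresholds = [round(0.50 + 0.05 * i, 2) for i in range(7)]  # 0.50 ... 0.80
counts = matches_per_threshold(scores, thresholds)
# Higher thresholds keep fewer, higher-precision pairs.
```

Running the same sweep once with absolute-penalized and once with relative-penalized scores reproduces the quantity/precision trade-off described above.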

## Results from Dataset Analysis

### Domain Distribution of Original Dataset

The domain distribution is highly imbalanced: the vast majority of URLs come from a small number of domains. Most notably, `www.rtr.ch` dominates the dataset with nearly 80,000 URLs, followed by `m.rtr.ch` and `www.gr.ch` with significantly fewer entries. All other domains appear less frequently, with fewer than 10,000 URLs each.

This indicates that the Romansh dataset is heavily influenced by a few large sources, primarily public institutions and media outlets such as Radiotelevisiun Svizra Rumantscha (RTR) and cantonal websites. While this concentration can benefit consistency and linguistic quality, it may also limit the variety of content and styles represented in the dataset. This imbalance should be considered in any downstream tasks, as it could influence model performance and generalizability.



### How alignable is the dataset?

This suggests that parallel data in the Romansh subset is extremely rare, and that future alignment should rely on automated methods with robust filtering. This set of 50 samples now serves as a gold standard for evaluating automatic alignment thresholds.

### Dataset Quality Note

Because the dataset was aligned with a relatively low cosine similarity threshold (0.60) to increase the number of matches, alignment quality is likely reduced and noisy or weakly related sentence pairs may be included. Further filtering or manual inspection would be necessary to improve reliability.

## Methodology

1. **Preprocessing**:

- Cosine similarity computed between Romansh and German document embeddings
- With or without penalization for sentence length differences
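The alignment step can be sketched as follows (a simplified, stdlib-only illustration with toy vectors and names of our choosing; the real pipeline scores OpenAI embeddings and optionally applies the length penalties):

```python
# For each Romansh embedding, pick the German embedding with the highest
# cosine similarity and keep the pair only if it clears the threshold.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def align(rm_embs, de_embs, threshold=0.60):
    pairs = []
    for i, rm in enumerate(rm_embs):
        sims = [cosine(rm, de) for de in de_embs]
        j = max(range(len(sims)), key=sims.__getitem__)
        if sims[j] >= threshold:
            pairs.append((i, j, sims[j]))
    return pairs

# Toy vectors: Romansh doc 0 is nearly identical to German doc 1, and vice versa.
rm = [[1.0, 0.0, 0.2], [0.0, 1.0, 0.0]]
de = [[0.0, 0.9, 0.1], [1.0, 0.0, 0.25]]
pairs = align(rm, de)
```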

## Evaluation

A negative gold standard was used to ensure quality: a set of Romansh documents known to have no valid German alignment. These were checked for false positives.
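The negative-gold-standard check reduces to a set membership test: any matched pair whose Romansh document is in the negative set is a false positive. A hedged sketch (identifiers and URLs are illustrative, not the project's actual code):

```python
# Flag matched pairs whose Romansh side is known to have no valid German counterpart.
def false_positives(matched_pairs, negative_gold_urls):
    """matched_pairs: iterable of (romansh_url, german_url) tuples."""
    negatives = set(negative_gold_urls)
    return [p for p in matched_pairs if p[0] in negatives]

matched = [("rm.example/a", "de.example/a"), ("rm.example/b", "de.example/b")]
negative_gold = ["rm.example/b", "rm.example/c"]
fps = false_positives(matched, negative_gold)
# fps holds the one alignment that should not exist.
```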

...

With ALPHA being `0.005` and LENGTH_PENALTY_TYPE being either `"relative"` or `"absolute"`.

For `/parallel_data_filtered/` we chose to always include a penalty.

### File Organization

| Directory | Description |
|---|---|
| `parallel_data_unfiltered/` | All alignments with the unfiltered dataset |
| `parallel_data_filtered/no_tail_dedup` | All alignments with the filtered dataset (long tail removed and deduplicated) |
| `parallel_data_filtered/tld` | All alignments with the more filtered dataset (only a few TLDs) |
| `parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl` | Best matched parallel data |
| `embedded/original` | All German and Romansh embeddings |
| `embedded/filtered_notail_dedup` | Embeddings with long tail removed and deduplicated |
| `embedded/filtered_tld` | Embeddings only with chosen TLDs |
| `fineweb_original/` | Original data |