|
|
--- |
|
|
title: Romansh–German Parallel Dataset (FineWeb-based) |
|
|
colorFrom: gray |
|
|
colorTo: red |
|
|
sdk: static |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: "parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl" |
|
|
default: true |
|
|
--- |
|
|
|
|
|
# Romansh–German Parallel Dataset (FineWeb-Based) |
|
|
|
|
|
This dataset contains automatically aligned Romansh–German document pairs, extracted from the [Fineweb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset using cosine similarity over OpenAI embeddings. It was created as part of a university programming project focused on document-level parallel data extraction. |
|
|
|
|
|
## Description |
|
|
|
|
|
This project performs document-level alignment between Romansh and German web texts, which were extracted from the [Fineweb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset. It uses [OpenAI](https://platform.openai.com/docs/models/text-embedding-3-small) embeddings and cosine similarity to identify potential parallel texts. |
|
|
The full dataset is available on [Hugging Face](https://huggingface.co/datasets/Sudehsna/Romansh_German_Parallel_Data). |
|
|
|
|
|
## Dataset |
|
|
|
|
|
This project uses the Romansh and German partitions of the [Fineweb2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) dataset. Both the original and the removed versions of the dataset were used to improve alignment coverage. |
|
|
|
|
|
Since the German dataset contained significantly more entries than the Romansh dataset, it was filtered by retaining only those entries whose domains matched those found in the Romansh dataset. |
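This domain-based pre-filtering can be sketched in a few lines. The snippet below is an illustration under the assumption that each record carries a `url` field (as FineWeb 2 records do); the helper names are our own, not the project's actual code:

```python
from urllib.parse import urlparse

def domain_of(url):
    """Extract the host part of a document URL."""
    return urlparse(url).netloc.lower()

def filter_by_shared_domains(german_docs, romansh_docs):
    """Keep only German documents whose domain also occurs in the Romansh set."""
    romansh_domains = {domain_of(d["url"]) for d in romansh_docs}
    return [d for d in german_docs if domain_of(d["url"]) in romansh_domains]

rm = [{"url": "https://www.rtr.ch/a"}]
de = [{"url": "https://www.rtr.ch/b"}, {"url": "https://www.spiegel.de/c"}]
# only the rtr.ch entry survives, since spiegel.de has no Romansh counterpart
print(filter_by_shared_domains(de, rm))
```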
|
|
|
|
|
|
|
|
**Dataset Filtering**: |
|
|
We first used a brute-force approach, computing cosine similarities between all Romansh and German embeddings without any filtering (see `data/embedded/original` and the initial outputs in `data/parallel_data_unfiltered/` on Hugging Face). After evaluating the quality of these matches, we refined the Romansh embeddings by removing entries from long-tail domains (i.e., domains with fewer than 3 documents) and deduplicating (`data/embedded/filtered_notail_dedup`). We then filtered this dataset further, keeping only entries whose domains end in `.ch`, `.org`, `.gov`, or `.edu`, since these are the most likely to contain both Romansh and German content. Cosine similarities were then recomputed on this cleaner subset to improve alignment precision. |
|
|
|
|
|
In total, three different datasets were used for alignment: |
|
|
- Original Fineweb dataset |
|
|
- Filtered dataset with long-tail domains removed and duplicates dropped |
|
|
- Further filtered dataset restricted to the top-level domains `.ch`, `.org`, `.gov`, and `.edu` |
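The long-tail removal, deduplication, and TLD restriction described above could be sketched roughly as follows. The `filter_corpus` helper, its field names, and the `min_docs_per_domain` default are illustrative assumptions, not the project's actual implementation:

```python
from collections import Counter
from urllib.parse import urlparse

KEPT_TLDS = (".ch", ".org", ".gov", ".edu")

def filter_corpus(docs, min_docs_per_domain=3, tld_filter=False):
    """Drop long-tail domains, deduplicate exact texts, and optionally
    keep only documents from the selected top-level domains."""
    domains = [urlparse(d["url"]).netloc for d in docs]
    counts = Counter(domains)
    seen_texts, kept = set(), []
    for doc, dom in zip(docs, domains):
        if counts[dom] < min_docs_per_domain:
            continue  # long-tail domain (fewer than 3 documents)
        if tld_filter and not dom.endswith(KEPT_TLDS):
            continue  # domain outside the kept TLDs
        if doc["text"] in seen_texts:
            continue  # exact duplicate
        seen_texts.add(doc["text"])
        kept.append(doc)
    return kept
```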
|
|
|
|
|
|
|
|
### Dataset statistics |
|
|
**Original Dataset:** |
|
|
- Best aligned data: `matched_0.60_0.005_relative.jsonl` |
|
|
- Total aligned document pairs: 54,350 |
|
|
- Romansh tokens: 42,753,157 |
|
|
- German tokens: 117,237,879 |
|
|
- Total tokens: 159,991,036 |
|
|
- Total entries in embeddings: |
|
|
- Romansh: 208,321 |


- German: 106,559 |


- Number of total unique domains: 8,106 |
|
|
- Total tokens in embeddings: |
|
|
- Romansh: 106,005,135 |
|
|
- German: 203,258,736 |
|
|
- Cosine similarity score distribution: |
|
|
- Mean: 0.65380 |


- Median: 0.64111 |


- Std Dev: 0.04678 |
|
|
- Threshold range: 0.50 - 0.80 |
|
|
|
|
|
 |
|
|
|
|
|
**Filtered:** |
|
|
- Best aligned data: `matched_0.60_0.005_relative.jsonl` |
|
|
- Total aligned document pairs: 53,486 |
|
|
- Romansh tokens: 41,097,668 |
|
|
- German tokens: 114,789,489 |
|
|
- Total tokens: 155,887,157 |
|
|
- Total entries in embeddings: |
|
|
- Romansh: 200,226 |


- German: 106,559 |
|
|
- Total tokens in embeddings: |
|
|
- Romansh: 101,263,110 |
|
|
- German: 203,258,736 |
|
|
- Number of total unique domains: 2,036 |
|
|
- Cosine similarity score distribution: |
|
|
- Mean: 0.65384 |


- Median: 0.64109 |


- Std Dev: 0.04683 |
|
|
- Threshold range: 0.50 - 0.80 |
|
|
- Long tail: 5,582 domains with < 3 documents (removed) |
|
|
|
|
|
 |
|
|
|
|
|
**More Filtered:** |
|
|
- Best aligned data: `matched_0.60_0.005_relative.jsonl` |
|
|
- Total aligned document pairs: 51,958 |
|
|
- Romansh tokens: 39,149,607 |
|
|
- German tokens: 110,733,850 |
|
|
- Total tokens: 149,883,457 |
|
|
- Total entries in embeddings: |
|
|
- Romansh: 175,195 |


- German: 106,559 |
|
|
- Total tokens in embeddings: |
|
|
- Romansh: 86,917,433 |
|
|
- German: 203,258,736 |
|
|
- Number of total unique domains: 943 |
|
|
- Cosine similarity score distribution: |
|
|
- Mean: 0.65388 |


- Median: 0.64111 |


- Std Dev: 0.04687 |
|
|
- Threshold range: 0.50 - 0.80 |
|
|
|
|
|
|
|
|
 |
|
|
|
|
|
|
|
|
**General:** |
|
|
- Token length limits: Embeddings were truncated at 8192 tokens (OpenAI model constraint) |
|
|
- Sentence Length Penalties: |
|
|
|
|
|
**Absolute penalty**: |
|
|
`penalized_sim = cos_sim - α * abs(len_rm - len_de)` |
|
|
|
|
|
**Relative penalty**: |
|
|
`penalized_sim = cos_sim - α * (abs(len_rm - len_de) / max(len_rm, len_de))` |
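Both penalty variants can be expressed in a few lines. This is a minimal sketch with a function name of our own; it assumes `len_rm` and `len_de` are the document token counts:

```python
def penalized_similarity(cos_sim, len_rm, len_de, alpha=0.005, mode="relative"):
    """Subtract a length-difference penalty from a cosine similarity score."""
    diff = abs(len_rm - len_de)
    if mode == "absolute":
        return cos_sim - alpha * diff
    # relative: normalize the difference by the longer document
    return cos_sim - alpha * diff / max(len_rm, len_de)

# a 50-token difference on a 100-token document costs only 0.0025 relatively
print(penalized_similarity(0.7, 50, 100))
```

Note that with document-level token counts, the absolute variant subtracts `α` per token of difference, which is why it prunes far more aggressively than the relative variant at the same ALPHA.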
|
|
|
|
|
Overall best aligned parallel data: `parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl` |
|
|
The most filtered dataset combined with a relative length penalty (ALPHA=0.005) and a similarity threshold of 0.60 yielded the best parallel data. |
|
|
The heavy filtering removes noisy sentences before alignment, the relative penalty reduces mismatches without discarding too many valid pairs, and the 0.60 threshold balances precision and coverage. This setting consistently produced the cleanest and most reliable alignments in our comparisons. |
|
|
|
|
|
|
|
|
## Threshold and Alpha Comparison |
|
|
|
|
|
We tested cosine similarity thresholds (0.50–0.80) with two length-penalty settings: absolute (ALPHA=0.005) and relative (ALPHA=0.005). Across all datasets, the relative penalty consistently produced more matches, as it scales with sentence length and is less restrictive. |
|
|
|
|
|
The absolute penalty sharply reduced matches, especially above 0.60, while the relative penalty maintained higher counts but declined with increasing thresholds. In all datasets, 0.50–0.60 offered the best trade-off between quantity and precision, with the relative penalty being the more inclusive option and the absolute penalty the most conservative. |
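A threshold sweep of this kind can be sketched as a simple count over the penalized scores (the helper name is hypothetical):

```python
def sweep_thresholds(scores, thresholds=(0.50, 0.60, 0.70, 0.80)):
    """Count surviving candidate pairs at each similarity threshold."""
    return {t: sum(1 for s in scores if s >= t) for t in thresholds}

# higher thresholds keep progressively fewer pairs
print(sweep_thresholds([0.55, 0.65, 0.85]))
```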
|
|
|
|
|
|
|
|
## Results from Dataset Analysis |
|
|
|
|
|
### Domain Distribution of Original Dataset |
|
|
|
|
|
The domain distribution is highly imbalanced: the vast majority of URLs come from a small number of domains. Most notably, `www.rtr.ch` dominates the dataset with nearly 80,000 URLs, followed by `m.rtr.ch` and `www.gr.ch` with significantly fewer entries. All other domains appear less frequently, with fewer than 10,000 URLs each. |
|
|
|
|
|
This indicates that the Romansh dataset is heavily influenced by a few large sources, primarily public institutions and media outlets such as Radiotelevisiun Svizra Rumantscha (RTR) and cantonal websites. While this concentration can benefit consistency and linguistic quality, it may also limit the variety of content and styles represented in the dataset. This imbalance should be considered in any downstream tasks, as it could influence model performance and generalizability. |
|
|
 |
|
|
|
|
|
### Domain Distribution of Final Dataset |
|
|
|
|
|
The final aligned dataset (51,958 document pairs) still shows the same imbalance in domain distribution. |
|
|
As in the original corpus, `www.rtr.ch` and `m.rtr.ch` dominate the data, together accounting for |
|
|
the majority of documents. Other domains such as `www.gr.ch`, `www.engadinerpost.ch`, and |
|
|
`rm.wikipedia.org` contribute smaller proportions, with most domains appearing only rarely. |
|
|
|
|
|
This again indicates that, although the filtering step likely improved the quality of parallel content, the final dataset remains highly imbalanced. |
|
|
|
|
|
 |
|
|
|
|
|
|
|
|
### How alignable is the dataset? |
|
|
|
|
|
After manually examining 50 randomly sampled Romansh URLs, we found that none of the 50 entries had a clearly alignable German counterpart. |
|
|
To check alignments we relied on matching (1) named entities, (2) dates and numbers, and (3) translations of phrases or words (with [Supertext](https://www.supertext.com/de-CH)). |
|
|
|
|
|
This suggests that parallel data in the Romansh subset is extremely rare, and that future alignment should rely on automated methods with robust filtering. This set of 50 samples now serves as a gold standard for evaluating automatic alignment thresholds. |
|
|
|
|
|
### Dataset Quality Note |
|
|
|
|
|
Because the dataset was aligned using a relatively low cosine similarity threshold (e.g. 0.60) to increase the number of matches, alignment quality is likely reduced, and noisy or weakly related document pairs may be included. Further filtering or manual inspection would be necessary to improve reliability. |
|
|
|
|
|
|
|
|
## Methodology |
|
|
|
|
|
1. **Preprocessing**: |
|
|
- German dataset filtered to only contain domains also appearing in Romansh dataset |
|
|
2. **Manual Evaluation**: |
|
|
- A sample of 50 documents was manually checked to estimate alignability (see `analysis/manual_alignment`) |
|
|
3. **Embedding**: |
|
|
- Used [OpenAI](https://platform.openai.com/docs/models/text-embedding-3-small) `text-embedding-3-small` (with truncation at 8192 tokens) |
|
|
4. **Similarity Calculation**: |
|
|
- Cosine similarity computed between Romansh and German document embeddings |
|
|
- With or without a penalty for document length differences |
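Steps 3–4 amount to normalized dot products over the two embedding matrices. A minimal sketch (the helper names are our own, not the project code):

```python
import numpy as np

def cosine_similarity_matrix(rm_emb, de_emb):
    """Pairwise cosine similarities between row-wise document embeddings."""
    rm = rm_emb / np.linalg.norm(rm_emb, axis=1, keepdims=True)
    de = de_emb / np.linalg.norm(de_emb, axis=1, keepdims=True)
    return rm @ de.T

def best_matches(rm_emb, de_emb, threshold=0.60):
    """For each Romansh document, keep the highest-scoring German document
    if its similarity clears the threshold."""
    sims = cosine_similarity_matrix(rm_emb, de_emb)
    best = sims.argmax(axis=1)
    return [(i, int(j), float(sims[i, j]))
            for i, j in enumerate(best) if sims[i, j] >= threshold]
```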
|
|
|
|
|
## Evaluation |
|
|
|
|
|
A negative gold standard was used to ensure quality: a set of Romansh documents known to have no valid German alignment. These were checked for false positives. |
|
|
|
|
|
Unit tests confirmed: |
|
|
|
|
|
- No false matches (false positives) for known non-aligned documents |
|
|
|
|
|
- Low similarity scores for false-alignment candidates |
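The false-positive check on the negative gold standard can be written as a one-assertion unit test. The helper below is a hypothetical sketch, assuming matches are `(romansh_id, german_id, score)` tuples:

```python
def check_negative_gold(matches, negative_gold_ids):
    """Fail if any Romansh document known to have no German counterpart
    appears among the aligned pairs (a false positive)."""
    matched = {rm_id for rm_id, _de_id, _score in matches}
    false_positives = matched & set(negative_gold_ids)
    assert not false_positives, f"false positives: {sorted(false_positives)}"

# passes: the non-alignable document "doc_c" was not matched
check_negative_gold([("doc_a", "doc_b", 0.72)], ["doc_c"])
```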
|
|
|
|
|
## Download |
|
|
|
|
|
```python |
from datasets import load_dataset |

dataset = load_dataset( |
    "Sudeshna/Romansh_German_Parallel_Data", |
    data_files="parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl", |
    split="train", |
) |
``` |
|
|
|
|
|
## Contents |
|
|
|
|
|
The dataset consists of several alignment outputs based on different cosine similarity thresholds and sentence-length penalties: |
|
|
|
|
|
``` |
parallel_data_unfiltered/ |
├── matched_{THRESHOLD}_{ALPHA}_{LENGTH_PENALTY_TYPE}.jsonl |
├── matched_0.50_0.005_relative.jsonl |
├── matched_0.50_0.005_absolute.jsonl |
... |
``` |
|
|
|
|
|
With ALPHA being `0.005` and LENGTH_PENALTY_TYPE being either `"relative"` or `"absolute"`. |
|
|
|
|
|
For `/parallel_data_filtered/` we chose to always include a penalty. |
|
|
|
|
|
### File Organization |
|
|
|
|
|
| File | Description | |
| --------------------- | ---------------------------------- | |
| `parallel_data_unfiltered/` | All alignments with the unfiltered dataset | |
| `parallel_data_filtered/no_tail_dedup` | All alignments with the filtered dataset (long tail removed, deduplicated) | |
| `parallel_data_filtered/tld` | All alignments with the further filtered dataset (selected TLDs only) | |
| `parallel_data_filtered/tld/matched_0.60_0.005_relative.jsonl` | Best matched parallel data | |
| `embedded/original` | All German and Romansh embeddings | |
| `embedded/filtered_notail_dedup` | Embeddings with long tail removed and deduplicated | |
| `embedded/filtered_tld` | Embeddings with chosen TLDs only | |
| `fineweb_original/` | Original data | |
|
|
|
|
|
|
|
|
## Future Work |
|
|
Further improvements could explore alternative ALPHA values and experiment with different filtering methods to refine data quality. Other directions include testing different penalty calculation types, incorporating other alignment approaches, and applying domain-specific preprocessing to optimize results for different datasets and use cases. |
|
|
|
|
|
|