Qasim522 Mavkif committed
Commit 1dafcd7 · verified · 0 Parent(s)

Duplicate from Mavkif/Roman-Urdu-Parl-split

Co-authored-by: Umer <Mavkif@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,2 @@
+ *.csv filter=lfs diff=lfs merge=lfs -text
+ *.txt filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,125 @@
+ ---
+ license: apache-2.0
+ task_categories:
+ - translation
+ language:
+ - ur
+ tags:
+ - urdu
+ - translation
+ - transliteration
+ - parallel
+ - dataset
+ pretty_name: Roman Urdu Parl
+ size_categories:
+ - 1M<n<10M
+ ---
+
+ # Roman Urdu Parallel Dataset - Split
+
+ This dataset is a version of the Roman-Urdu-Parl dataset properly split into train, validation, and test sets. Details follow below.
+ This repository contains a split version of the Roman-Urdu Parallel Dataset (Roman-Urdu-Parl), structured specifically to facilitate fair evaluation in machine transliteration tasks between Urdu and Roman-Urdu.
+ Roman-Urdu lacks a standard orthography, leading to a wide range of transliteration variations for the same Urdu sentence.
+ This dataset addresses the need for non-overlapping train, validation, and test sets, mitigating data leakage that could otherwise inflate evaluation metrics.
+
+ ## Original Dataset Overview
+ The original Roman-Urdu-Parl dataset consists of 6,365,808 rows, featuring parallel sentences in Urdu and Roman-Urdu.
+ These numbers can be reproduced using the script at: scripts/check_stats.py
+ The unique characteristics of the original dataset are as follows:
+
+ - Unique Urdu sentences: 1,087,220
+ - Unique Roman-Urdu sentences: 3,999,102
+ - Rows where both sentences match: 167
+ - Rows where both sentences differ: 6,365,641
+ - Urdu sentences appearing only once: 90,637
+ - Roman-Urdu sentences appearing only once: 3,165,765
+ - Unique pairs of Urdu and Roman-Urdu sentences: 4,003,784
+ - Short sentences (fewer than 3 words): 2,321
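The counts above reduce to a handful of pandas aggregations. As a minimal sketch (on a toy frame, using the dataset's actual column names `Urdu text` and `Roman-Urdu text`; the authoritative version is scripts/check_stats.py):

```python
import pandas as pd

# Toy stand-in for original_data/data.csv with the same two columns.
df = pd.DataFrame({
    "Urdu text":       ["U1", "U1", "U2", "U3"],
    "Roman-Urdu text": ["r1a", "r1b", "r2", "r3"],
})

n_rows = len(df)
unique_urdu = df["Urdu text"].nunique()
unique_roman = df["Roman-Urdu text"].nunique()
# Urdu sentences that appear exactly once in the whole file
urdu_once = (df["Urdu text"].value_counts() == 1).sum()
# Distinct (Urdu, Roman-Urdu) pairs
unique_pairs = df.drop_duplicates().shape[0]

print(n_rows, unique_urdu, unique_roman, urdu_once, unique_pairs)
```

Run on the full CSV, the same expressions yield the figures listed above.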
+
+ ## Motivation for a Structured Splitting Approach
+ ### Summary
+ In the original dataset, variations in Roman-Urdu transliterations of the same Urdu sentence pose a risk of data leakage if the data is randomly split into training, validation, and test sets.
+ Random splits may cause the same Urdu sentence (or its Roman-Urdu variations) to appear in multiple sets, leading the model to "memorize" rather than generalize transliteration patterns.
+ This structured split addresses the issue by ensuring unique sets with no overlap between training, validation, and test sets, thus promoting generalization.
+ The script used to split the dataset is at: scripts/splitting_rup_data.py
+
+ ### Detailed Motivation
+ Roman-Urdu is an informal, non-standardized way of writing Urdu using the Roman alphabet, which allows for considerable variation in transliteration.
+ Different speakers may transliterate the same Urdu sentence in numerous ways, leading to a high degree of variability in Roman-Urdu spellings.
+ For instance, the Urdu word "کتاب" can be written in Roman-Urdu as "kitaab," "kitab," or "kittab," depending on the writer's style and regional dialect influences.
+
+ The original Roman-Urdu Parallel (Roman-Urdu-Parl) dataset takes advantage of this variability to create a large corpus of 6.365 million parallel sentences by pairing 1.1 million core Urdu sentences with multiple Roman-Urdu transliterations.
+ This variability-rich approach is invaluable for developing robust transliteration models.
+ However, this very characteristic introduces significant challenges for data splitting and evaluation, especially if the goal is to build a model capable of generalizing rather than memorizing specific transliterations.
+
+ ### Issues with Random Splitting
+ A typical random splitting approach might divide the dataset into training, validation, and test sets without accounting for sentence variability.
+ This could lead to the following issues:
+
+ 1. Overlap of Sentence Variations Across Sets:
+ Given the substantial transliteration variability, random splitting is likely to place different Roman-Urdu variations of the same Urdu sentence in multiple sets (e.g., training and test). As a result:
+ a. Data Leakage: The model may encounter different transliterations of the same sentence across training and evaluation sets, effectively "seeing" a portion of the test or validation data during training. This exposure creates data leakage, enabling the model to memorize specific transliteration patterns rather than learning to generalize.
+ b. Inflated Evaluation Scores: Due to data leakage, the model's evaluation metrics, such as BLEU scores, could be artificially inflated. These metrics would then fail to reflect the model's true performance on genuinely unseen data, compromising the reliability of model assessments.
+
+ 2. Challenges in Model Generalization:
+ If the same Urdu sentence (with different transliterations) appears in both training and evaluation sets, the model risks overfitting to common patterns rather than developing a nuanced understanding of transliteration rules.
+ The model's performance on genuinely novel sentence structures and transliteration styles is therefore likely to be less reliable.
+
+ ### Importance of a Structured Split
+ To address these issues, a structured data split is essential.
+ Such a split ensures that no Urdu sentence (or any of its variations) appears in more than one set (training, validation, or test).
+
+ ## Dataset Splitting Strategy
+ To prevent data leakage and ensure a balanced evaluation, the dataset is split according to the following strategy:
+
+ 1. Unique Sentence Selection for Validation and Test Sets:
+ Validation Set: 1,000 unique Urdu sentences that have only a single variation in the whole 6.3-million-row dataset.
+ Test Set: Another 1,000 unique Urdu sentences (also with a single variation), excluding those in the validation set.
+
+ 2. Replicated Sentence Selection (2-10 Variations):
+ Validation Set: 2,000 Urdu sentences that have between 2 and 10 Roman-Urdu variations. All variations of these 2,000 Urdu sentences are included.
+ Test Set: An additional 2,000 Urdu sentences with 2-10 variations, ensuring no overlap with the validation set.
+
+ 3. Training Set Composition:
+ All remaining sentences and their variations, excluding those selected for the validation and test sets, are included in the training set.
+
+ 4. Smaller Subsets for Efficient Evaluation:
+ Smaller Validation and Test Sets: These subsets are created to speed up evaluations during model development, with each containing 3,000 sentences (unique and replicated).
+ It is also ensured that only one Roman-Urdu variation is selected per Urdu sentence.
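The core of the strategy is sampling whole *groups* of Urdu sentences rather than individual rows, so that every Roman-Urdu variation of a held-out sentence leaves the training set together. A minimal sketch on toy data (the real script, scripts/splitting_rup_data.py, applies the same idea at full scale):

```python
import pandas as pd

# Toy parallel data: U1 has 3 Roman-Urdu variations, U2 has 2, U3 and U4 occur once.
df = pd.DataFrame({
    "Urdu text":       ["U1", "U1", "U1", "U2", "U2", "U3", "U4"],
    "Roman-Urdu text": ["a", "b", "c", "d", "e", "f", "g"],
})

# Group by Urdu sentence and count Roman-Urdu variations.
grouped = df.groupby("Urdu text")["Roman-Urdu text"].apply(list).reset_index()
grouped["count"] = grouped["Roman-Urdu text"].apply(len)

# Held-out set: sample whole groups (here 1 singleton + 1 replicated group),
# then explode so every variation of a held-out sentence leaves training.
held = pd.concat([
    grouped[grouped["count"] == 1].sample(n=1, random_state=42),
    grouped[(grouped["count"] > 1) & (grouped["count"] <= 10)].sample(n=1, random_state=42),
]).explode("Roman-Urdu text")

train = df[~df["Urdu text"].isin(held["Urdu text"])]

# No Urdu sentence is shared between the two sets.
assert set(train["Urdu text"]).isdisjoint(set(held["Urdu text"]))
```

Sampling from `grouped` rather than `df` is what makes the no-overlap guarantee hold: a row-level sample could split one sentence's variations across sets.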
+
+ ## Checking if Training Sentences Are Composed of Test Sentences or Their Repetitions
+ To ensure robust model performance and prevent data leakage, we checked whether any training sentences were composed entirely of repeated test (or validation) sentences or their fragments.
+ In the original dataset, some sentences were simply repetitions of shorter sentences (e.g., "A B C D" and "A B C D A B C D"), which could lead to overfitting if these patterns appeared across training and test sets.
+ A Python script was developed to identify and flag these cases, allowing us to remove or separate such repetitive sentences across the dataset splits. This approach helps the model learn genuine transliteration patterns and avoids artificially inflated evaluation scores, promoting better generalization to unseen data.
+ The Python script is at: scripts/check_substring.py
+ With our new split dataset, this count is zero.
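The actual script uses an Aho-Corasick automaton to scan all training sentences at once; the check itself, for the exact whole-sentence-repetition case described above, can be sketched much more simply (a simplified illustration, not the repo's implementation):

```python
def is_repetition_of(train_sent: str, test_sent: str) -> bool:
    """True if train_sent is test_sent repeated one or more times
    (space-joined), e.g. 'A B C D A B C D' vs 'A B C D'."""
    words_t = train_sent.split()   # split() also normalizes whitespace
    words_b = test_sent.split()
    if not words_b or len(words_t) % len(words_b) != 0:
        return False
    reps = len(words_t) // len(words_b)
    return words_t == words_b * reps

print(is_repetition_of("A B C D A B C D", "A B C D"))  # True
print(is_repetition_of("A B C D E", "A B C D"))        # False
```

Running such a check over all (training, test) sentence pairs would be quadratic, which is why the repo's script builds one automaton over the test sentences instead.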
+
+ ## Key Features of the Split
+
+ - No Overlap Between Sets: Ensures that no Urdu sentence (or its variations) appears in more than one set, effectively preventing data leakage.
+ - Variation Inclusion: The comprehensive test and validation sets include all variations of selected Urdu sentences, providing a robust evaluation of the model's ability to handle transliteration diversity.
+ - Smaller Subsets for Rapid Testing: Allows for quick testing during model development while preserving dataset integrity.
+ - Random Sampling with Fixed Seed: Reproducibility is ensured by using a fixed random state.
+ - Balanced Evaluation: Incorporates both unique sentences and replicated sentences with multiple variations for a complete assessment.
+ - Data Integrity Checks: Verifies that no Urdu sentences are shared between the sets, ensuring an accurate measure of generalization.
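The integrity check is cheap to rerun after loading the split CSVs. A sketch (file names follow the outputs of scripts/splitting_rup_data.py; the helper name `check_no_overlap` is our own, not part of the repo):

```python
import pandas as pd

def check_no_overlap(train_csv: str, val_csv: str, test_csv: str) -> None:
    """Fail loudly if any Urdu sentence is shared between splits."""
    splits = {
        name: set(pd.read_csv(path, encoding="utf-8")["Urdu text"].dropna())
        for name, path in [("train", train_csv),
                           ("validation", val_csv),
                           ("test", test_csv)]
    }
    names = list(splits)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = splits[a] & splits[b]
            assert not shared, f"{len(shared)} Urdu sentences shared between {a} and {b}"
    print("No overlap between train / validation / test.")
```

For example, `check_no_overlap("train_set.csv", "validation_set.csv", "test_set.csv")` should pass silently on this split.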
+
+ ## Citation
+
+ Original dataset paper:
+
+ @article{alam2022roman,
+   title={Roman-urdu-parl: Roman-urdu and urdu parallel corpus for urdu language understanding},
+   author={Alam, Mehreen and Hussain, Sibt Ul},
+   journal={Transactions on Asian and Low-Resource Language Information Processing},
+   volume={21},
+   number={1},
+   pages={1--20},
+   year={2022},
+   publisher={ACM New York, NY}
+ }
+
+ ## Dataset Card Authors [optional]
+
+ I wouldn't call myself the author of this dataset because it is the work of the greats. I just had to work with this data, so I created the splits properly.
+ Still, if you are interested, my name is Umer (you can contact me on LinkedIn, username: UmerTariq1).
original_data/data.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21f0f75f8c423e2413348c8961c05fb3236fa5d43bac4d6f3e8d8a5496360f2d
+ size 1199473375
original_data/dataset_stats.md ADDED
@@ -0,0 +1,15 @@
+ Number of rows: 6365808
+
+ Number of unique Urdu sentences: 1087220
+ Number of unique Roman-Urdu sentences: 3999102
+
+ Number of rows where both column values are the same: 167
+ Number of rows where both column values are different: 6365641
+
+ Number of rows where the Urdu sentence appears only once in the dataset: 90637
+ Number of rows where the Roman-Urdu sentence appears only once in the dataset: 3165765
+
+ Number of rows where the combination occurs "only once" in the whole dataset: 3170561
+ Number of unique pairs of Urdu and Roman-Urdu sentences: 4003784
+
+ Number of sentences which are less than 3 words: 2321
original_data/roman-urdu.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df8a69a6d35115fb932d605329cfd19ed72007eae1dc9ff9448b9fcd5393f428
+ size 477550031
original_data/splitting_strategy_rur_to_ur.md ADDED
@@ -0,0 +1,68 @@
+ **Original Issue**:
+
+ The dataset comprises 6.365 million parallel sentences in Urdu and Roman-Urdu. Many Roman-Urdu sentences are just variations of the same Urdu sentence due to different transliteration styles. If we randomly split this dataset into training, validation, and test sets, there is a high chance that variations of the same Urdu sentence will appear in multiple sets. This overlap can lead to data leakage, causing the model to memorize specific sentence pairs rather than learning to generalize transliteration patterns. Consequently, evaluation metrics like BLEU scores may be artificially inflated, not accurately reflecting the model's true performance on unseen data.
+
+
+ **Splitting Strategy**:
+ To address this issue, the dataset is split into training, validation, and test sets in a way that ensures no Urdu sentence (and its variations) appears in more than one set. The strategy involves grouping sentences by unique Urdu text and carefully selecting sentences based on the number of their variations.
+
+ 1. **Load and Preprocess the Data**
+
+ Load the Dataset: Read the CSV file containing Urdu and Roman-Urdu sentence pairs into a Pandas DataFrame.
+ Remove Missing Entries: Drop any rows where the 'Urdu text' is missing.
+ Group by Urdu Sentences: Group the data by 'Urdu text' and aggregate all corresponding 'Roman-Urdu text' variations into lists.
+ Count Variations: Add a 'count' column representing the number of Roman-Urdu variations for each Urdu sentence.
+
+ 2. **Select Unique Sentences for Validation and Test Sets**
+
+ Validation Set:
+ Select 1,000 Urdu sentences that occur only once in the dataset (i.e., sentences with a 'count' of 1).
+ Include their corresponding Roman-Urdu text.
+ Test Set:
+ From the remaining Urdu sentences with a 'count' of 1 (excluding those in the validation set), select another 1,000 sentences.
+ Include their corresponding Roman-Urdu text.
+
+ 3. **Select Replicated Sentences with Variations for Validation and Test Sets**
+
+ Validation Set:
+ Select 2,000 Urdu sentences that have between 2 and 10 Roman-Urdu variations (i.e., 'count' > 1 and 'count' ≤ 10).
+ Include all variations of these Urdu sentences in the validation set.
+ Test Set:
+ From the remaining Urdu sentences with 2 to 10 variations (excluding those in the validation set), select another 2,000 sentences.
+ Include all variations of these Urdu sentences in the test set.
+
+ 4. **Prepare the Training Set**
+
+ Exclude Test and Validation Sentences:
+ Remove all Urdu sentences (and their variations) present in the test and validation sets from the original dataset.
+ Form the Training Set:
+ The training set consists of all remaining Urdu sentences and their corresponding Roman-Urdu variations not included in the test or validation sets.
+
+ 5. **Create Smaller Subsets for Quick Evaluation**
+
+ Purpose: Facilitate faster testing and validation during model development.
+ Validation Subset:
+ From the unique Urdu sentences in the validation set, randomly select 1,000 sentences (they have only one variation).
+ From the replicated Urdu sentences in the validation set, randomly select only one Roman-Urdu variation per Urdu sentence.
+ Combine these to form a smaller validation set of 3,000 sentences.
+ Test Subset:
+ Repeat the same process for the test set to create a smaller test set of 3,000 sentences.
+
+
+ **Key Points**:
+ - No Overlap Between Sets: By excluding any Urdu sentences used in the test and validation sets from the training set, the strategy ensures no overlap, preventing data leakage.
+
+ - Inclusion of All Variations: The large test and validation sets include all variations of selected Urdu sentences to thoroughly evaluate the model's ability to handle different transliterations.
+
+ - Smaller Subsets for Efficiency: Smaller test and validation sets contain only one variation per Urdu sentence, allowing for quicker evaluations during model development without compromising the integrity of the results.
+
+ - Random Sampling with Fixed Seed: A fixed random_state (e.g., 42) is used in all random sampling steps to ensure reproducibility of the data splits.
+
+ - Balanced Evaluation: The strategy includes both unique sentences and those with multiple variations, providing a comprehensive evaluation of the model's performance across different levels of sentence frequency and complexity.
+
+ - Data Integrity Checks: After splitting, the sizes of the datasets are verified, and checks are performed to confirm that no Urdu sentences are shared between the training, validation, and test sets.
+
+ - Generalization Focus: By ensuring the model does not see any test or validation sentences during training, the evaluation metrics will accurately reflect the model's ability to generalize to unseen data.
+
+ - We also checked whether the training sentences are made up entirely of test sentences or their repetitions, and found no matches. (file: Transliterate/RUP/finetuning/scripts/one_time_usage/filter_uniqueurdu_data.py)
+
original_data/urdu.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:820b16cc225e19e63e0aa6a3868bbca6e25211ef5b0ca307c73b7643b18cc202
+ size 714048987
scripts/check_stats.py ADDED
@@ -0,0 +1,56 @@
+ import pandas as pd
+
+ # File path to the input CSV file
+ input_file_path = '../original_data/data.csv'
+
+ # Load the CSV file into a Pandas DataFrame
+ df = pd.read_csv(input_file_path, encoding='utf-8')
+
+ # Count the number of rows
+ num_rows = df.shape[0]
+ print(f"Number of rows: {num_rows}")
+
+ # Count the number of rows where both column values are the same
+ same_values_count = df[df['Urdu text'] == df['Roman-Urdu text']].shape[0]
+ print(f"Number of rows where both column values are the same: {same_values_count}")
+
+ # Count the number of rows where both column values are different
+ different_values_count = df[df['Urdu text'] != df['Roman-Urdu text']].shape[0]
+ print(f"Number of rows where both column values are different: {different_values_count}")
+
+ # Count the number of unique Urdu sentences
+ unique_urdu_sentences = df['Urdu text'].nunique()
+ print(f"Number of unique Urdu sentences: {unique_urdu_sentences}")
+
+ # Count the number of unique Roman-Urdu sentences
+ unique_roman_urdu_sentences = df['Roman-Urdu text'].nunique()
+ print(f"Number of unique Roman-Urdu sentences: {unique_roman_urdu_sentences}")
+
+ # Count the (Urdu, Roman-Urdu) combinations that occur only once in the whole dataset
+ pair_counts = df.groupby(['Urdu text', 'Roman-Urdu text']).size().reset_index(name='count')
+ single_occurrence_pairs = pair_counts[pair_counts['count'] == 1].shape[0]
+ print(f"Number of rows where the combination occurs only once in the whole dataset: {single_occurrence_pairs}")
+
+ # Count the number of unique pairs of Urdu and Roman-Urdu sentences
+ unique_pairs = df.drop_duplicates().shape[0]
+ print(f"Number of unique pairs of Urdu and Roman-Urdu sentences: {unique_pairs}")
+
+ # Count the Urdu sentences that appear only once in the dataset
+ urdu_sentence_counts = df['Urdu text'].value_counts()
+ urdu_once = urdu_sentence_counts[urdu_sentence_counts == 1].shape[0]
+ print(f"Number of rows where the Urdu sentence appears only once in the dataset: {urdu_once}")
+
+ # Count the Roman-Urdu sentences that appear only once in the dataset
+ roman_urdu_sentence_counts = df['Roman-Urdu text'].value_counts()
+ roman_urdu_once = roman_urdu_sentence_counts[roman_urdu_sentence_counts == 1].shape[0]
+ print(f"Number of rows where the Roman-Urdu sentence appears only once in the dataset: {roman_urdu_once}")
+
+ # Count the Urdu sentences that appear more than once but at most 10 times in the whole dataset
+ replicated_urdu = urdu_sentence_counts[(urdu_sentence_counts > 1) & (urdu_sentence_counts <= 10)].shape[0]
+ print(f"Number of Urdu sentences that appear more than once but less than 11 times in the whole dataset: {replicated_urdu}")
scripts/check_substring.py ADDED
@@ -0,0 +1,93 @@
+ # Check whether the training sentences are made up entirely of test sentences or their repetitions.
+
+ # Observation: some sentences in the original dataset are composed of other sentences in the dataset.
+ # For example, one sentence might be "A B C D" and another "A B C D A B C D".
+ # This is problematic: the model can easily overfit on the training data, and if the sentence
+ # being repeated is in the test set, the model will perform very well on the test set but not on real-world data.
+
+ import pandas as pd
+ import ahocorasick
+ from tqdm import tqdm
+ import json
+
+ # File paths
+ training_set_path = '../train_set.csv'
+ test_set_path = '../small_validation_set.csv'
+
+ print("Loading the CSV files...")
+ # Load the CSV files into Pandas DataFrames
+ test_set = pd.read_csv(test_set_path, encoding='utf-8')
+ training_set = pd.read_csv(training_set_path, encoding='utf-8')
+ print("CSV files loaded successfully.")
+
+ # Prepare the test sentences
+ test_sentences = test_set['Urdu text'].dropna()
+ # Remove extra spaces and standardize
+ test_sentences = test_sentences.apply(lambda x: ' '.join(x.strip().split()))
+ # Skip sentences with only one word if desired
+ test_sentences = test_sentences[test_sentences.str.split().str.len() > 1].unique()
+ test_sentences_set = set(test_sentences)
+ print(f"Number of test sentences: {len(test_sentences_set)}")
+
+ # Build the Aho-Corasick automaton
+ print("Building the Aho-Corasick automaton with test sentences...")
+ A = ahocorasick.Automaton()
+ for idx, test_sentence in enumerate(test_sentences):
+     A.add_word(test_sentence, (idx, test_sentence))
+ A.make_automaton()
+ print("Automaton built successfully.")
+
+ # Initialize matches dictionary
+ matches = {}
+
+ print("Processing training sentences...")
+ training_sentences = training_set['Urdu text'].dropna()
+ # Remove extra spaces and standardize
+ training_sentences = training_sentences.apply(lambda x: ' '.join(x.strip().split()))
+ training_sentences = training_sentences.unique()
+
+ for training_sentence in tqdm(training_sentences):
+     s = training_sentence
+     s_length = len(s)
+     matches_in_s = []
+     for end_index, (insert_order, test_sentence) in A.iter(s):
+         start_index = end_index - len(test_sentence) + 1
+         matches_in_s.append((start_index, end_index, test_sentence))
+     if not matches_in_s:
+         continue
+     # Sort matches by start_index
+     matches_in_s.sort(key=lambda x: x[0])
+     # Check whether the matches cover the entire training sentence without gaps
+     # and all matches are of the same test sentence
+     covers_entire_sentence = True
+     current_index = 0
+     first_test_sentence = matches_in_s[0][2]
+     all_same_test_sentence = True
+     for start_index, end_index, test_sentence in matches_in_s:
+         if start_index != current_index:
+             covers_entire_sentence = False
+             break
+         if test_sentence != first_test_sentence:
+             all_same_test_sentence = False
+             break
+         current_index = end_index + 1
+     if covers_entire_sentence and current_index == s_length and all_same_test_sentence:
+         # Training sentence is made up entirely of repetitions of test_sentence
+         if test_sentence not in matches:
+             matches[test_sentence] = []
+         matches[test_sentence].append(s)
+
+ print("Processing completed.")
+ print("Number of matches: ", sum(len(v) for v in matches.values()))
+
+ # Optionally, save matches to a JSON file
+ output_file_path = '/netscratch/butt/Transliterate/RUP/finetuning/scripts/one_time_usage/test_training_matches.json'
+ with open(output_file_path, 'w', encoding='utf-8') as json_file:
+     json.dump(matches, json_file, ensure_ascii=False, indent=4)
+
+ print(f"Matches have been written to {output_file_path}")
scripts/splitting_rup_data.py ADDED
@@ -0,0 +1,109 @@
+ # bash /home/butt/run_docker_cpu.sh python splitting_rup_data.py
+
+ import pandas as pd
+ import numpy as np
+
+ # File path to the input CSV file
+ input_file_path = '../original_data/data.csv'
+ # Output file paths for training, test, and validation sets
+ base_path = "./"  # trailing slash needed: paths below are built by plain concatenation
+
+ train_output_path = base_path + 'train_set.csv'
+ test_output_path = base_path + 'test_set.csv'
+ validation_output_path = base_path + 'validation_set.csv'
+ small_test_output_path = base_path + 'small_test_set.csv'
+ small_validation_output_path = base_path + 'small_validation_set.csv'
+
+ NUMBER_OF_UNIQUE_SENTENCES = 1500
+ NUMBER_OF_REPLICATED_SENTENCES = 3000
+ REPLICATION_RATE = 10
+
+ # Load the CSV file into a Pandas DataFrame
+ df = pd.read_csv(input_file_path, encoding='utf-8')
+
+ # Drop rows where 'Urdu text' is NaN
+ df = df.dropna(subset=['Urdu text'])
+
+ # Group by 'Urdu text' and aggregate the corresponding 'Roman-Urdu text'
+ grouped = df.groupby('Urdu text')['Roman-Urdu text'].apply(list).reset_index()
+
+ # Add a 'count' column to store the number of occurrences
+ grouped['count'] = grouped['Roman-Urdu text'].apply(len)
+
+ # Select NUMBER_OF_UNIQUE_SENTENCES groups (Urdu sentences without replication in the dataset) for validation
+ unique_sentences_val = grouped[grouped['count'] == 1].sample(n=NUMBER_OF_UNIQUE_SENTENCES, random_state=42)
+ unique_sentences_val = unique_sentences_val.explode('Roman-Urdu text')  # Convert lists to individual rows
+
+ # Select NUMBER_OF_UNIQUE_SENTENCES groups for the test set, excluding those in the validation set
+ unique_sentences_test = grouped[grouped['count'] == 1]
+ unique_sentences_test = unique_sentences_test[~unique_sentences_test['Urdu text'].isin(unique_sentences_val['Urdu text'])]
+ unique_sentences_test = unique_sentences_test.sample(n=NUMBER_OF_UNIQUE_SENTENCES, random_state=42)
+ unique_sentences_test = unique_sentences_test.explode('Roman-Urdu text')  # Convert lists to individual rows
+
+ # 'replicated' variables are for the full test/val sets; 'one_replicated' are for the small test/val sets
+
+ # Select NUMBER_OF_REPLICATED_SENTENCES groups from sentences that appear more than once but at most REPLICATION_RATE times
+ replicated_sentences_val = grouped[(grouped['count'] <= REPLICATION_RATE) & (grouped['count'] > 1)].sample(n=NUMBER_OF_REPLICATED_SENTENCES, random_state=42)
+
+ # Do the same for the test set, excluding those in the validation set
+ replicated_sentences_test = grouped[(grouped['count'] <= REPLICATION_RATE) & (grouped['count'] > 1)]
+ replicated_sentences_test = replicated_sentences_test[~replicated_sentences_test['Urdu text'].isin(replicated_sentences_val['Urdu text'])]
+ replicated_sentences_test = replicated_sentences_test.sample(n=NUMBER_OF_REPLICATED_SENTENCES, random_state=42)
+
+ # Select one row from each group of the replicated sentences
+ one_replicated_sentences_val = replicated_sentences_val.groupby('Urdu text').apply(lambda x: x.sample(1, random_state=42)).reset_index(drop=True)
+ # Do the same for the test set
+ one_replicated_sentences_test = replicated_sentences_test.groupby('Urdu text').apply(lambda x: x.sample(1, random_state=42)).reset_index(drop=True)
+
+ # Explode both the replicated and one_replicated sets
+ replicated_sentences_val = replicated_sentences_val.explode('Roman-Urdu text')
+ one_replicated_sentences_val = one_replicated_sentences_val.explode('Roman-Urdu text')
+
+ replicated_sentences_test = replicated_sentences_test.explode('Roman-Urdu text')
+ one_replicated_sentences_test = one_replicated_sentences_test.explode('Roman-Urdu text')
+
+ # Prepare the test and validation sets
+ test_set = pd.concat([unique_sentences_test, replicated_sentences_test]).reset_index(drop=True)
+ validation_set = pd.concat([unique_sentences_val, replicated_sentences_val]).reset_index(drop=True)
+
+ # Create smaller test and validation sets
+ # Subset NUMBER_OF_UNIQUE_SENTENCES from unique test
+ small_unique_sentences_test = unique_sentences_test.sample(n=NUMBER_OF_UNIQUE_SENTENCES, random_state=42)
+ # Subset NUMBER_OF_UNIQUE_SENTENCES from unique validation
+ small_unique_sentences_val = unique_sentences_val.sample(n=NUMBER_OF_UNIQUE_SENTENCES, random_state=42)
+
+ # Subset NUMBER_OF_REPLICATED_SENTENCES from replicated test
+ small_replicated_sentences_test = replicated_sentences_test.sample(n=NUMBER_OF_REPLICATED_SENTENCES, random_state=42)
+ # Subset NUMBER_OF_REPLICATED_SENTENCES from replicated validation
+ small_replicated_sentences_val = replicated_sentences_val.sample(n=NUMBER_OF_REPLICATED_SENTENCES, random_state=42)
+
+ # Explode all the small sets
+ small_unique_sentences_test = small_unique_sentences_test.explode('Roman-Urdu text')
+ small_unique_sentences_val = small_unique_sentences_val.explode('Roman-Urdu text')
+ small_replicated_sentences_test = small_replicated_sentences_test.explode('Roman-Urdu text')
+ small_replicated_sentences_val = small_replicated_sentences_val.explode('Roman-Urdu text')
+
+ # Combine the small sets
+ small_test_set = pd.concat([small_unique_sentences_test, small_replicated_sentences_test]).reset_index(drop=True)
+ small_validation_set = pd.concat([small_unique_sentences_val, small_replicated_sentences_val]).reset_index(drop=True)
+
+ # Prepare the training set by excluding the test and validation sets from the original DataFrame
+ # (the training set is the whole data except test_set and validation_set)
+ training_set = df[~df['Urdu text'].isin(test_set['Urdu text']) & ~df['Urdu text'].isin(validation_set['Urdu text'])]
+
+ # Save only the 'Urdu text' and 'Roman-Urdu text' columns to CSV files
+ training_set[['Urdu text', 'Roman-Urdu text']].to_csv(train_output_path, index=False, encoding='utf-8')
+ test_set[['Urdu text', 'Roman-Urdu text']].to_csv(test_output_path, index=False, encoding='utf-8')
+ validation_set[['Urdu text', 'Roman-Urdu text']].to_csv(validation_output_path, index=False, encoding='utf-8')
+ small_test_set[['Urdu text', 'Roman-Urdu text']].to_csv(small_test_output_path, index=False, encoding='utf-8')
+ small_validation_set[['Urdu text', 'Roman-Urdu text']].to_csv(small_validation_output_path, index=False, encoding='utf-8')
+
+ print("Training, test, validation, and smaller subsets have been saved to respective CSV files.")
+ # Print the number of rows in each file
+ print(f"Number of rows in training set: {len(training_set)}")
+ print(f"Number of rows in test set: {len(test_set)}")
+ print(f"Number of rows in validation set: {len(validation_set)}")
+ print(f"Number of rows in small test set: {len(small_test_set)}")
+ print(f"Number of rows in small validation set: {len(small_validation_set)}")
small_test_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6dd736f3b25909305f156ee354998803fc6b555da86846496e195be6296a39f2
+ size 446913
small_validation_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0dc61cac2f0f30b5556a99563aa19b2398558f65e371baca701fe8a4c44ec8e
+ size 441397
splitting_strategy_rur_to_ur.md ADDED
@@ -0,0 +1,68 @@
+ **Original Issue**:
+
+ The dataset comprises 6.365 million parallel sentences in Urdu and Roman-Urdu. Many Roman-Urdu sentences are just variations of the same Urdu sentence due to different transliteration styles. If we randomly split this dataset into training, validation, and test sets, there is a high chance that variations of the same Urdu sentence will appear in multiple sets. This overlap can lead to data leakage, causing the model to memorize specific sentence pairs rather than learning to generalize transliteration patterns. Consequently, evaluation metrics like BLEU scores may be artificially inflated, not accurately reflecting the model's true performance on unseen data.
+
+
+ **Splitting Strategy**:
+ To address this issue, the dataset is split into training, validation, and test sets in a way that ensures no Urdu sentence (and its variations) appears in more than one set. The strategy involves grouping sentences by unique Urdu text and carefully selecting sentences based on the number of their variations.
+
+ 1. **Load and Preprocess the Data**
+
+    - Load the Dataset: Read the CSV file containing Urdu and Roman-Urdu sentence pairs into a Pandas DataFrame.
+    - Remove Missing Entries: Drop any rows where the 'Urdu text' is missing.
+    - Group by Urdu Sentences: Group the data by 'Urdu text' and aggregate all corresponding 'Roman-Urdu text' variations into lists.
+    - Count Variations: Add a 'count' column representing the number of Roman-Urdu variations for each Urdu sentence.
+
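The preprocessing above can be sketched with pandas as follows (a minimal sketch on toy data; the real script reads the full CSV but uses the same column names):

```python
import pandas as pd

# Toy stand-in for the parallel corpus (same column names as the real CSV)
df = pd.DataFrame({
    'Urdu text': ['A', 'A', 'B', 'C'],
    'Roman-Urdu text': ['a1', 'a2', 'b1', 'c1'],
})

# Drop rows with missing Urdu text
df = df.dropna(subset=['Urdu text'])

# Collect every Roman-Urdu variation of each Urdu sentence into a list
grouped = df.groupby('Urdu text')['Roman-Urdu text'].agg(list).reset_index()

# 'count' = number of Roman-Urdu variations per Urdu sentence
grouped['count'] = grouped['Roman-Urdu text'].apply(len)
```

Here sentence 'A' ends up with a 'count' of 2, while 'B' and 'C' each have a 'count' of 1.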
+ 2. **Select Unique Sentences for Validation and Test Sets**
+
+    - Validation Set:
+      Select 1,000 Urdu sentences that occur only once in the dataset (i.e., sentences with a 'count' of 1) and include their corresponding Roman-Urdu text.
+    - Test Set:
+      From the remaining Urdu sentences with a 'count' of 1 (excluding those in the validation set), select another 1,000 sentences and include their corresponding Roman-Urdu text.
+
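This selection can be sketched as follows, assuming the grouped frame from step 1 (toy sample sizes stand in for the real 1,000-sentence samples):

```python
import pandas as pd

# One row per unique Urdu sentence with its variation count (toy data)
grouped = pd.DataFrame({
    'Urdu text': ['A', 'B', 'C', 'D', 'E'],
    'count':     [1,   1,   1,   1,   3],
})

# Sentences that occur exactly once in the dataset
unique = grouped[grouped['count'] == 1]

# Sample with a fixed seed for reproducibility; dropping the validation
# indices first keeps the two samples disjoint
validation = unique.sample(n=2, random_state=42)
test = unique.drop(validation.index).sample(n=2, random_state=42)
```

By construction, no Urdu sentence sampled for the validation set can reappear in the test sample.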
+ 3. **Select Replicated Sentences with Variations for Validation and Test Sets**
+
+    - Validation Set:
+      Select 2,000 Urdu sentences that have between 2 and 10 Roman-Urdu variations (i.e., 'count' > 1 and 'count' ≤ 10), and include all variations of these Urdu sentences.
+    - Test Set:
+      From the remaining Urdu sentences with 2 to 10 variations (excluding those in the validation set), select another 2,000 sentences and include all of their variations.
+
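The "include all variations" part can be sketched as (toy data; the real split samples 2,000 Urdu sentences per set):

```python
import pandas as pd

# 'A' has three Roman-Urdu variations, 'B' two, 'C' one (toy data)
df = pd.DataFrame({
    'Urdu text':       ['A', 'A', 'A', 'B', 'B', 'C'],
    'Roman-Urdu text': ['a1', 'a2', 'a3', 'b1', 'b2', 'c1'],
})

counts = df['Urdu text'].value_counts()

# Urdu sentences with more than 1 and at most 10 variations
replicated = counts[(counts > 1) & (counts <= 10)].index

# Include *all* variations of each selected Urdu sentence
subset = df[df['Urdu text'].isin(replicated)]
```

Here `subset` contains all five rows for 'A' and 'B', and none for the singleton 'C'.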
+ 4. **Prepare the Training Set**
+
+    - Exclude Test and Validation Sentences: Remove all Urdu sentences (and their variations) present in the test and validation sets from the original dataset.
+    - Form the Training Set: The training set consists of all remaining Urdu sentences and their corresponding Roman-Urdu variations not included in the test or validation sets.
+
+ 5. **Create Smaller Subsets for Quick Evaluation**
+
+    - Purpose: Facilitate faster testing and validation during model development.
+    - Validation Subset:
+      From the unique Urdu sentences in the validation set, take the 1,000 sentences that have only one variation. From the replicated Urdu sentences in the validation set, randomly select one Roman-Urdu variation per Urdu sentence. Combine these to form a smaller validation set of 3,000 sentences.
+    - Test Subset:
+      Repeat the same process for the test set to create a smaller test set of 3,000 sentences.
+
+
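Keeping one variation per Urdu sentence can be sketched as a shuffle-then-dedup (one possible implementation, not necessarily the exact code used in the scripts):

```python
import pandas as pd

# Replicated portion of a validation set: all variations included (toy data)
validation = pd.DataFrame({
    'Urdu text':       ['A', 'A', 'B', 'B', 'B'],
    'Roman-Urdu text': ['a1', 'a2', 'b1', 'b2', 'b3'],
})

# Shuffle with a fixed seed, then keep the first row seen per Urdu sentence,
# i.e. one randomly chosen Roman-Urdu variation each
small = (validation.sample(frac=1, random_state=42)
                   .drop_duplicates(subset='Urdu text'))
```

The result has exactly one row per unique Urdu sentence, with the variation chosen determined by the fixed seed.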
+ **Key Points**:
+ - No Overlap Between Sets: By excluding any Urdu sentences used in the test and validation sets from the training set, the strategy ensures no overlap, preventing data leakage.
+
+ - Inclusion of All Variations: The large test and validation sets include all variations of the selected Urdu sentences to thoroughly evaluate the model's ability to handle different transliterations.
+
+ - Smaller Subsets for Efficiency: The smaller test and validation sets contain only one variation per Urdu sentence, allowing for quicker evaluations during model development without compromising the integrity of the results.
+
+ - Random Sampling with Fixed Seed: A fixed random_state (e.g., 42) is used in all random sampling steps to ensure reproducibility of the data splits.
+
+ - Balanced Evaluation: The strategy includes both unique sentences and those with multiple variations, providing a comprehensive evaluation of the model's performance across different levels of sentence frequency and complexity.
+
+ - Data Integrity Checks: After splitting, the sizes of the datasets are verified, and checks confirm that no Urdu sentences are shared between the training, validation, and test sets.
+
+ - Generalization Focus: By ensuring the model does not see any test or validation sentences during training, the evaluation metrics accurately reflect the model's ability to generalize to unseen data.
+
+ - We also checked whether any training sentences consist entirely of test sentences or their repetitions, and found no matches. (file: Transliterate/RUP/finetuning/scripts/one_time_usage/filter_uniqueurdu_data.py)
+
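The no-overlap check described in the key points amounts to set-disjointness on the 'Urdu text' columns (a minimal sketch with stand-in frames; the real check loads the saved CSVs):

```python
import pandas as pd

# Stand-ins for the saved splits
training_set   = pd.DataFrame({'Urdu text': ['A', 'B']})
validation_set = pd.DataFrame({'Urdu text': ['C']})
test_set       = pd.DataFrame({'Urdu text': ['D']})

train = set(training_set['Urdu text'])
val   = set(validation_set['Urdu text'])
test  = set(test_set['Urdu text'])

# No Urdu sentence may appear in more than one split
overlap_free = (train.isdisjoint(val)
                and train.isdisjoint(test)
                and val.isdisjoint(test))
assert overlap_free
```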
test_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8c39f7b6e364abb6f7a0ef802a5e99794083ba8ad69e713888df471b523155b
+ size 1846199
train_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e35eef8de7871c70fe54b2f605bfdd64df26244fe40f4d0dc638f8cbd115f00b
+ size 1189435308
validation_set.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa4ab3c419da5b51a77caa2d1ed4779618f62440cf804fb70e0c4bc8b8957ef9
+ size 1825822