Update README.md

README.md CHANGED
@@ -70,6 +70,8 @@ The dataset contains a single split: `train`.

## Dataset Creation

All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75. This is done using sentence embeddings calculated with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE). The filtered datasets are then concatenated to form a final corpus of 6,159,631 sentence pairs, and before training the punctuation is normalized using a modified version of the `join-single-file.py` script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
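The deduplication and similarity-filtering step can be sketched as below. This is an illustrative assumption, not the actual pipeline: `filter_pairs`, `embed`, and the toy vectors are hypothetical names, with `embed` standing in for LaBSE sentence embeddings (in practice obtained via `SentenceTransformer("sentence-transformers/LaBSE").encode`).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_pairs(pairs, embed, threshold=0.75):
    """Keep only sentence pairs whose embeddings reach the similarity threshold.

    `pairs` is a list of (source, target) strings; `embed` maps a string to a vector.
    """
    pairs = list(dict.fromkeys(pairs))  # deduplicate while preserving order
    kept = []
    for src, tgt in pairs:
        if cosine_similarity(embed(src), embed(tgt)) >= threshold:
            kept.append((src, tgt))
    return kept

# Toy embedding table standing in for LaBSE (illustrative values only).
toy_vectors = {
    "hello": np.array([1.0, 0.0]),
    "hola": np.array([0.9, 0.1]),
    "goodbye": np.array([0.0, 1.0]),
}
pairs = [("hello", "hola"), ("hello", "hola"), ("hello", "goodbye")]
kept = filter_pairs(pairs, lambda s: toy_vectors[s])
print(kept)  # [('hello', 'hola')]; the duplicate and the dissimilar pair are dropped
```

On the real corpora the embeddings would be computed in batches with the LaBSE model rather than looked up in a table, but the thresholding logic is the same.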

### Source Data

### Personal and Sensitive Information