### Source Data

The Catalan-Chinese data collected from the web was a combination of the following datasets:

| Dataset | Sentences before cleaning |
|:------------------|---------------:|
| WikiMatrix | 90.643 |
| XLENT | 535.803 |
| GNOME | 78 |
| … | … |

The 6.658.607 sentence pairs of synthetic parallel data were created from the following Spanish-Chinese datasets:

| Dataset | Sentences before cleaning |
|:------------------|---------------:|
| UNPC | 17.599.223 |
| CCMatrix | 24.051.233 |
| MultiParacrawl | 3.410.087 |
| … | … |

### Data preparation

The Chinese side of all datasets is passed through the [fastlangid](https://github.com/currentslab/fastlangid) language detector, and any sentences that are not identified as simplified Chinese are discarded.
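
For illustration, a minimal sketch of this filtering step, assuming fastlangid's `LID` interface and a hypothetical `pairs` list of (Catalan, Chinese) sentence tuples:

```python
from fastlangid.langid import LID

langid = LID()

def keep_simplified(pairs):
    """Keep pairs whose Chinese side is detected as simplified Chinese.

    `pairs` is a hypothetical list of (catalan, chinese) sentence tuples.
    """
    kept = []
    for ca, zh in pairs:
        # fastlangid distinguishes simplified ('zh-hans') from
        # traditional ('zh-hant') Chinese; anything else is discarded.
        if langid.predict(zh) == 'zh-hans':
            kept.append((ca, zh))
    return kept
```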

The datasets are then deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75, computed on sentence embeddings from [LaBSE](https://huggingface.co/sentence-transformers/LaBSE). The filtered datasets are then concatenated to form a final corpus of **6.833.114** parallel sentences.
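
As an illustration, the similarity filter can be sketched with the sentence-transformers library; the `pairs` input and the exact deduplication strategy are assumptions, not the documented pipeline:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/LaBSE')

def filter_pairs(pairs, threshold=0.75):
    """Deduplicate, then drop pairs with cosine similarity below `threshold`."""
    # Exact deduplication on the full (Catalan, Chinese) pair,
    # preserving the original order.
    pairs = list(dict.fromkeys(pairs))

    ca_emb = model.encode([ca for ca, _ in pairs],
                          normalize_embeddings=True, convert_to_tensor=True)
    zh_emb = model.encode([zh for _, zh in pairs],
                          normalize_embeddings=True, convert_to_tensor=True)

    # With L2-normalized embeddings, the row-wise dot product
    # equals the cosine similarity of each pair.
    sims = (ca_emb * zh_emb).sum(dim=1)

    return [pair for pair, sim in zip(pairs, sims) if sim.item() >= threshold]
```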

### Personal and Sensitive Information