---
language:
- en
- multilingual
size_categories:
- 10M<n<100M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: WikiTitles
tags:
- sentence-transformers
dataset_info:
  features:
  - name: english
    dtype: string
  - name: non_english
    dtype: string
  splits:
  - name: train
    num_bytes: 755332378
    num_examples: 14700458
  download_size: 685053033
  dataset_size: 755332378
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for Parallel Sentences - WikiTitles

This dataset contains parallel sentences (i.e., an English sentence paired with the same sentence in another language) for numerous languages. Most of the sentences originate from the [OPUS website](https://opus.nlpl.eu/).
In particular, this dataset contains the [WikiTitles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences) dataset.

## Related Datasets

The following datasets are also part of the Parallel Sentences collection:

* [parallel-sentences-europarl](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-europarl)
* [parallel-sentences-global-voices](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-global-voices)
* [parallel-sentences-muse](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-muse)
* [parallel-sentences-jw300](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-jw300)
* [parallel-sentences-news-commentary](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-news-commentary)
* [parallel-sentences-opensubtitles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-opensubtitles)
* [parallel-sentences-talks](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks)
* [parallel-sentences-tatoeba](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-tatoeba)
* [parallel-sentences-wikimatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikimatrix)
* [parallel-sentences-wikititles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikititles)
* [parallel-sentences-ccmatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-ccmatrix)

These datasets can be used to train multilingual sentence embedding models. For more information, see [sbert.net - Multilingual Models](https://www.sbert.net/examples/training/multilingual/README.html).

## Dataset Stats

* Columns: "english", "non_english"
* Column types: `str`, `str`
* Examples:
  ```python
  {
      "english": "Hossain Toufique Imam",
      "non_english": "হোসেন তৌফিক ইমাম"
  }
  ```
* Collection strategy: Processing the raw data from [parallel-sentences](https://huggingface.co/datasets/sentence-transformers/parallel-sentences) and formatting it in Parquet.
* Deduplicated: No
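
## Usage

Each row pairs an English title with its counterpart in another language. Below is a minimal sketch of consuming rows in this format; the `load_dataset` call, which requires network access and the `datasets` library, is left commented out, and the card's example record stands in for a downloaded row:

```python
# Full dataset (assumption: requires network access and the `datasets` library):
#   from datasets import load_dataset
#   train = load_dataset("sentence-transformers/parallel-sentences-wikititles", split="train")

# The example record from the card, standing in for a downloaded row:
rows = [
    {"english": "Hossain Toufique Imam", "non_english": "হোসেন তৌফিক ইমাম"},
]

# Collect (english, non_english) tuples — the parallel-pair format used when
# training multilingual sentence embedding models on this data.
pairs = [(row["english"], row["non_english"]) for row in rows]
print(pairs[0][0])  # Hossain Toufique Imam
```

The same two-column schema applies across the whole Parallel Sentences collection, so code written against one of these datasets transfers to the others.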