---
language:
- en
- it
- de
license: cc-by-nc-4.0
dataset_info:
  features:
  - name: query
    dtype: string
  - name: pos
    dtype: string
  - name: neg
    dtype: string
  - name: query_lang
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 5340253796
    num_examples: 7288056
  download_size: 2279432455
  dataset_size: 5340253796
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- feature-extraction
pretty_name: Nomic Triplets
size_categories:
- 1M<n<10M
tags:
- sentence-transformers
---
Dataset built from [Nomic Contrastors](https://github.com/nomic-ai/contrastors) for training embedding models. Some (query, pos) pairs are repeated; all (query, pos, neg) triplets are unique.

The `query_lang` attribute was computed with [fastText language identification](https://huggingface.co/facebook/fasttext-language-identification).
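The schema and the uniqueness guarantee above can be sketched as follows. This is a minimal illustration, not the real data: the rows below are invented, and only the field names (`query`, `pos`, `neg`, `query_lang`) come from the `dataset_info` metadata.

```python
# Sketch of the row schema and the card's guarantee: (query, pos) pairs
# may repeat, but every full (query, pos, neg) triplet is unique.
# The sample rows here are invented for illustration.
rows = [
    {"query": "what is rust", "pos": "Rust is a systems language.",
     "neg": "Paris is the capital of France.", "query_lang": "en"},
    # The same (query, pos) pair can recur with a different negative...
    {"query": "what is rust", "pos": "Rust is a systems language.",
     "neg": "Snakes are reptiles.", "query_lang": "en"},
]

# ...so full triplets are all distinct, while pairs deduplicate.
triplets = {(r["query"], r["pos"], r["neg"]) for r in rows}
pairs = {(r["query"], r["pos"]) for r in rows}

assert len(triplets) == len(rows)  # every triplet unique
assert len(pairs) < len(rows)      # some pairs repeated
```

Models trained with sentence-transformers typically consume such rows as (anchor, positive, negative) triplets for a contrastive loss such as `MultipleNegativesRankingLoss`.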