---
license: apache-2.0
task_categories:
  - sentence-similarity
language:
  - tr
tags:
  - dataset
  - sentence-similarity
  - tr
  - turkish
  - ms-marco
  - contrastive-learning
size_categories:
  - 1K<n<10K
---

# MS MARCO Turkish Triplets Dataset

## Dataset Description

A Turkish version of the original MS MARCO dataset: query-passage triplets reformatted from parsak/msmarco-tr for triplet-based contrastive learning in Turkish.

## Dataset Source

**Source:** This dataset is formatted from parsak/msmarco-tr, which is based on the original MS MARCO dataset.

## Dataset Structure

### Data Fields

- `query_text`: Query text in Turkish
- `pos_text`: Positive passage text in Turkish
- `neg_text`: Negative passage text in Turkish
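
For illustration, a record has the following shape (the values below are invented, not actual rows from the dataset):

```python
# An illustrative record (made-up values, not an actual row from the dataset)
example = {
    "query_text": "Türkiye'nin başkenti neresidir?",  # "What is the capital of Turkey?"
    "pos_text": "Ankara, Türkiye'nin başkentidir.",
    "neg_text": "İstanbul, Türkiye'nin en kalabalık şehridir.",
}
```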

### Data Splits

This dataset contains the following splits:

- **train**: Training data

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("newmindai/ms-marco-turkish-triplets")

# Access training data
train_data = dataset['train']

# Example usage
for example in train_data:
    query = example['query_text']
    positive = example['pos_text']
    negative = example['neg_text']
    print(f"Query: {query}")
    print(f"Positive: {positive}")
    print(f"Negative: {negative}")
    break
```

## Recommended Loss Functions

This dataset is optimized for the following loss functions:

- `MultipleNegativesRankingLoss`
- `CachedMultipleNegativesRankingLoss`
- `TripletLoss`

## Loss Function Details

### MultipleNegativesRankingLoss (MNR)

**Purpose:** Bring similar examples closer while pushing different examples apart. Used when you only have positive pairs (anchor, positive) and want to derive negatives from within the batch.

**Mathematical Formula:**

```
L = -(1/N) * Σ_i log( exp(s(a_i, p_i) * scale) / Σ_j exp(s(a_i, p_j) * scale) )
```

Where:

- `s(a_i, p_j)`: similarity function (e.g., cosine similarity)
- `scale`: temperature coefficient
- `N`: batch size

**Logic:** Maximizes the probability of the correct positive example for each anchor. The other positives in the batch act as negatives ("in-batch negatives"), so larger batches provide more negative examples and better separation.
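
For intuition, here is a minimal PyTorch sketch of the objective above (an illustration of the math, not the sentence-transformers implementation; the function name and the default `scale` value are our own choices):

```python
import torch
import torch.nn.functional as F

def mnr_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0):
    """In-batch-negatives cross-entropy, as in the formula above."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    # scores[i, j] = cosine similarity between anchor i and positive j, scaled
    scores = a @ p.T * scale
    # The matching positive for anchor i sits at column i; every other
    # column acts as an in-batch negative.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```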

### CachedMultipleNegativesRankingLoss

**Purpose:** Same mathematical principle as MNR but with higher memory efficiency.

**Key Difference:** Used when large batches cannot fit directly into GPU memory. It pre-caches embeddings and then calculates losses, allowing for "virtually" larger in-batch negatives.

**Formula:** Same as MNR; only the computation method differs (mini-batch caching).
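
In sentence-transformers, switching to the cached variant is a one-line change; `mini_batch_size` bounds GPU memory for the internal forward passes while the DataLoader batch size sets the number of in-batch negatives (the values below are just examples):

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
# mini_batch_size controls the chunk size of the internal forward passes;
# the effective (DataLoader) batch can be much larger than GPU memory allows
train_loss = losses.CachedMultipleNegativesRankingLoss(model, mini_batch_size=32)
```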

### TripletLoss

**Purpose:** Bring anchor-positive pairs closer while pushing anchor-negative pairs apart by a specific margin.

**Mathematical Formula:**

```
L = max(0, d(a, p) - d(a, n) + m)
```

Where:

- `d(·, ·)`: distance function (e.g., Euclidean, 1 - cosine)
- `m`: margin (safety interval)

**Logic:** If the negative is already far enough from the anchor, the loss is 0. If the negative is too close relative to the positive, the loss is positive (a penalty).
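
Again as a rough sketch of the formula (illustrative only, using Euclidean distance and a made-up margin; sentence-transformers provides the actual implementation):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin: float = 0.5):
    """Hinge on the distance gap, as in L = max(0, d(a,p) - d(a,n) + m)."""
    d_ap = F.pairwise_distance(anchor, positive)  # d(a, p)
    d_an = F.pairwise_distance(anchor, negative)  # d(a, n)
    # Zero loss once the negative is at least `margin` farther than the positive
    return F.relu(d_ap - d_an + margin).mean()
```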

## Usage Example with Sentence Transformers

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses, InputExample
from datasets import load_dataset

# Load dataset
dataset = load_dataset("newmindai/ms-marco-turkish-triplets")

# Initialize model
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

# MultipleNegativesRankingLoss takes unlabeled (anchor, positive) or
# (anchor, positive, negative) texts; the hard negative simply adds one
# more negative on top of the in-batch ones
train_examples = []
for example in dataset['train']:
    train_examples.append(InputExample(
        texts=[example['query_text'], example['pos_text'], example['neg_text']]
    ))

# model.fit expects a DataLoader, not a raw list of examples
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# Initialize loss function
train_loss = losses.MultipleNegativesRankingLoss(model)

# Train the model
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100
)
```

## Usage Example with TripletLoss

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses, InputExample
from datasets import load_dataset

# Load dataset and initialize model
dataset = load_dataset("newmindai/ms-marco-turkish-triplets")
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')

# Prepare triplet examples: (anchor, positive, negative)
train_examples = []
for example in dataset['train']:
    train_examples.append(InputExample(
        texts=[example['query_text'], example['pos_text'], example['neg_text']]
    ))

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# Initialize triplet loss
train_loss = losses.TripletLoss(model)

# Train the model
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100
)
```

## Dataset Statistics

- **Language:** tr (Turkish)
- **Task:** sentence-similarity
- **Source:** parsak/msmarco-tr (based on the original MS MARCO dataset)
- **Format:** Query-Passage Triplets
- **Use Case:** Contrastive Learning, Sentence Embeddings

## Performance Tips

1. **Batch Size:** Start with batch sizes between 16 and 32; for `MultipleNegativesRankingLoss`, larger batches generally help, since every other example in the batch serves as an additional negative
2. **Learning Rate:** Start with 2e-5 and adjust based on validation performance
3. **Epochs:** 1-3 epochs are usually sufficient for fine-tuning
4. **Warmup:** Use 10% of total training steps as warmup for stable training (see the sketch below)
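
A small sketch of how the 10% warmup figure translates into the `warmup_steps` argument, continuing the training examples above (the batch size is a placeholder):

```python
# Continues the variables from the training examples above
num_examples = len(dataset['train'])
batch_size = 32  # placeholder; match your DataLoader's batch_size
epochs = 1

steps_per_epoch = num_examples // batch_size
warmup_steps = int(0.1 * steps_per_epoch * epochs)  # 10% of total steps

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=epochs,
    warmup_steps=warmup_steps
)
```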

## Citation

```bibtex
@article{msmarco2016,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Bajaj, Payal and Campos, Daniel and Craswell, Nick and Deng, Li and Gao, Jianfeng and Liu, Xiaodong and Majumder, Rangan and McNamara, Andrew and Mitra, Bhaskar and Nguyen, Tri and Rosenberg, Mir and Song, Xia and Stoica, Alina and Tiwary, Saurabh and Wang, Tong},
  journal={arXiv preprint arXiv:1611.09268},
  year={2016},
  url={https://arxiv.org/abs/1611.09268},
  doi={10.48550/arXiv.1611.09268}
}
```

## License

This dataset is released under the Apache 2.0 License.

## Contact

For questions: info@newmind.ai