---
license: cc-by-4.0
task_categories:
- text-generation
- translation
- sentence-similarity
size_categories:
- 100K<n<1M
---
|
|
## 🧠 Dataset Card: Embedder — Multilingual Triplet Embedding Dataset

### 📌 Overview

**Embedder** is a multilingual triplet dataset designed for training and evaluating sentence embedding models using contrastive or triplet loss. It contains around one million examples spanning English, 11 Indic languages, and roughly 100 additional languages, derived from the Samanantar parallel corpus and OPUS-100. Each example is structured as a triplet: `(anchor, positive, negative)`.
|
|
|
|
|
This dataset is ideal for building cross-lingual retrieval systems, semantic search engines, and multilingual embedding models.

---
|
|
|
|
|
### 🏗️ Construction Details

The dataset was built using the following pipeline:

- **Source**: [AI4Bharat Samanantar](https://huggingface.co/datasets/ai4bharat/samanantar) — a high-quality parallel corpus for 11 Indic languages ↔ English.
- **Step 1: Sampling**
  Randomly sampled bilingual sentence pairs from Samanantar, ensuring diverse language coverage and semantic alignment.
- **Step 2: Triplet Formation**
  - `anchor`: one sentence from the bilingual pair (randomly chosen to be either English or Indic).
  - `positive`: the aligned translation from the pair.
  - `negative`: a randomly sampled sentence from the same language as the anchor, but semantically unrelated.
- **Step 3: Column Renaming & Structuring**
  - Original columns such as `sentence_en` and `sentence_hi` were renamed to `anchor` and `positive` based on directionality.
  - Negative samples were injected from a shuffled pool and assigned to the `negative` column.
- **Step 4: Directionality Randomization**
  To avoid bias, each triplet randomly flips between Indic→English and English→Indic.
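The pipeline above can be sketched in plain Python. This is an illustrative reconstruction, not the actual generation script: the `build_triplets` helper and the toy sentence pairs are ours.

```python
import random

def build_triplets(pairs, seed=0):
    """Turn (english, indic) sentence pairs into (anchor, positive, negative)
    triplets, with the direction randomly flipped per Step 4."""
    rng = random.Random(seed)
    en_pool = [p[0] for p in pairs]
    indic_pool = [p[1] for p in pairs]
    triplets = []
    for i, (en, indic) in enumerate(pairs):
        # Step 4: randomly choose English->Indic or Indic->English.
        if rng.random() < 0.5:
            anchor, positive, pool = en, indic, en_pool
        else:
            anchor, positive, pool = indic, en, indic_pool
        # Negative: an unrelated sentence in the anchor's language,
        # drawn from a different pair.
        j = rng.randrange(len(pairs))
        while j == i:
            j = rng.randrange(len(pairs))
        triplets.append({"anchor": anchor, "positive": positive,
                         "negative": pool[j]})
    return triplets

pairs = [
    ("I am going to school.", "मैं स्कूल जा रहा हूँ।"),
    ("The weather is nice today.", "आज मौसम अच्छा है।"),
    ("She is reading a book.", "वह एक किताब पढ़ रही है।"),
]
triplets = build_triplets(pairs)
```

Sampling the negative from a different pair in the anchor's own language keeps the hard part of the task (semantic mismatch) separate from the easy part (language mismatch).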
|
|
|
|
|
---
|
|
|
|
|
### 📦 Dataset Format

- File type: `.jsonl`
- Each line contains:

```json
{
  "anchor": "मैं स्कूल जा रहा हूँ।",
  "positive": "I am going to school.",
  "negative": "The weather is nice today."
}
```

- Total examples: 60,000
- Languages: Hindi, Bengali, Tamil, Marathi, Gujarati, Punjabi, Kannada, Malayalam, Oriya, Assamese, Telugu, English
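Records can be streamed with the standard library alone; a minimal sketch (the file name is illustrative, and the 🤗 `datasets` library's `load_dataset("json", data_files=...)` handles the same format):

```python
import json

def load_triplets(path):
    """Yield triplet dicts from a JSONL file, one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Round-trip one record to show the expected schema.
record = {
    "anchor": "मैं स्कूल जा रहा हूँ।",
    "positive": "I am going to school.",
    "negative": "The weather is nice today.",
}
with open("sample.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")

triplets = list(load_triplets("sample.jsonl"))
```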
|
|
|
|
|
---
|
|
|
|
|
### 🎯 Intended Use

- Fine-tuning multilingual embedding models (e.g., Gemma, BGE, LaBSE)
- Training models with contrastive or triplet loss
- Cross-lingual semantic retrieval
- Evaluating embedding alignment across languages
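For the triplet-loss use case, libraries such as sentence-transformers ship ready-made objectives (e.g. `losses.TripletLoss`); the quantity being minimized can be sketched in plain Python with cosine distance. The margin value below is illustrative:

```python
def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a, p) - d(a, n) + margin), with distance d = 1 - cosine."""
    d_pos = 1.0 - cosine(anchor, positive)
    d_neg = 1.0 - cosine(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

# A well-trained model embeds the anchor near the positive and far
# from the negative, driving the loss to zero:
good = triplet_loss([1.0, 0.0], [1.0, 0.0], [0.0, 1.0])  # 0.0
bad = triplet_loss([1.0, 0.0], [0.0, 1.0], [1.0, 0.0])   # 1.2
```

The margin forces the positive to be *at least that much* closer than the negative, not merely closer.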
|
|
|
|
|
---
|
|
|
|
|
### 🧪 Supported Tasks

| Task                    | Description                                                |
|-------------------------|------------------------------------------------------------|
| Sentence Embedding      | Learn language-agnostic sentence representations           |
| Semantic Similarity     | Evaluate cosine similarity between anchor/positive pairs   |
| Cross-lingual Retrieval | Retrieve aligned sentences across languages                |
| Contrastive Learning    | Train models to separate similar from dissimilar sentences |
|
|
|
|
|
---
|
|
|
|
|
### ⚖️ Language Balance

Each language pair contributes ~5,454 triplets, ensuring balanced representation. Triplet directionality is randomized to prevent source-target bias.
|
|
|
|
|
---
|
|
|
|
|
### 🔐 License

- License: CC-BY 4.0 (inherited from Samanantar)
- Free for academic, commercial, and open-source use
- Attribution required
|
|
|
|
|
---
|
|
|
|
|
### 🛠 Preprocessing Tips

- Tokenize with the target model's tokenizer (e.g., `GemmaTokenizer`)
- Truncate or chunk long sequences to fit the model's context limit
- Optional: add language tags to anchor/positive/negative for analysis
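The truncation tip can be sketched with whitespace tokens; in a real pipeline you would count tokens with the model's own tokenizer (e.g. `AutoTokenizer` from transformers) rather than `str.split`. The helper name and limit here are illustrative:

```python
def truncate_or_chunk(text, max_tokens=128):
    """Split text into chunks of at most max_tokens whitespace tokens.
    Placeholder for proper tokenizer-based length measurement."""
    tokens = text.split()
    if not tokens:
        return [""]
    return [" ".join(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

# 300 tokens -> chunks of 128, 128, and 44 tokens.
chunks = truncate_or_chunk("word " * 300)
```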
|
|
|
|
|
---
|
|
|
|
|
### 📈 Evaluation Metrics

- Cosine similarity
- Mean Reciprocal Rank (MRR)
- nDCG
- Retrieval accuracy
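The ranking metrics are straightforward once each anchor's candidates are sorted by cosine similarity; illustrative pure-Python versions (function names are ours, not a library API):

```python
def mrr(ranked, gold):
    """Mean Reciprocal Rank: ranked[i] is the candidate list for query i,
    best first; gold[i] is the correct candidate."""
    total = 0.0
    for cands, g in zip(ranked, gold):
        if g in cands:
            total += 1.0 / (cands.index(g) + 1)
    return total / len(gold)

def retrieval_accuracy(ranked, gold, k=1):
    """Fraction of queries whose gold candidate appears in the top k."""
    return sum(g in cands[:k] for cands, g in zip(ranked, gold)) / len(gold)

# Query 0 ranks its gold item second; query 1 ranks it first.
ranked = [["b", "a", "c"], ["x", "y", "z"]]
gold = ["a", "x"]
print(mrr(ranked, gold))                 # 0.75
print(retrieval_accuracy(ranked, gold))  # 0.5
```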
|
|
|
|
|
---
|
|
|
|
|
### 👤 Maintainer

- **Author**: Parvesh Rawal (XenArcAI)

---