---
language:
- nep
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1046
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: jangedoo/all-MiniLM-L6-v2-nepali
widget:
- source_sentence: राहदानीको लागि कागजात सत्यापनमा कस्तो मनोनयनपत्र चाहिन्छ?
sentences:
- सिम्यान्स अभिलेख किताबको लागि निवेदन फाराम अनुसूची-२क बमोजिमको ढाँचामा आधारित
हुन्छ।
- कुटनीतिक वा विशेष राहदानीको लागि कागजात सत्यापनमा सम्बन्धित पदमा नियुक्तिको मनोनयनपत्रको
प्रमाणित प्रतिलिपि चाहिन्छ।
- राहदानी रद्द गर्न महानिर्देशकले स्वीकृति दिन्छ।
- source_sentence: राहदानी वितरणमा त्रुटि सच्याउन कति समय लाग्छ?
sentences:
- राहदानी नियमावली, २०७७ मा अभिलेखको गोपनीयताको उल्लङ्घनको जाँचको नतिजाको अपीलको
नतिजाको कार्यान्वयनको अभिलेख बाह्र वर्षसम्म राखिन्छ।
- राहदानी वितरणमा त्रुटि सच्याउन सामान्यतः सात कार्यदिन लाग्छ, तर प्रक्रिया जटिल
भएमा बढी समय लाग्न सक्छ।
- राहदानीको लागि निवेदनमा जाँच गर्ने अधिकारीको नाम, सही, पद, मिति उल्लेख गर्नुपर्छ।
- source_sentence: राहदानीको लागि निवेदनमा कस्तो आवेदन स्रोत उल्लेख गर्नुपर्छ?
sentences:
- राहदानीको लागि निवेदनमा आवेदन स्रोत (विभाग, जिल्ला, वा नियोग) उल्लेख गर्नुपर्छ।
- राहदानी बुझाउने प्रक्रियामा त्रुटि सच्याउन सामान्यतः सात कार्यदिन लाग्छ, तर प्रक्रिया
जटिल भएमा बढी समय लाग्न सक्छ।
- राहदानीको लिए अनलाइन निवेदनमा निकटतम व्यक्तिसँगको सम्बन्ध (Relationship) उल्लेख
गर्नुपर्छ।
- source_sentence: विशेष राहदानी कसलाई जारी गरिन्छ?
sentences:
- राहदानी रद्द गर्न बाहक वा सम्बन्धित निकायको लिखित निवेदन चाहिन्छ।
- राहदानी नियमावली, २०७७ मा अभिलेखको गोपनीयताको उल्लङ्घनको जाँचको नतिजाको अपीलको
लागि जाँच गर्ने अधिकारीको नाम, सही, पद, मिति उल्लेख गर्नुपर्छ।
- विशेष राहदानी नगरपालिकाका प्रमुख, सहसचिव, जिल्ला न्यायाधीश, प्रदेश लोकसेवा आयोगका
सदस्य, लगायतका पदाधिकारीलाई जारी गरिन्छ।
- source_sentence: कुटनीतिक राहदानीको लागि निवेदनमा कस्तो ठेगाना विवरण चाहिन्छ?
sentences:
- कुटनीतिक राहदानीको लागि निवेदनमा जिल्ला, गाउँ/नगरपालिका, वडा नम्बर, गाउँ/सडक,
घर नम्बरको ठेगाना विवरण चाहिन्छ।
- राहदानीको लागि कागजात धुल्याउने प्रक्रिया महानिर्देशकको स्वीकृतिमा हुन्छ।
- राहदानीको विद्युतीय अभिलेख अनुसूची-७ बमोजिमको ढाँचामा आधारित हुन्छ।
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: sentenceTransformer_nepali_embedding
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 384
type: dim_384
metrics:
- type: cosine_accuracy@1
value: 0.41025641025641024
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6581196581196581
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7350427350427351
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8461538461538461
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.41025641025641024
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.21937321937321935
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14700854700854699
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0846153846153846
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.41025641025641024
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6581196581196581
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7350427350427351
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8461538461538461
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6218282635615644
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5504409171075837
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5571750406212126
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.42735042735042733
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6410256410256411
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.717948717948718
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8290598290598291
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.42735042735042733
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.21367521367521364
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14358974358974358
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08290598290598289
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.42735042735042733
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6410256410256411
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.717948717948718
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8290598290598291
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6159996592171239
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5487959571292905
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5563599760664051
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.39316239316239315
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5811965811965812
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6752136752136753
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8034188034188035
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.39316239316239315
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19373219373219372
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.135042735042735
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08034188034188033
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.39316239316239315
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5811965811965812
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6752136752136753
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8034188034188035
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5799237272193319
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5100054266720935
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5176470843483384
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.38461538461538464
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5811965811965812
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6410256410256411
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7606837606837606
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38461538461538464
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1937321937321937
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12820512820512817
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07606837606837605
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.38461538461538464
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5811965811965812
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6410256410256411
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7606837606837606
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.565217766093051
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5036663953330621
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5140223584530523
name: Cosine Map@100
---
# sentenceTransformer_nepali_embedding
This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from [jangedoo/all-MiniLM-L6-v2-nepali](https://huggingface.co/jangedoo/all-MiniLM-L6-v2-nepali) on a JSON dataset of Nepali question-answer pairs (see Training Details below). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [jangedoo/all-MiniLM-L6-v2-nepali](https://huggingface.co/jangedoo/all-MiniLM-L6-v2-nepali) <!-- at revision 418f7cf08ecbbc2ff0e8460bb6eb6457291102df -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** Nepali (nep)
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
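To confirm these settings programmatically, the loaded model exposes them directly (a small sketch; the repository id is the one used in the Usage section below):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ritesh-07/fine_tuned_model_02")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
print(model)                                     # prints the module stack shown above
```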
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ritesh-07/fine_tuned_model_02")
# Run inference
sentences = [
'कुटनीतिक राहदानीको लागि निवेदनमा कस्तो ठेगाना विवरण चाहिन्छ?',
'कुटनीतिक राहदानीको लागि निवेदनमा जिल्ला, गाउँ/नगरपालिका, वडा नम्बर, गाउँ/सडक, र घर नम्बरको ठेगाना विवरण चाहिन्छ।',
'राहदानीको लागि कागजात धुल्याउने प्रक्रिया महानिर्देशकको स्वीकृतिमा हुन्छ।',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
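Because semantic search is one of the listed use cases, here is a small retrieval sketch using `sentence_transformers.util.semantic_search`. The corpus sentences are reused from the widget examples above and are purely illustrative:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ritesh-07/fine_tuned_model_02")

corpus = [
    'कुटनीतिक राहदानीको लागि निवेदनमा जिल्ला, गाउँ/नगरपालिका, वडा नम्बर, गाउँ/सडक, र घर नम्बरको ठेगाना विवरण चाहिन्छ।',
    'राहदानीको लागि कागजात धुल्याउने प्रक्रिया महानिर्देशकको स्वीकृतिमा हुन्छ।',
    'राहदानी रद्द गर्न महानिर्देशकले स्वीकृति दिन्छ।',
]
query = 'कुटनीतिक राहदानीको लागि निवेदनमा कस्तो ठेगाना विवरण चाहिन्छ?'

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 most similar corpus sentences for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]])
```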
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_384`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 384
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4103 |
| cosine_accuracy@3 | 0.6581 |
| cosine_accuracy@5 | 0.735 |
| cosine_accuracy@10 | 0.8462 |
| cosine_precision@1 | 0.4103 |
| cosine_precision@3 | 0.2194 |
| cosine_precision@5 | 0.147 |
| cosine_precision@10 | 0.0846 |
| cosine_recall@1 | 0.4103 |
| cosine_recall@3 | 0.6581 |
| cosine_recall@5 | 0.735 |
| cosine_recall@10 | 0.8462 |
| **cosine_ndcg@10** | **0.6218** |
| cosine_mrr@10 | 0.5504 |
| cosine_map@100 | 0.5572 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.4274 |
| cosine_accuracy@3 | 0.641 |
| cosine_accuracy@5 | 0.7179 |
| cosine_accuracy@10 | 0.8291 |
| cosine_precision@1 | 0.4274 |
| cosine_precision@3 | 0.2137 |
| cosine_precision@5 | 0.1436 |
| cosine_precision@10 | 0.0829 |
| cosine_recall@1 | 0.4274 |
| cosine_recall@3 | 0.641 |
| cosine_recall@5 | 0.7179 |
| cosine_recall@10 | 0.8291 |
| **cosine_ndcg@10** | **0.616** |
| cosine_mrr@10 | 0.5488 |
| cosine_map@100 | 0.5564 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3932 |
| cosine_accuracy@3 | 0.5812 |
| cosine_accuracy@5 | 0.6752 |
| cosine_accuracy@10 | 0.8034 |
| cosine_precision@1 | 0.3932 |
| cosine_precision@3 | 0.1937 |
| cosine_precision@5 | 0.135 |
| cosine_precision@10 | 0.0803 |
| cosine_recall@1 | 0.3932 |
| cosine_recall@3 | 0.5812 |
| cosine_recall@5 | 0.6752 |
| cosine_recall@10 | 0.8034 |
| **cosine_ndcg@10** | **0.5799** |
| cosine_mrr@10 | 0.51 |
| cosine_map@100 | 0.5176 |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 64
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3846 |
| cosine_accuracy@3 | 0.5812 |
| cosine_accuracy@5 | 0.641 |
| cosine_accuracy@10 | 0.7607 |
| cosine_precision@1 | 0.3846 |
| cosine_precision@3 | 0.1937 |
| cosine_precision@5 | 0.1282 |
| cosine_precision@10 | 0.0761 |
| cosine_recall@1 | 0.3846 |
| cosine_recall@3 | 0.5812 |
| cosine_recall@5 | 0.641 |
| cosine_recall@10 | 0.7607 |
| **cosine_ndcg@10** | **0.5652** |
| cosine_mrr@10 | 0.5037 |
| cosine_map@100 | 0.514 |
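The truncated dimensions used in these evaluations can be reproduced at inference time with the `truncate_dim` argument (a sketch; this is not part of the original evaluation script):
```python
from sentence_transformers import SentenceTransformer

# Truncate embeddings to their first 256 dimensions, matching the dim_256 setting above
model = SentenceTransformer("ritesh-07/fine_tuned_model_02", truncate_dim=256)

embeddings = model.encode(["राहदानी नियमावली, २०७७ मा कस्तो निकायले राहदानी जारी गर्छ?"])
print(embeddings.shape)  # (1, 256)
```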
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 1,046 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 40.9 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 65.74 tokens</li><li>max: 235 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------|
| <code>राहदानी नियमावली, २०७७ मा अभिलेखको गोपनीयताको उल्लङ्घनको जाँचको नतिजाको अपील कसले जाँच गर्छ?</code> | <code>राहदानी नियमावली, २०७७ मा अभिलेखको गोपनीयताको उल्लङ्घनको जाँचको नतिजाको अपील मन्त्रालयले तोकेको समितिले जाँच गर्छ।</code> |
| <code>राहदानी नियमावली, २०७७ मा सत्यापनको लागि कस्तो सही चाहिन्छ?</code> | <code>राहदानी नियमावली, २०७७ मा सत्यापनको लागि निवेदकको सही, र नाबालकको हकमा बाबु, आमा, वा संरक्षकको सही चाहिन्छ।</code> |
| <code>राहदानी नियमावली, २०७७ मा कस्तो निकायले राहदानी जारी गर्छ?</code> | <code>राहदानी नियमावली, २०७७ मा विभाग, नियोग, वा जिल्ला प्रशासन कार्यालयले राहदानी जारी गर्छ।</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
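For reference, a loss with these parameters would typically be constructed as follows; this is a sketch of the standard API, not an exact reproduction of the training script:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("jangedoo/all-MiniLM-L6-v2-nepali")

# Inner loss: in-batch negatives over (anchor, positive) pairs
inner_loss = MultipleNegativesRankingLoss(model)

# Wrap it so the same objective is applied at every Matryoshka dimension
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1],
)
```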
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
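These non-default values map onto `SentenceTransformerTrainingArguments` roughly as follows (a sketch assuming the standard `SentenceTransformerTrainer` workflow; `output_dir` and `save_strategy` are assumptions not listed in the card):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="fine_tuned_model_02",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",             # assumed, required for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```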
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_384_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:-----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 3 | - | 0.5232 | 0.5074 | 0.4679 | 0.4451 |
| 2.0 | 6 | - | 0.5891 | 0.5703 | 0.5555 | 0.5275 |
| **3.0** | **9** | **-** | **0.6108** | **0.6052** | **0.5815** | **0.5594** |
| 3.4848 | 10 | 2.5112 | - | - | - | - |
| 4.0 | 12 | - | 0.6218 | 0.6160 | 0.5799 | 0.5652 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.53.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```