---
base_model: intfloat/multilingual-e5-large-instruct
language:
- en
- ne
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:45199
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: प्रधानमन्त्री नरेन्द्र मोदी सरकारका असफलताहरू के के हुन्?
sentences:
- >-
पूर्वोत्तर राज्यहरूका मुख्य समस्याहरू के के हुन् र तिनीहरूको केन्द्रीय
सरकारसँग असन्तोष के हो?
- पूर्णांक के हो?
- नरेन्द्र मोदी सरकारले कुन क्षेत्रमा असफल भएको छ?
- source_sentence: >-
मैले विचार गर्नुपर्ने कलेजहरू के के हुन्, विचार गर्नुपर्ने कारकहरू: केएमसी
म्यानिपल वा केएमसी मंगोलमा?
sentences:
- मंगलोर शान्त वा हिंस्रक स्थान हो?
- पुरुषहरूको तुलनामा महिलाहरूको लागि यौनिक आनन्द बढी हुन्छ कि हुँदैन?
- के कसैले केएमसी मानिपाल र मंगलोरको संक्षिप्त तुलना गर्न सक्छ?
- source_sentence: म कसरी मेरो अङ्ग्रेजी भाषा सुधार गर्न सक्छु?
sentences:
- म कसरी एक नेचुरल अंग्रेजी वक्ता बन्न सक्छु?
- >-
म जहाँ कुनै मूल अंग्रेजी वक्ताहरू छन् जो मेरो साथ मित्र बन्न चाहन्छन् र
मलाई मद्दत गर्न चाहन्छन्?
  - ने टी २०१६ को लागि निजी कलेजहरूको लागि एमबीबीएसको लागि के कटअफ हुनेछ?
- source_sentence: समय यात्रा सम्भव छ कि छैन? यदि छ भने, कसरी?
sentences:
- अन्धकारमय वेब सुरक्षित छ कि छैन ब्राउज गर्न?
- >-
यदि कुनै बितेको समय राम्रो थियो र समयको यात्रा सम्भव थियो भने म किन
वर्तमान समयमा बाँचिरहेको छु?
- भविष्यमा समय यात्रा सम्भव हुनेछ कि छैन?
- source_sentence: म कसरी बिस्तारै तौल घटाउन सक्छु?
sentences:
- कसरी कुनै केटाले त्यो केटीसँग बदला लिन सक्छ जसले उसलाई धोका दिएको छ?
- कस्तो प्रकारको आहार कसैले आहार नचाहने व्यक्तिका लागि उत्तम हुन्छ?
- वजन घटाउनको लागि कुनै राम्रो आहार हो?
license: apache-2.0
---

# SentenceTransformer based on intfloat/multilingual-e5-large-instruct
This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large-instruct on the universalml0/nepali_embedding_dataset dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: intfloat/multilingual-e5-large-instruct
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
  - universalml0/nepali_embedding_dataset
### Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
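
The modular breakdown above can be verified by hand. Below is a minimal sketch (not part of the generated card) that re-implements the Pooling and Normalize steps on top of the transformer's token embeddings and checks the result against `model.encode`; the example sentence is taken from the widget above.

```python
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "universalml0/finetuned_embedding_model_e5-large-multilingual-large",
    device="cpu",  # keep everything on one device for the comparison
)
texts = ["समय यात्रा सम्भव छ कि छैन?"]

# (0) Transformer: contextual token embeddings from the XLM-RoBERTa backbone
features = model.tokenize(texts)
with torch.no_grad():
    token_embeddings = model[0].auto_model(**features).last_hidden_state  # (1, seq_len, 1024)

# (1) Pooling: attention-mask-aware mean over token embeddings
mask = features["attention_mask"].unsqueeze(-1).float()
pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: L2-normalize so that dot product equals cosine similarity
manual = torch.nn.functional.normalize(pooled, p=2, dim=1)

reference = model.encode(texts, convert_to_tensor=True)
print(torch.allclose(manual, reference, atol=1e-5))  # expected: True
```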
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("universalml0/finetuned_embedding_model_e5-large-multilingual-large")
# Run inference
sentences = [
    'म कसरी बिस्तारै तौल घटाउन सक्छु?',
    'वजन घटाउनको लागि कुनै राम्रो आहार हो?',
    'कस्तो प्रकारको आहार कसैले आहार नचाहने व्यक्तिका लागि उत्तम हुन्छ?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
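
Note that the base model is an instruction-tuned E5 variant whose card prefixes retrieval queries with `Instruct: {task}\nQuery: {query}`. This card does not state whether the finetune was trained with such prompts, so the asymmetric-retrieval sketch below (task description included) is an assumption worth validating on your own data; the `prompt` argument is available in sentence-transformers 3.x.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("universalml0/finetuned_embedding_model_e5-large-multilingual-large")

# Hypothetical task description in the E5-instruct prompt format
task = "Given a question, retrieve questions that are semantically similar"
query = "म कसरी बिस्तारै तौल घटाउन सक्छु?"
documents = [
    "वजन घटाउनको लागि कुनै राम्रो आहार हो?",
    "कस्तो प्रकारको आहार कसैले आहार नचाहने व्यक्तिका लागि उत्तम हुन्छ?",
]

# Queries get the instruction prefix; documents are encoded without one
query_embedding = model.encode([query], prompt=f"Instruct: {task}\nQuery: ")
document_embeddings = model.encode(documents)

scores = model.similarity(query_embedding, document_embeddings)
print(scores)  # (1, 2) matrix of cosine similarities
```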
## Training Details

### Training Dataset

#### universalml0/nepali_embedding_dataset
- Dataset: universalml0/nepali_embedding_dataset
- Size: 45,199 training samples
- Columns: `anchor`, `positive`, and `negative`
- Approximate statistics based on the first 1000 samples:

|         | anchor | positive | negative |
|---------|--------|----------|----------|
| type    | string | string | string |
| details | min: 7 tokens, mean: 17.53 tokens, max: 486 tokens | min: 6 tokens, mean: 17.68 tokens, max: 512 tokens | min: 6 tokens, mean: 18.9 tokens, max: 156 tokens |
- Samples:

| anchor | positive | negative |
|---|---|---|
| भारतीय सरकारले ५०० र १००० रुपयाको नोटमाथि प्रतिबन्ध लगाउनुको कारण के थियो? | भारतीय सरकारले ५०० र १००० को नोटलाई निष्क्रिय पारेको छ तर तिनीहरूलाई ५०० र २००० को नोटहरूसँग प्रतिस्थापन गरेको छ। के यो विरोधाभासी छैन? | भारतीय सरकारले किन चाहेको भए सीमित मात्रामा नोटहरू मुद्रण गर्न र बजेट घाटा क्लियर गर्न सक्दैन? विशेष गरी, किन कुनै पनि देशले यो गर्न सक्दैन? |
| भारतीय हुनुको अनुभूति कस्तो हुन्छ? | भारतीय हुनुको अनुभूति कस्तो हुन्छ? | भारतीय महिला हुनुको अनुभव कस्तो हुन्छ? |
| के कुनै व्यक्तिले edWisor मार्फत कुनै नौकरी पाएको छ? | एडवाइजर वैध छ र के कसैले यस मार्फत कुनै नौकरी पाएको छ? | एलिटमसको माध्यमबाट कसैले काम पाएको छ? |

- Loss: `MultipleNegativesRankingLoss` with these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
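
With these parameters, each anchor is scored against its own positive, its hard negative, and every other positive and negative in the batch; cross-entropy over the scaled cosine similarities then pushes the true pair to the top. The following self-contained PyTorch sketch illustrates the idea (it mirrors, but is not, the library's implementation):

```python
import torch
import torch.nn.functional as F

def mnr_loss(anchors, positives, negatives, scale=20.0):
    """Multiple-negatives ranking loss over (batch, dim) embedding tensors."""
    candidates = torch.cat([positives, negatives], dim=0)  # (2 * batch, dim)
    # Cosine similarity of every anchor against every candidate, scaled by 20.0
    scores = scale * F.cosine_similarity(
        anchors.unsqueeze(1), candidates.unsqueeze(0), dim=-1
    )  # (batch, 2 * batch)
    labels = torch.arange(len(anchors))  # the i-th candidate is the i-th anchor's positive
    return F.cross_entropy(scores, labels)

# Toy check with random embeddings at this model's dimensionality
a, p, n = (torch.randn(4, 1024) for _ in range(3))
print(mnr_loss(a, p, n))
```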
### Training Hyperparameters

#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 4
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `warmup_ratio`: 0.3
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.3
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
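
For reference, a comparable run could be reconstructed with the sentence-transformers 3.x trainer API roughly as follows. This is a sketch under stated assumptions: the dataset split name (`train`) and the output directory are guesses, and only the non-default hyperparameters listed above are set explicitly.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
dataset = load_dataset("universalml0/nepali_embedding_dataset", split="train")  # assumed split name

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_embedding_model",  # hypothetical path
    per_device_train_batch_size=4,
    learning_rate=1e-6,
    num_train_epochs=1,
    warmup_ratio=0.3,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts acting as false negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```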
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|---|---|---|
| 0.0088 | 100 | 0.8671 |
| 0.0177 | 200 | 0.8234 |
| 0.0265 | 300 | 0.8223 |
| 0.0354 | 400 | 0.7423 |
| 0.0442 | 500 | 0.6605 |
| 0.0531 | 600 | 0.5558 |
| 0.0619 | 700 | 0.4076 |
| 0.0708 | 800 | 0.3617 |
| 0.0796 | 900 | 0.3087 |
| 0.0885 | 1000 | 0.2747 |
| 0.0973 | 1100 | 0.2409 |
| 0.1062 | 1200 | 0.229 |
| 0.1150 | 1300 | 0.209 |
| 0.1239 | 1400 | 0.2556 |
| 0.1327 | 1500 | 0.2536 |
| 0.1416 | 1600 | 0.2092 |
| 0.1504 | 1700 | 0.2464 |
| 0.1593 | 1800 | 0.1727 |
| 0.1681 | 1900 | 0.281 |
| 0.1770 | 2000 | 0.2289 |
| 0.1858 | 2100 | 0.2065 |
| 0.1947 | 2200 | 0.1751 |
| 0.2035 | 2300 | 0.231 |
| 0.2124 | 2400 | 0.2127 |
| 0.2212 | 2500 | 0.1908 |
| 0.2301 | 2600 | 0.2131 |
| 0.2389 | 2700 | 0.1704 |
| 0.2478 | 2800 | 0.1923 |
| 0.2566 | 2900 | 0.1635 |
| 0.2655 | 3000 | 0.2061 |
| 0.2743 | 3100 | 0.1843 |
| 0.2832 | 3200 | 0.1443 |
| 0.2920 | 3300 | 0.1513 |
| 0.3009 | 3400 | 0.1879 |
| 0.3097 | 3500 | 0.2372 |
| 0.3186 | 3600 | 0.1542 |
| 0.3274 | 3700 | 0.2523 |
| 0.3363 | 3800 | 0.2055 |
| 0.3451 | 3900 | 0.1474 |
| 0.3540 | 4000 | 0.1647 |
| 0.3628 | 4100 | 0.1615 |
| 0.3717 | 4200 | 0.1271 |
| 0.3805 | 4300 | 0.1451 |
| 0.3894 | 4400 | 0.1887 |
| 0.3982 | 4500 | 0.1334 |
| 0.4071 | 4600 | 0.1962 |
| 0.4159 | 4700 | 0.1695 |
| 0.4248 | 4800 | 0.1561 |
| 0.4336 | 4900 | 0.1146 |
| 0.4425 | 5000 | 0.1381 |
| 0.4513 | 5100 | 0.1452 |
| 0.4602 | 5200 | 0.2388 |
| 0.4690 | 5300 | 0.1951 |
| 0.4779 | 5400 | 0.1142 |
| 0.4867 | 5500 | 0.182 |
| 0.4956 | 5600 | 0.1968 |
| 0.5044 | 5700 | 0.1744 |
| 0.5133 | 5800 | 0.1868 |
| 0.5221 | 5900 | 0.1452 |
| 0.5310 | 6000 | 0.1345 |
| 0.5398 | 6100 | 0.1318 |
| 0.5487 | 6200 | 0.218 |
| 0.5575 | 6300 | 0.2118 |
| 0.5664 | 6400 | 0.1972 |
| 0.5752 | 6500 | 0.0935 |
| 0.5841 | 6600 | 0.1991 |
| 0.5929 | 6700 | 0.1252 |
| 0.6018 | 6800 | 0.1128 |
| 0.6106 | 6900 | 0.1585 |
| 0.6195 | 7000 | 0.2293 |
| 0.6283 | 7100 | 0.2104 |
| 0.6372 | 7200 | 0.1416 |
| 0.6460 | 7300 | 0.2004 |
| 0.6549 | 7400 | 0.1446 |
| 0.6637 | 7500 | 0.1171 |
| 0.6726 | 7600 | 0.1386 |
| 0.6814 | 7700 | 0.1291 |
| 0.6903 | 7800 | 0.1546 |
| 0.6991 | 7900 | 0.1484 |
| 0.7080 | 8000 | 0.129 |
| 0.7168 | 8100 | 0.1873 |
| 0.7257 | 8200 | 0.1333 |
| 0.7345 | 8300 | 0.1713 |
| 0.7434 | 8400 | 0.1016 |
| 0.7522 | 8500 | 0.1519 |
| 0.7611 | 8600 | 0.1851 |
| 0.7699 | 8700 | 0.144 |
| 0.7788 | 8800 | 0.1488 |
| 0.7876 | 8900 | 0.1568 |
| 0.7965 | 9000 | 0.1672 |
| 0.8053 | 9100 | 0.1236 |
| 0.8142 | 9200 | 0.0973 |
| 0.8230 | 9300 | 0.1491 |
| 0.8319 | 9400 | 0.2251 |
| 0.8407 | 9500 | 0.1433 |
| 0.8496 | 9600 | 0.2634 |
| 0.8584 | 9700 | 0.1723 |
| 0.8673 | 9800 | 0.2373 |
| 0.8761 | 9900 | 0.1065 |
| 0.8850 | 10000 | 0.1578 |
| 0.8938 | 10100 | 0.1127 |
| 0.9027 | 10200 | 0.1632 |
| 0.9115 | 10300 | 0.19 |
| 0.9204 | 10400 | 0.0958 |
| 0.9292 | 10500 | 0.1029 |
| 0.9381 | 10600 | 0.1183 |
| 0.9469 | 10700 | 0.1779 |
| 0.9558 | 10800 | 0.1571 |
| 0.9646 | 10900 | 0.1666 |
| 0.9735 | 11000 | 0.1405 |
| 0.9823 | 11100 | 0.147 |
| 0.9912 | 11200 | 0.1428 |
| 1.0 | 11300 | 0.1724 |

</details>
### Framework Versions
- Python: 3.9.5
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```