SentenceTransformer based on intfloat/multilingual-e5-large
This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large on the inhouse_devanagari dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: intfloat/multilingual-e5-large
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: inhouse_devanagari
Model Sources
- Documentation: https://sbert.net
- Repository: https://github.com/UKPLab/sentence-transformers
- Hugging Face: https://huggingface.co/models?library=sentence-transformers
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
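The three modules form an embed-pool-normalize pipeline: the XLM-RoBERTa encoder produces token embeddings, mean pooling averages them over the attention mask, and `Normalize()` L2-normalizes the result so that cosine similarity reduces to a dot product. A minimal sketch of what this pipeline computes with plain `transformers` (the model ID is a placeholder, as in the usage example below):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Placeholder ID; replace with the actual Hub ID of this model.
tokenizer = AutoTokenizer.from_pretrained("sentence_transformers_model_id")
encoder = AutoModel.from_pretrained("sentence_transformers_model_id")

batch = tokenizer(
    ["query: example"], padding=True, truncation=True, max_length=512, return_tensors="pt"
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True).
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Normalize() makes cosine similarity a plain dot product.
embedding = F.normalize(embedding, p=2, dim=1)
```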
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub (placeholder model ID)
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference with queries and documents
queries = [
    "वातातपाध्व-यानादि-परिहार्येष्व् अ-यन्त्रणम् । प्रयोज्यं सु-कुमाराणाम् ईश्वराणाम् सुखात्मनाम् ॥ ४५ ॥",
]
documents = [
    '**Ashtanga Hridayam, Chikitsa Sthana, chapter 13, sutra 45**\n\n**Sutra**:\nवातातपाध्व-यानादि-परिहार्येष्व् अ-यन्त्रणम् । प्रयोज्यं सु-कुमाराणाम् ईश्वराणाम् सुखात्मनाम् ॥ ४५ ॥\n\n**English Transliteration**:\nvātātapādhva-yānādi-parihāryeṣv a-yantraṇam | prayojyaṃ su-kumārāṇām īśvarāṇām sukhātmanām || 45 ||\n\n**English Translation**:\nWithout restrictions regarding avoidance of wind, sun, travel, etc., it can be used by delicate, wealthy, and happy individuals.',
    '**Ashtanga Hridayam, Sutra Sthana, chapter 22, sutra 34**\n\n**Sutra**:\nकच-सदन-सित-त्व-पिञ्जर-त्वं परिफुटनं शिरसः समीर-रोगान् । जयति जनयतीन्द्रिय-प्रसादं स्वर-हनु-मूर्द्ध-बलं च मूर्द्ध-तैलम् ॥ ३४ ॥\n\n**English Transliteration**:\nkaca-sadana-sita-tva-piñjara-tvaṃ parisphuṭanaṃ śirasaḥ samīra-rogān । jayati janayatīndriya-prasādaṃ svara-hanu-mūrddha-balaṃ ca mūrddha-tailam ॥ 34 ॥\n\n**English Translation**:\nHair-falling-white-ness-yellowish-ness splitting of head wind-diseases overcomes generates sense-organ-pleasure voice-jaw-head-strength and head-oil.',
    '**Ashtanga Hridayam, Sutra Sthana, chapter 6, sutra 129**\n\n**Sutra**:\nगुर्व् आम्रं वात-जित् पक्वं स्वाद्व् अम्लं कफ-शुक्र-कृत् । वृक्षाम्लं ग्राहि रूक्षोष्णं वात-श्लेष्म-हरं लघु ॥ १२९ ॥\n\n**English Transliteration**:\ngurv āmraṃ vāta-jit pakvaṃ svādv amlaṃ kapha-śukra-kṛt । vṛkṣāmlaṃ grāhi rūkṣoṣṇaṃ vāta-śleṣma-haraṃ laghu ॥ 129 ॥\n\n**English Translation**:\nHeavy mango vata-conquering ripe sweet-sour kapha-semen-doing. Garcinia astringent dry-hot vata-phlegm-removing light.',
]

# Encode queries and documents separately; e5-style models use distinct prompts for each.
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# (1, 1024) (3, 1024)

# Cosine similarity between every query and every document.
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
```
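Because the embeddings are unit-normalized, the similarity matrix can drive retrieval directly. For instance, ranking the documents for each query (a sketch continuing from the snippet above; `model.similarity` returns a torch tensor):

```python
import torch

# Rank the documents for each query by cosine similarity, highest first.
ranking = torch.argsort(similarities, dim=1, descending=True)
for q_idx, order in enumerate(ranking):
    print(f"query {q_idx}: best document index {order[0].item()}")
```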
Evaluation
Metrics
Triplet
- Datasets: Embedding_Dataset_Dev and all-nli-test
- Evaluated with TripletEvaluator
| Metric          | Embedding_Dataset_Dev | all-nli-test |
|:----------------|:----------------------|:-------------|
| cosine_accuracy | 0.9998                | 0.9996       |
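Scores of this kind can be reproduced with the library's `TripletEvaluator`, which measures how often an anchor is embedded closer to its positive than to its negative. A minimal sketch with hypothetical placeholder triplets (the reported numbers come from the Embedding_Dataset_Dev and all-nli-test triplet sets):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder ID

# Hypothetical triplets for illustration only; the actual evaluation
# used the datasets named above.
evaluator = TripletEvaluator(
    anchors=["a query sentence"],
    positives=["a document that matches the query"],
    negatives=["an unrelated document"],
    name="Embedding_Dataset_Dev",
)
print(evaluator(model))  # e.g. {"Embedding_Dataset_Dev_cosine_accuracy": ...}
```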
Training Details
Training Dataset
inhouse_devanagari
Evaluation Dataset
inhouse_devanagari
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 1
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
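For reference, these non-default values map onto the library's `SentenceTransformerTrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder, and the no-duplicates sampler comes from the `BatchSamplers` enum):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Mirrors the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)
```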
All Hyperparameters
Click to expand
overwrite_output_dir: False
do_predict: False
eval_strategy: steps
prediction_loss_only: True
per_device_train_batch_size: 16
per_device_eval_batch_size: 16
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 1
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 5e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 1
max_steps: -1
lr_scheduler_type: linear
lr_scheduler_kwargs: {}
warmup_ratio: 0.1
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
bf16: False
fp16: True
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
parallelism_config: None
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch_fused
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
project: huggingface
trackio_space_id: trackio
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: None
hub_always_push: False
hub_revision: None
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
include_for_metrics: []
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters:
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
include_tokens_per_second: False
include_num_input_tokens_seen: no
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
use_liger_kernel: False
liger_kernel_config: None
eval_use_gather_object: False
average_tokens_across_devices: True
prompts: None
batch_sampler: no_duplicates
multi_dataset_batch_sampler: proportional
router_mapping: {}
learning_rate_mapping: {}
Training Logs
| Epoch  | Step | Training Loss | Validation Loss | Embedding_Dataset_Dev_cosine_accuracy | all-nli-test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:--------------------------------------:|:----------------------------:|
| -1     | -1   | -             | -               | 0.9990                                 | -                            |
| 0.0396 | 100  | 0.4702        | 0.0037          | 0.9996                                 | -                            |
| 0.0792 | 200  | 0.0087        | 0.0041          | 0.9992                                 | -                            |
| 0.1189 | 300  | 0.004         | 0.0041          | 0.9994                                 | -                            |
| 0.1585 | 400  | 0.0037        | 0.0038          | 0.9994                                 | -                            |
| 0.1981 | 500  | 0.0041        | 0.0037          | 0.9994                                 | -                            |
| 0.2377 | 600  | 0.0011        | 0.0025          | 0.9994                                 | -                            |
| 0.2773 | 700  | 0.0046        | 0.0027          | 0.9996                                 | -                            |
| 0.3170 | 800  | 0.0014        | 0.0024          | 0.9998                                 | -                            |
| 0.3566 | 900  | 0.0008        | 0.0025          | 0.9998                                 | -                            |
| 0.3962 | 1000 | 0.0044        | 0.0027          | 1.0                                    | -                            |
| 0.4358 | 1100 | 0.0015        | 0.0027          | 1.0                                    | -                            |
| 0.4754 | 1200 | 0.0033        | 0.0031          | 0.9998                                 | -                            |
| 0.5151 | 1300 | 0.0071        | 0.0047          | 0.9996                                 | -                            |
| 0.5547 | 1400 | 0.0055        | 0.0027          | 0.9998                                 | -                            |
| 0.5943 | 1500 | 0.0025        | 0.0027          | 0.9994                                 | -                            |
| 0.6339 | 1600 | 0.003         | 0.0026          | 0.9994                                 | -                            |
| 0.6735 | 1700 | 0.0015        | 0.0024          | 0.9994                                 | -                            |
| 0.7132 | 1800 | 0.0017        | 0.0032          | 0.9996                                 | -                            |
| 0.7528 | 1900 | 0.0041        | 0.0025          | 0.9998                                 | -                            |
| 0.7924 | 2000 | 0.0041        | 0.0022          | 0.9998                                 | -                            |
| 0.8320 | 2100 | 0.0048        | 0.0022          | 0.9998                                 | -                            |
| 0.8716 | 2200 | 0.0011        | 0.0023          | 0.9998                                 | -                            |
| 0.9113 | 2300 | 0.0038        | 0.0024          | 0.9996                                 | -                            |
| 0.9509 | 2400 | 0.0039        | 0.0022          | 0.9998                                 | -                            |
| 0.9905 | 2500 | 0.0052        | 0.0020          | 0.9998                                 | -                            |
| -1     | -1   | -             | -               | -                                      | 0.9996                       |
Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.57.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.2.0
- Tokenizers: 0.22.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title = {Efficient Natural Language Response Suggestion for Smart Reply},
    author = {Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year = {2017},
    eprint = {1705.00652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}
```