sentenceTransformer_nepali_embedding

This is a sentence-transformers model finetuned from jangedoo/all-MiniLM-L6-v2-nepali on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: jangedoo/all-MiniLM-L6-v2-nepali
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: nep
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
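
Since the pipeline is just a BERT encoder followed by mean pooling and L2 normalization, the same embeddings can be reproduced with the plain transformers library. The snippet below is a minimal sketch of that pipeline (not part of the original card); the example sentences are illustrative.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ritesh-07/sbert-nepali-sevabot")
model = AutoModel.from_pretrained("ritesh-07/sbert-nepali-sevabot")

sentences = ["नेपालको संविधान", "नेपालको कानुन"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

# Mean pooling over token embeddings, ignoring padding positions
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
# L2-normalize, matching the Normalize() module
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 384])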

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ritesh-07/sbert-nepali-sevabot")
# Run inference
sentences = [
    # "What is the legal status of an Ordinance?"
    'अध्यादेश (Ordinance) को कानुनी हैसियत के हुन्छ?',
    # "Once issued, an ordinance is valid like an Act, but it automatically lapses
    # if not approved within 60 days of the federal parliament convening."
    'अध्यादेश जारी भएपछि ऐन सरह मान्य हुनेछ, तर संघीय संसदको बैठक बसेको ६० दिनभित्र स्वीकार नगरिएमा स्वतः निष्क्रिय हुनेछ।',
    # "Nepal's independence, sovereignty, territorial integrity, nationality, and
    # autonomy are fundamental matters of Nepal's national interest."
    'नेपालको स्वतन्त्रता, सार्वभौमसत्ता, भौगोलिक अखण्डता, राष्ट्रियता, र स्वाधीनता नेपालको राष्ट्रिय हितका आधारभूत विषय हुन्।',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4365, 0.2621],
#         [0.4365, 1.0000, 0.1539],
#         [0.2621, 0.1539, 1.0000]])
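
The same embeddings also support simple semantic search. Continuing from the snippet above, here is a minimal sketch; the corpus and query are illustrative, not taken from the training data.

# Illustrative Nepali corpus and query
corpus = [
    'अध्यादेश जारी भएपछि ऐन सरह मान्य हुनेछ।',
    'नेपालको सार्वभौमसत्ता राष्ट्रिय हितको आधारभूत विषय हो।',
]
query = 'अध्यादेशको कानुनी हैसियत के हो?'

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Rank corpus entries by cosine similarity to the query
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, len(corpus)]
best = scores[0].argmax().item()
print(corpus[best], scores[0, best].item())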

Evaluation

Metrics

Information Retrieval (dim_384)

The four tables below report the same retrieval metrics computed at each of the model's four Matryoshka output dimensions (384, 256, 128, and 64), matching the dim_* columns in the Training Logs.

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.4196 |
| cosine_accuracy@3   | 0.6923 |
| cosine_accuracy@5   | 0.7483 |
| cosine_accuracy@10  | 0.8042 |
| cosine_precision@1  | 0.4196 |
| cosine_precision@3  | 0.2308 |
| cosine_precision@5  | 0.1497 |
| cosine_precision@10 | 0.0804 |
| cosine_recall@1     | 0.4196 |
| cosine_recall@3     | 0.6923 |
| cosine_recall@5     | 0.7483 |
| cosine_recall@10    | 0.8042 |
| cosine_ndcg@10      | 0.6231 |
| cosine_mrr@10       | 0.5639 |
| cosine_map@100      | 0.5704 |

Information Retrieval (dim_256)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.4406 |
| cosine_accuracy@3   | 0.7063 |
| cosine_accuracy@5   | 0.7343 |
| cosine_accuracy@10  | 0.8042 |
| cosine_precision@1  | 0.4406 |
| cosine_precision@3  | 0.2354 |
| cosine_precision@5  | 0.1469 |
| cosine_precision@10 | 0.0804 |
| cosine_recall@1     | 0.4406 |
| cosine_recall@3     | 0.7063 |
| cosine_recall@5     | 0.7343 |
| cosine_recall@10    | 0.8042 |
| cosine_ndcg@10      | 0.6314 |
| cosine_mrr@10       | 0.5754 |
| cosine_map@100      | 0.5811 |

Information Retrieval (dim_128)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.3846 |
| cosine_accuracy@3   | 0.6224 |
| cosine_accuracy@5   | 0.6923 |
| cosine_accuracy@10  | 0.7832 |
| cosine_precision@1  | 0.3846 |
| cosine_precision@3  | 0.2075 |
| cosine_precision@5  | 0.1385 |
| cosine_precision@10 | 0.0783 |
| cosine_recall@1     | 0.3846 |
| cosine_recall@3     | 0.6224 |
| cosine_recall@5     | 0.6923 |
| cosine_recall@10    | 0.7832 |
| cosine_ndcg@10      | 0.5802 |
| cosine_mrr@10       | 0.5156 |
| cosine_map@100      | 0.5211 |

Information Retrieval (dim_64)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.3636 |
| cosine_accuracy@3   | 0.5594 |
| cosine_accuracy@5   | 0.6364 |
| cosine_accuracy@10  | 0.6993 |
| cosine_precision@1  | 0.3636 |
| cosine_precision@3  | 0.1865 |
| cosine_precision@5  | 0.1273 |
| cosine_precision@10 | 0.0699 |
| cosine_recall@1     | 0.3636 |
| cosine_recall@3     | 0.5594 |
| cosine_recall@5     | 0.6364 |
| cosine_recall@10    | 0.6993 |
| cosine_ndcg@10      | 0.5286 |
| cosine_mrr@10       | 0.4741 |
| cosine_map@100      | 0.4826 |
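
Because the model was trained with MatryoshkaLoss, its embeddings can be truncated to a smaller dimension at load time, trading the quality differences shown above for lower storage and faster search. A minimal sketch using the truncate_dim option of SentenceTransformer:

from sentence_transformers import SentenceTransformer

# encode() now returns 128-dimensional embeddings
model = SentenceTransformer("ritesh-07/sbert-nepali-sevabot", truncate_dim=128)
embeddings = model.encode(["नेपालको संविधान"])
print(embeddings.shape)  # (1, 128)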

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 1,283 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:

    |         | anchor                                             | positive                                            |
    |:--------|:---------------------------------------------------|:----------------------------------------------------|
    | type    | string                                             | string                                              |
    | details | min: 14 tokens, mean: 36.49 tokens, max: 66 tokens | min: 18 tokens, mean: 74.91 tokens, max: 256 tokens |
  • Samples (see the sketch after this list for how such pairs can be loaded):

    | anchor | positive |
    |:-------|:---------|
    | अध्यागमन बिन्दुमा विदेशीले 'Departure Form' भर्नु पर्छ कि पर्दैन? | हो, नेपाल छोडेर जाने प्रत्येक विदेशी नागरिकले अध्यागमन कार्यालयमा आफ्नो विवरण सहितको डिपार्चर फाराम (प्रस्थान फाराम) बुझाउनु अनिवार्य छ। |
    | कार्य सहमति (Work Consent) प्राप्त गर्न सामान्यतया कति समय लाग्छ? | सबै आवश्यक कागजातहरू उपलब्ध भएपछि निवेदन दर्ता भएको एक हप्ता भित्रमा गृह मन्त्रालयबाट कार्य सहमति प्राप्त हुन्छ। |
    | कुलतमा फसेका व्यक्तिहरूलाई सुधार्न नेपालमा कस्ता केन्द्रहरू छन्? | नेपालभर २४० भन्दा बढी उपचार तथा पुनर्स्थापना केन्द्रहरू सञ्चालनमा छन्, जहाँ वैज्ञानिक विधिबाट डिटक्सिफाइ गर्ने र विशेषज्ञ चिकित्सकको सल्लाहमा उपचार गर्ने गरिन्छ। |
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
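
For reference, the pair format and loss configuration above correspond roughly to the following setup in Sentence Transformers. This is a sketch, not the exact training script; the data file name is hypothetical.

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("jangedoo/all-MiniLM-L6-v2-nepali")

# Hypothetical file name; each record holds an "anchor" and a "positive" column
train_dataset = load_dataset("json", data_files="train.json", split="train")

# Inner loss: in-batch negatives over (anchor, positive) pairs
inner_loss = MultipleNegativesRankingLoss(model)
# Wrapper that applies the inner loss at each truncated embedding dimension
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1],
)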
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
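
Taken together, these settings correspond roughly to the following SentenceTransformerTrainingArguments. This is a sketch: output_dir is hypothetical, and save_strategy="epoch" is an assumption (not listed above) needed for load_best_model_at_end to be valid.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="sbert-nepali-sevabot",  # hypothetical
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)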

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: None
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

| Epoch      | Step   | Training Loss | dim_384_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0        | 3      | -             | 0.5715                 | 0.5649                 | 0.5485                 | 0.4889                |
| 2.0        | 6      | -             | 0.6067                 | 0.6040                 | 0.5695                 | 0.5163                |
| 3.0        | 9      | -             | 0.6205                 | 0.6266                 | 0.5798                 | 0.5294                |
| 3.3902     | 10     | 3.9304        | -                      | -                      | -                      | -                     |
| **4.0**    | **12** | -             | **0.6231**             | **0.6314**             | **0.5802**             | **0.5286**            |
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.6
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.12.0
  • Datasets: 4.5.0
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}