SentenceTransformer based on indobenchmark/indobert-base-p2

This is a sentence-transformers model finetuned from indobenchmark/indobert-base-p2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: indobenchmark/indobert-base-p2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~0.1B parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
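
The Pooling module above uses mean pooling: token embeddings from the BERT encoder are averaged, weighted by the attention mask so that padding positions do not contribute. A minimal sketch in plain PyTorch (the tensor shapes and toy values below are illustrative, not taken from the model):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over the sequence, ignoring padding.

    token_embeddings: (batch, seq_len, dim)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # (batch, dim)
    counts = mask.sum(dim=1).clamp(min=1e-9)         # (batch, 1), avoid division by zero
    return summed / counts

# Toy example: batch of 2 sequences, 3 tokens each, 4-dim embeddings
emb = torch.ones(2, 3, 4)
emb[0, 2] = 100.0                  # garbage values at a padding position
mask = torch.tensor([[1, 1, 0],    # last token of the first sequence is padding
                     [1, 1, 1]])
pooled = mean_pool(emb, mask)
print(pooled.shape)  # torch.Size([2, 4])
```

Because the padding position is masked out, the garbage values at `emb[0, 2]` have no effect on the pooled embedding.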

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("yosriku/Indobert-Base-p2-Trash-Large-EXP2")
# Run inference
sentences = [
    'Siapa yang menjadi target utama sanksi pidana menurut ketentuan Pasal 40 dalam peraturan pengelolaan sampah ini?',
    'akan tetapi isi pada Bab XVIII Ketentuan Pidana Pasal 40, hanya menjelaskan bentuk sanksi pidana bagi wajib retribusi yang tidak memenuhi kewajibannya dan berhubungan dengan kerugian keuangan daerah. Tidak terdapat penjelasan tindak',
    '. penguasaan metodologi penyusunan amdal; b. kemampuan melakukan pelingkupan, prakiraan, dan evaluasi dampak serta pengambilan keputusan; dan c. kemampuan menyusun rencana pengelolaan dan pemantauan lingkungan hidup',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6013, 0.0124],
#         [0.6013, 1.0000, 0.0353],
#         [0.0124, 0.0353, 1.0000]])
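
`model.similarity` uses cosine similarity for this model, so the score matrix above can be reproduced directly from the embeddings. A minimal sketch with NumPy (the 2-dimensional toy vectors stand in for real 768-dimensional sentence embeddings):

```python
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between the rows of a and the rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

# Toy 3-vector batch standing in for three sentence embeddings
emb = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [0.0, 1.0]])
sims = cos_sim(emb, emb)
print(np.round(sims, 4))
# The diagonal is 1.0: every embedding is maximally similar to itself,
# just as in the score matrix above.
```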

Training Details

Training Dataset

Unnamed Dataset

  • Size: 7,528 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    anchor:   string · min 8 tokens · mean 20.96 tokens · max 76 tokens
    positive: string · min 6 tokens · mean 35.63 tokens · max 130 tokens
    negative: string · min 6 tokens · mean 36.01 tokens · max 117 tokens
  • Samples:
    Sample 1
      anchor: Apakah warga negara berhak mendapatkan informasi dan pendidikan mengenai lingkungan?
      positive: (2) Setiap orang berhak mendapatkan pendidikan lingkungan hidup, akses informasi, akses partisipasi, dan akses keadilan dalam memenuhi hak atas lingkungan hidup yang baik dan sehat
      negative: Pasal 77 Menteri dapat menerapkan sanksi administratif terhadap penanggung jawab usaha dan/atau kegiatan jika Pemerintah menganggap pemerintah daerah secara sengaja tidak menerapkan sanksi administratif terhadap pelanggaran yang serius di bidang perlindungan dan pengelolaan lingkungan hidup
    Sample 2
      anchor: kalimat ini menjadi kalimat tanya: apa di rumah?!
      positive: untuk mengubah sampah menjadi sesuatu yang memiliki nilai ekonomis dan mengelola sampah agar menjadi sesuatu yang tidak membahayakan bagi lingkungan hidup.
      negative: Konsep pengelolaan sampah adalah mencegah timbulan sampah secara pemilahan dalam bentuk pengelompokan dan pemisahan sampah sesuai dengan jenis, jumlah, dan/atau sifat sampah; b
    Sample 3
      anchor: dengan kalimat ini; Jika tidak, jangan salah!: Peraturan apa yang mengatur baku mutu air laut? (Kamus lama). Lalu saja kalimat: Apakah?
      positive: (4) Ketentuan lebih lanjut mengenai baku mutu lingkungan hidup sebagaimana dimaksud pada ayat (2) huruf a, huruf c, huruf d, dan huruf g diatur dalam Peraturan Pemerintah
      negative: Menurut Masjhoer [3], Perlu dilakukan kajian terkait pengelolaan sampah di Kawasan Wisata Pantai Parangtritis dari berbagai aspek, termasuk aspek teknologi pengolahan sampah
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
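
MultipleNegativesRankingLoss treats, for each anchor, its paired positive as the correct candidate among all positives (and any explicit hard negatives) in the batch, and applies cross-entropy over cosine similarities multiplied by the scale (20.0 here). A minimal sketch of this in-batch-negatives formulation in plain PyTorch, not the library's exact implementation:

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(anchors, positives, negatives=None, scale=20.0):
    """Cross-entropy over scaled cosine similarities with in-batch negatives.

    anchors, positives: (batch, dim); each anchor's positive sits at the same row.
    negatives: optional (batch, dim) hard negatives, appended as extra candidates.
    """
    candidates = positives if negatives is None else torch.cat([positives, negatives], dim=0)
    a = F.normalize(anchors, dim=1)
    c = F.normalize(candidates, dim=1)
    scores = scale * (a @ c.T)                  # (batch, n_candidates) cosine * scale
    labels = torch.arange(anchors.size(0))      # correct candidate = same row index
    return F.cross_entropy(scores, labels)

# Toy batch of 4 triplets in 8 dimensions; positives are near-copies of the anchors,
# so the loss should be small.
torch.manual_seed(0)
anchors = torch.randn(4, 8)
positives = anchors + 0.01 * torch.randn(4, 8)
negatives = torch.randn(4, 8)
loss = multiple_negatives_ranking_loss(anchors, positives, negatives)
print(float(loss))
```

Because every other sample in the batch serves as a negative, larger batch sizes (such as the 64 used here) effectively give the loss more negatives per anchor.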
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • learning_rate: 2e-05
  • fp16: True
  • push_to_hub: True
  • hub_model_id: yosriku/Indobert-Base-p2-Trash-Large-EXP2
  • hub_strategy: end
  • hub_private_repo: False
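
The non-default settings above can be assembled into a training configuration for a similar run. A minimal sketch using `SentenceTransformerTrainingArguments` from sentence-transformers (the `output_dir` path is a placeholder; all other settings below are taken from this card, and everything not listed keeps its default value):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/indobert-trash-exp2",   # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=64,
    learning_rate=2e-5,
    fp16=True,
    push_to_hub=True,
    hub_model_id="yosriku/Indobert-Base-p2-Trash-Large-EXP2",
    hub_strategy="end",
    hub_private_repo=False,
)
```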

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: yosriku/Indobert-Base-p2-Trash-Large-EXP2
  • hub_strategy: end
  • hub_private_repo: False
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.0847 10 3.8625
0.1695 20 3.4011
0.2542 30 3.2209
0.3390 40 3.1328
0.4237 50 3.0992
0.5085 60 3.0913
0.5932 70 3.0271
0.6780 80 2.9523
0.7627 90 2.9727
0.8475 100 2.9186
0.9322 110 2.9728
1.0169 120 2.8646
1.1017 130 2.4617
1.1864 140 2.4046
1.2712 150 2.5214
1.3559 160 2.3940
1.4407 170 2.4987
1.5254 180 2.5064
1.6102 190 2.4639
1.6949 200 2.4936
1.7797 210 2.3626
1.8644 220 2.5017
1.9492 230 2.3211
2.0339 240 2.3051
2.1186 250 2.0179
2.2034 260 2.1431
2.2881 270 2.2080
2.3729 280 2.0765
2.4576 290 2.0265
2.5424 300 1.9875
2.6271 310 2.0520
2.7119 320 2.1863
2.7966 330 2.0902
2.8814 340 2.0625
2.9661 350 2.0872

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.0+cu126
  • Accelerate: 1.12.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}