SentenceTransformer based on indobenchmark/indobert-base-p2

This is a sentence-transformers model finetuned from indobenchmark/indobert-base-p2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: indobenchmark/indobert-base-p2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
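
If you want to assemble this same stack by hand instead of loading the finetuned checkpoint, the modules above map onto sentence_transformers.models roughly as follows (a minimal sketch; the hosted model already bundles this configuration):

from sentence_transformers import SentenceTransformer, models

# BertModel backbone; inputs are truncated at 512 tokens and not lowercased.
word_embedding = models.Transformer("indobenchmark/indobert-base-p2", max_seq_length=512)
# Mean pooling over token embeddings produces the 768-dimensional sentence vector.
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding, pooling])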

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference:

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("yosriku/Indobert-Base-p2-Trash-Medium-EXP3")
# Run inference
sentences = [
    'ini, katakan: “Halo,”nya. 4?',
    'Pasal 5 Cukup jelas. Pasal 6 Huruf a Cukup jelas. Huruf b',
    '(4) Setiap orang berhak untuk berperan dalam perlindungan dan pengelolaan lingkungan hidup sesuai dengan peraturan perundang-undangan.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.3895, 0.1044],
#         [0.3895, 1.0000, 0.0200],
#         [0.1044, 0.0200, 1.0000]])
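
The same embeddings also support semantic search. Continuing the snippet above, a minimal sketch (the query string here is hypothetical):

from sentence_transformers import util

# Encode a query and rank the corpus sentences by cosine similarity.
query_embedding = model.encode("hak masyarakat atas lingkungan hidup")  # hypothetical query
hits = util.semantic_search(query_embedding, embeddings, top_k=2)
print(hits[0])
# e.g. [{'corpus_id': 2, 'score': ...}, {'corpus_id': 1, 'score': ...}]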

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,516 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples (all three columns are strings):
    anchor: min 8 tokens, mean 20.04 tokens, max 80 tokens
    positive: min 6 tokens, mean 35.53 tokens, max 117 tokens
    negative: min 8 tokens, mean 35.65 tokens, max 117 tokens
  • Samples (each row gives anchor, positive, and negative, reproduced verbatim):
    Sample 1
      anchor: Apa tujuan utama yang ingin dicapai melalui riset penggunaan teknologi gasifikasi di Pantai Parangtritis?
      positive: Penelitian ini bertujuan untuk mengetahui besarnya potensi energi listrik yang dihasilkan dari sampah organik d i Kawasan Wisata Pantai Parangtritis menggunakan proses gasifikasi
      negative: bahwa dalam pengelolaan sampah diperlukan kepastian hukum, kejelasan tanggung jawab dan kewenangan Pemerintah, pemerintahan daerah, serta peran masyarakat dan dunia usaha sehingga pengelolaan sampah dapat berjalan secara proporsional, efektif, dan
    Sample 2
      anchor: Jelaskan transparansi pemerintah terkait permohonan dan keputusan izin lingkungan kepada publik.
      positive: Pasal 39 (1) Menteri, gubernur, atau bupati/walikota sesuai dengan kewenangannya wajib mengumumkan setiap permohonan dan keputusan izin lingkungan .
      negative: bahwa untuk p enanganan sampah laut diperlukan komi tmen b. bahwa akibat pencemaran sampah plastik d i laut, telah ditemuk an k andung an plastik berukuran mikro dan nano pada biota dan surnb er daya laut di perairan Indonesia;
    Sample 3
      anchor: menjadi ini: Kalau kita ingin mendapatkan pengelolaan fisik sampah laut, maka ubah kalimat itu. Sekarang! mereka-sama? lagi sampah tersebut dibuang
      positive: jadi pengelolaan sampah yang bersumber dari darat; c. penanggulangan sampah di pesisir dan laut; d. mekanisme pendanaan, penguatan kelembagaan, pengawasan, dan penegakan hukum;
      negative: Pasal 118 Terhadap tindak pidana sebagaimana dimaksud dalam Pasal 116 ayat (1) huruf a, sanksi pidana dijatuhkan kepada badan usaha yang diwakili oleh pengurus yang berwenang mewakili di dalam dan di luar pengadilan sesuai dengan peraturan
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
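
The loss and dataset format above translate into the following training setup. This is a minimal sketch, not the author's exact script; the triplet strings are placeholders for the real data:

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Triplet columns must match the dataset described above: anchor, positive, negative.
train_dataset = Dataset.from_dict({
    "anchor": ["placeholder question"],
    "positive": ["placeholder matching passage"],
    "negative": ["placeholder non-matching passage"],
})

model = SentenceTransformer("indobenchmark/indobert-base-p2")
# scale=20.0 with cosine similarity matches the loss parameters listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)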
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 64
  • learning_rate: 3e-05
  • num_train_epochs: 5
  • fp16: True
  • push_to_hub: True
  • hub_model_id: yosriku/Indobert-Base-p2-Trash-Medium-EXP3
  • hub_strategy: end
  • hub_private_repo: False
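
Plugged into the Sentence Transformers v3+ trainer API, these hyperparameters would look roughly like this (a sketch continuing the loss setup above, reusing model, loss, and train_dataset; output_dir is hypothetical):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # hypothetical path
    per_device_train_batch_size=64,
    learning_rate=3e-5,
    num_train_epochs=5,
    fp16=True,
    push_to_hub=True,
    hub_model_id="yosriku/Indobert-Base-p2-Trash-Medium-EXP3",
    hub_strategy="end",
    hub_private_repo=False,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()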

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: yosriku/Indobert-Base-p2-Trash-Medium-EXP3
  • hub_strategy: end
  • hub_private_repo: False
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.1408 10 3.3976
0.2817 20 2.8671
0.4225 30 2.6188
0.5634 40 2.6673
0.7042 50 2.741
0.8451 60 2.6895
0.9859 70 2.7324
1.1268 80 2.1733
1.2676 90 2.0525
1.4085 100 2.0191
1.5493 110 1.8534
1.6901 120 2.003
1.8310 130 2.0371
1.9718 140 1.9147
2.1127 150 1.5047
2.2535 160 1.3372
2.3944 170 1.5056
2.5352 180 1.5057
2.6761 190 1.3689
2.8169 200 1.5733
2.9577 210 1.4809
3.0986 220 1.1431
3.2394 230 1.0445
3.3803 240 0.9646
3.5211 250 1.0368
3.6620 260 1.139
3.8028 270 1.0242
3.9437 280 1.0759
4.0845 290 0.9034
4.2254 300 0.8099
4.3662 310 0.7773
4.5070 320 0.8002
4.6479 330 0.7616
4.7887 340 0.7982
4.9296 350 0.9101

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.0+cu126
  • Accelerate: 1.12.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}