SentenceTransformer

This is a sentence-transformers model trained on Korean sentence-pair data (see Training Details). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
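
The module configuration above (maximum sequence length, pooling mode, embedding width) can be verified once the model is loaded as shown in the Usage section below; a minimal sketch:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("josangho99/paraphrase-multilingual-MiniLM-L12-v2-kor")

print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 384
print(model)                                     # prints the Transformer + Pooling stack shown above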

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("josangho99/paraphrase-multilingual-MiniLM-L12-v2-kor")
# Run inference
sentences = [
    '학회랑 저널 홍보 메일 중 더 잦은 빈도로 오는 메일은?',  # "Between conference and journal promotional emails, which arrives more often?"
    '바로 엑셀파일을 지메일에서 읽는 방법 좀 알려주겠니?',  # "Could you tell me how to read an Excel file directly in Gmail?"
    '궁금합니다. 강원영동지역에 비 오는 날이.',  # "I'm curious about rainy days in the Gangwon Yeongdong region."
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.0377, -0.0055],
#         [-0.0377,  1.0000,  0.1380],
#         [-0.0055,  0.1380,  1.0000]])
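
The same embeddings also support semantic search over a small corpus. Below is a minimal sketch using sentence_transformers.util.semantic_search; the corpus and query sentences are illustrative and not taken from this card:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("josangho99/paraphrase-multilingual-MiniLM-L12-v2-kor")

# Illustrative Korean corpus and query
corpus = [
    "오늘 서울 날씨는 맑습니다.",                # "The weather in Seoul is clear today."
    "주말에 등산을 갈 예정입니다.",              # "I plan to go hiking this weekend."
    "이메일 첨부파일을 여는 방법을 알려주세요.",  # "Please tell me how to open an email attachment."
]
query = "메일에 첨부된 파일은 어떻게 열 수 있나요?"  # "How can I open a file attached to an email?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 most similar corpus sentences for the query (cosine similarity)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))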

Evaluation

Metrics

Semantic Similarity

Metric Value
pearson_cosine 0.9498
spearman_cosine 0.9107
pearson_euclidean 0.9055
spearman_euclidean 0.8885
pearson_manhattan 0.9045
spearman_manhattan 0.8876
pearson_dot 0.8823
spearman_dot 0.8633
pearson_max 0.9498
spearman_max 0.9107
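
These correlations are the standard output of sentence-transformers' EmbeddingSimilarityEvaluator (Pearson/Spearman over cosine, Euclidean, Manhattan, and dot-product similarities). A sketch of how such numbers are produced; the sentence pairs and gold scores below are illustrative, and the actual evaluation split is not specified in this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("josangho99/paraphrase-multilingual-MiniLM-L12-v2-kor")

# Illustrative Korean sentence pairs with gold similarity scores in [0, 1]
sentences1 = [
    "비 내릴 때는 장화를 신으세요.",    # "Wear rain boots when it rains."
    "식사를 거르지 마십시오.",          # "Do not skip meals."
    "주말에 등산을 갈 예정입니다.",      # "I plan to go hiking this weekend."
]
sentences2 = [
    "비 오는 날에는 장화를 신는 게 좋아요.",  # "It's good to wear rain boots on rainy days."
    "한국 단풍 명소가 알고 싶습니다.",        # "I'd like to know autumn foliage spots in Korea."
    "이번 주말에 산에 오를 계획이에요.",      # "I'm planning to climb a mountain this weekend."
]
gold_scores = [0.9, 0.0, 0.8]

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=sentences1,
    sentences2=sentences2,
    scores=gold_scores,
    name="sts-dev",
)
print(evaluator(model))  # Pearson/Spearman correlations for cosine, euclidean, manhattan, dot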

Training Details

Training Dataset

klue-nli, klue-sts

Unnamed Dataset

  • Size: 10,501 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
                sentence_0            sentence_1            label
    type        string                string                float
    details     min: 8 tokens         min: 8 tokens         min: 0.0
                mean: 21.47 tokens    mean: 20.07 tokens    mean: 0.43
                max: 70 tokens        max: 68 tokens        max: 1.0
  • Samples:
    sentence_0: 영국도 한 달 동안 가게, 식당 등의 영업을 중단시켰습니다.
                ("The UK also shut down shops, restaurants, and other businesses for a month.")
    sentence_1: 충남 서천군에 위치한 국립생태원은 미디리움, 4D 영상관 등 일부 시설의 운영을 중단한다.
                ("The National Institute of Ecology in Seocheon-gun, Chungnam, is suspending operation of some facilities, including the Midirium and the 4D theater.")
    label: 0.08

    sentence_0: 비 내릴 때는 다른 것 말고 장화 신도록 해.
                ("When it rains, wear rain boots rather than anything else.")
    sentence_1: 한국 단풍 명소가 알고 싶습니다.
                ("I'd like to know good spots for autumn foliage in Korea.")
    label: 0.0

    sentence_0: 식사를 거르면 몸에 더 안 좋으니 거르지 마십시오.
                ("Skipping meals is bad for your health, so please don't skip them.")
    sentence_1: 바쁘더라도 까먹지 말고 한번 잡은 약속은 꼭 지켜줘.
                ("Even if you're busy, don't forget: keep the appointments you've made.")
    label: 0.0
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
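
For reference, CosineSimilarityLoss regresses the cosine similarity of each (sentence_0, sentence_1) pair onto its float label with the MSELoss criterion above. A minimal training sketch under that setup; the base checkpoint is inferred from the repository name and the example pairs are illustrative, not the card's actual training data:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Assumption: the base checkpoint is the multilingual MiniLM model suggested by the repository name
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Illustrative pairs in the same (sentence_0, sentence_1, label) format as the samples above
train_dataset = Dataset.from_dict({
    "sentence_0": ["비 내릴 때는 장화 신도록 해.", "식사를 거르지 마십시오."],
    "sentence_1": ["비 오는 날에는 장화를 신는 게 좋아요.", "한국 단풍 명소가 알고 싶습니다."],
    "label": [0.9, 0.0],
})

# Cosine similarity of the two sentence embeddings is fit to the label with MSE
loss = losses.CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()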
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 4
  • multi_dataset_batch_sampler: round_robin
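
These non-default values map one-to-one onto SentenceTransformerTrainingArguments; a hedged sketch of the equivalent configuration (the output directory is illustrative):

from sentence_transformers import SentenceTransformerTrainingArguments

# Mirrors the non-default hyperparameters listed above; "output/..." is an illustrative path
args = SentenceTransformerTrainingArguments(
    output_dir="output/paraphrase-multilingual-MiniLM-L12-v2-kor",
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=4,
    multi_dataset_batch_sampler="round_robin",
)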

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss spearman_cosine
0.0973 32 - 0.8955
0.1945 64 - 0.8961
0.2918 96 - 0.8973
0.3891 128 - 0.8996
0.4863 160 - 0.9003
0.5836 192 - 0.9028
0.6809 224 - 0.9031
0.7781 256 - 0.9048
0.8754 288 - 0.9048
0.9726 320 - 0.9051
1.0 329 - 0.9059
1.0699 352 - 0.9064
1.1672 384 - 0.9076
1.2644 416 - 0.9084
1.3617 448 - 0.9075
1.4590 480 - 0.9083
1.5198 500 0.0175 -
1.5562 512 - 0.9078
1.6535 544 - 0.9086
1.7508 576 - 0.9082
1.8480 608 - 0.9090
1.9453 640 - 0.9089
2.0 658 - 0.9092
2.0426 672 - 0.9094
2.1398 704 - 0.9089
2.2371 736 - 0.9088
2.3343 768 - 0.9092
2.4316 800 - 0.9094
2.5289 832 - 0.9091
2.6261 864 - 0.9095
2.7234 896 - 0.9098
2.8207 928 - 0.9107

Framework Versions

  • Python: 3.12.11
  • Sentence Transformers: 5.1.0
  • Transformers: 4.56.0
  • PyTorch: 2.8.0+cu126
  • Accelerate: 1.10.1
  • Datasets: 4.0.0
  • Tokenizers: 0.22.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}