SentenceTransformer
This is a sentence-transformers model trained on the olive-phonetic dataset; the model name and the PeftModelForFeatureExtraction architecture below indicate it is a LoRA (PEFT) adaptation of a BGE-M3 base. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: olive-phonetic
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'PeftModelForFeatureExtraction'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
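Because pooling takes the [CLS] token (pooling_mode_cls_token: True) and the final Normalize() module L2-normalizes the output, cosine similarity between two embeddings reduces to a plain dot product. A minimal sketch of that equivalence, with random vectors standing in for real model outputs:

```python
import torch
import torch.nn.functional as F

# Random 1024-dim vectors standing in for two sentence embeddings.
a = torch.randn(1024)
b = torch.randn(1024)

# L2-normalize, as the model's Normalize() module does.
a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)

# For unit-length vectors, cosine similarity equals the dot product.
cos = F.cosine_similarity(a, b, dim=-1)
dot = a @ b
assert torch.allclose(cos, dot)
```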
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("dkqjrm/bge-m3-olive-phonetic-incremental-lora")

# A product title plus two phonetic brand variants (Korean, Japanese katakana)
sentences = [
    '[운동복세탁] 에코두 프랑스 울세제 울샴푸 니트 속옷세제 750ml x 2개',
    '에코도',
    'バークレイ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 1024)

# Pairwise cosine similarities between all embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)  # tensor of shape (3, 3)
```
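The same encode/similarity API supports a small semantic-search loop, which is the natural use for a phonetic-matching model. A minimal sketch; the corpus and query strings below are illustrative, not taken from the training data:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dkqjrm/bge-m3-olive-phonetic-incremental-lora")

# Illustrative candidate list of brand/product strings.
corpus = ["에코두 울샴푸 750ml", "바클레이", "프랑스 울세제"]
query = "에코도"  # phonetic variant of a brand name

corpus_emb = model.encode(corpus)
query_emb = model.encode(query)

# Embeddings are L2-normalized, so this ranks by cosine similarity.
scores = model.similarity(query_emb, corpus_emb)  # shape: [1, len(corpus)]
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())
```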
Training Details
Training Dataset
olive-phonetic (trained with MultipleNegativesRankingLoss; see the Citation section below)
Evaluation Dataset
olive-phonetic
Training Hyperparameters
Non-Default Hyperparameters
eval_strategy: steps
per_device_train_batch_size: 16
gradient_accumulation_steps: 16
learning_rate: 1e-05
num_train_epochs: 1
lr_scheduler_type: cosine
warmup_ratio: 0.05
fp16: True
push_to_hub: True
batch_sampler: no_duplicates
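Together these imply an effective train batch size of 16 × 16 = 256 examples per optimizer step. The setup can be reconstructed with SentenceTransformerTrainingArguments; a hedged sketch in which output_dir and anything not listed above is illustrative:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-m3-olive-phonetic-incremental-lora",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,  # effective batch size: 16 * 16 = 256
    learning_rate=1e-5,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    fp16=True,
    push_to_hub=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # the 'no_duplicates' sampler
)
```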
All Hyperparameters
overwrite_output_dir: False
do_predict: False
eval_strategy: steps
prediction_loss_only: True
per_device_train_batch_size: 16
per_device_eval_batch_size: 8
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 16
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 1e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 1
max_steps: -1
lr_scheduler_type: cosine
lr_scheduler_kwargs: None
warmup_ratio: 0.05
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
bf16: False
fp16: True
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
parallelism_config: None
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch_fused
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
project: huggingface
trackio_space_id: trackio
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: True
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: None
hub_always_push: False
hub_revision: None
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
include_for_metrics: []
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters:
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
include_tokens_per_second: False
include_num_input_tokens_seen: no
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
use_liger_kernel: False
liger_kernel_config: None
eval_use_gather_object: False
average_tokens_across_devices: True
prompts: None
batch_sampler: no_duplicates
multi_dataset_batch_sampler: proportional
router_mapping: {}
learning_rate_mapping: {}
Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.0073 | 10 | 2.0959 | - |
| 0.0147 | 20 | 2.0514 | - |
| 0.0220 | 30 | 1.8141 | - |
| 0.0294 | 40 | 1.6118 | - |
| 0.0367 | 50 | 1.2453 | - |
| 0.0441 | 60 | 0.8385 | - |
| 0.0514 | 70 | 0.6052 | - |
| 0.0588 | 80 | 0.4456 | - |
| 0.0661 | 90 | 0.4206 | - |
| 0.0735 | 100 | 0.3856 | 0.2105 |
| 0.0808 | 110 | 0.3858 | - |
| 0.0882 | 120 | 0.3064 | - |
| 0.0955 | 130 | 0.3153 | - |
| 0.1029 | 140 | 0.2906 | - |
| 0.1102 | 150 | 0.2974 | - |
| 0.1176 | 160 | 0.293 | - |
| 0.1249 | 170 | 0.2546 | - |
| 0.1323 | 180 | 0.267 | - |
| 0.1396 | 190 | 0.258 | - |
| 0.1470 | 200 | 0.2742 | 0.1423 |
| 0.1543 | 210 | 0.249 | - |
| 0.1617 | 220 | 0.2486 | - |
| 0.1690 | 230 | 0.2543 | - |
| 0.1764 | 240 | 0.249 | - |
| 0.1837 | 250 | 0.2429 | - |
| 0.1911 | 260 | 0.2167 | - |
| 0.1984 | 270 | 0.2419 | - |
| 0.2058 | 280 | 0.2214 | - |
| 0.2131 | 290 | 0.2102 | - |
| 0.2205 | 300 | 0.201 | 0.1156 |
| 0.2278 | 310 | 0.2205 | - |
| 0.2352 | 320 | 0.2109 | - |
| 0.2425 | 330 | 0.1933 | - |
| 0.2499 | 340 | 0.2008 | - |
| 0.2572 | 350 | 0.2041 | - |
| 0.2646 | 360 | 0.1981 | - |
| 0.2719 | 370 | 0.2193 | - |
| 0.2793 | 380 | 0.2111 | - |
| 0.2866 | 390 | 0.1794 | - |
| 0.2940 | 400 | 0.1895 | 0.0982 |
| 0.3013 | 410 | 0.1997 | - |
| 0.3087 | 420 | 0.1683 | - |
| 0.3160 | 430 | 0.1786 | - |
| 0.3234 | 440 | 0.1811 | - |
| 0.3307 | 450 | 0.1785 | - |
| 0.3380 | 460 | 0.1811 | - |
| 0.3454 | 470 | 0.1933 | - |
| 0.3527 | 480 | 0.1774 | - |
| 0.3601 | 490 | 0.1677 | - |
| 0.3674 | 500 | 0.1787 | 0.0855 |
| 0.3748 | 510 | 0.1772 | - |
| 0.3821 | 520 | 0.1551 | - |
| 0.3895 | 530 | 0.1788 | - |
| 0.3968 | 540 | 0.1583 | - |
| 0.4042 | 550 | 0.1529 | - |
| 0.4115 | 560 | 0.1691 | - |
| 0.4189 | 570 | 0.154 | - |
| 0.4262 | 580 | 0.1592 | - |
| 0.4336 | 590 | 0.166 | - |
| 0.4409 | 600 | 0.163 | 0.0780 |
| 0.4483 | 610 | 0.1466 | - |
| 0.4556 | 620 | 0.1579 | - |
| 0.4630 | 630 | 0.1551 | - |
| 0.4703 | 640 | 0.142 | - |
| 0.4777 | 650 | 0.1837 | - |
| 0.4850 | 660 | 0.1494 | - |
| 0.4924 | 670 | 0.1582 | - |
| 0.4997 | 680 | 0.1438 | - |
| 0.5071 | 690 | 0.1387 | - |
| 0.5144 | 700 | 0.1682 | 0.0726 |
| 0.5218 | 710 | 0.1507 | - |
| 0.5291 | 720 | 0.1853 | - |
| 0.5365 | 730 | 0.1392 | - |
| 0.5438 | 740 | 0.1422 | - |
| 0.5512 | 750 | 0.1393 | - |
| 0.5585 | 760 | 0.154 | - |
| 0.5659 | 770 | 0.1375 | - |
| 0.5732 | 780 | 0.1405 | - |
| 0.5806 | 790 | 0.1483 | - |
| 0.5879 | 800 | 0.135 | 0.0690 |
| 0.5953 | 810 | 0.1276 | - |
| 0.6026 | 820 | 0.142 | - |
| 0.6100 | 830 | 0.1368 | - |
| 0.6173 | 840 | 0.1397 | - |
| 0.6247 | 850 | 0.1354 | - |
| 0.6320 | 860 | 0.1397 | - |
| 0.6394 | 870 | 0.1289 | - |
| 0.6467 | 880 | 0.1596 | - |
| 0.6541 | 890 | 0.1266 | - |
| 0.6614 | 900 | 0.1394 | 0.0666 |
| 0.6687 | 910 | 0.1434 | - |
| 0.6761 | 920 | 0.1358 | - |
| 0.6834 | 930 | 0.1301 | - |
| 0.6908 | 940 | 0.1232 | - |
| 0.6981 | 950 | 0.1333 | - |
| 0.7055 | 960 | 0.1554 | - |
| 0.7128 | 970 | 0.14 | - |
| 0.7202 | 980 | 0.1367 | - |
| 0.7275 | 990 | 0.1397 | - |
| 0.7349 | 1000 | 0.1486 | 0.0646 |
| 0.7422 | 1010 | 0.1126 | - |
| 0.7496 | 1020 | 0.1432 | - |
| 0.7569 | 1030 | 0.1234 | - |
| 0.7643 | 1040 | 0.1583 | - |
| 0.7716 | 1050 | 0.1274 | - |
| 0.7790 | 1060 | 0.1314 | - |
| 0.7863 | 1070 | 0.1163 | - |
| 0.7937 | 1080 | 0.1512 | - |
| 0.8010 | 1090 | 0.1392 | - |
| 0.8084 | 1100 | 0.1401 | 0.0638 |
| 0.8157 | 1110 | 0.1366 | - |
| 0.8231 | 1120 | 0.1471 | - |
| 0.8304 | 1130 | 0.1341 | - |
| 0.8378 | 1140 | 0.1495 | - |
| 0.8451 | 1150 | 0.1297 | - |
| 0.8525 | 1160 | 0.146 | - |
| 0.8598 | 1170 | 0.1431 | - |
| 0.8672 | 1180 | 0.1487 | - |
| 0.8745 | 1190 | 0.1291 | - |
| 0.8819 | 1200 | 0.1225 | 0.0631 |
| 0.8892 | 1210 | 0.1291 | - |
| 0.8966 | 1220 | 0.1232 | - |
| 0.9039 | 1230 | 0.1187 | - |
| 0.9113 | 1240 | 0.1662 | - |
| 0.9186 | 1250 | 0.1395 | - |
| 0.9260 | 1260 | 0.1308 | - |
| 0.9333 | 1270 | 0.1493 | - |
| 0.9407 | 1280 | 0.1186 | - |
| 0.9480 | 1290 | 0.1318 | - |
| 0.9554 | 1300 | 0.1364 | 0.0630 |
| 0.9627 | 1310 | 0.1356 | - |
| 0.9701 | 1320 | 0.1458 | - |
| 0.9774 | 1330 | 0.1591 | - |
| 0.9848 | 1340 | 0.1272 | - |
| 0.9921 | 1350 | 0.1166 | - |
| 0.9994 | 1360 | 0.1259 | - |
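Training loss fell from 2.0959 at step 10 to roughly 0.13 by the end of the single epoch, while validation loss decreased monotonically from 0.2105 (step 100) to 0.0630 (step 1300, the final evaluation), with most of the improvement coming in the first ~500 steps.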
Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.2.1
- Transformers: 4.57.6
- PyTorch: 2.10.0+cu128
- Accelerate: 1.12.0
- Datasets: 4.3.0
- Tokenizers: 0.22.2
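For reproducibility, these versions can be pinned at install time. A sketch mirroring the pip command above; note that a CUDA build of PyTorch such as 2.10.0+cu128 must come from the matching PyTorch wheel index for the local machine:

```bash
pip install "sentence-transformers==5.2.1" "transformers==4.57.6" \
    "accelerate==1.12.0" "datasets==4.3.0" "tokenizers==0.22.2"
```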
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```