---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:76932
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-large
widget:
- source_sentence: 'query: ATM Adaptation Layer 2의 약어는 무엇인가요?'
  sentences:
  - 'passage: 2 Transmit 2 Receive (기술)'
  - 'passage: Alternating Current (개념)'
  - 'passage: AAL2 (기술)'
- source_sentence: 'query: AC의 접근 클래스 C0부터 C15까지의 기능은 무엇인가요?'
  sentences:
  - 'passage: Access Class (C0 to C15) (개념)'
  - 'passage: 3 Dimension-Through Silicon Via (기술)'
  - 'passage: ACAP (Conceptual)'
- source_sentence: 'query: What is the abbreviation for Alarm Agent Handling Block?'
  sentences:
  - 'passage: ATM Connection establishment/release Control Block (기술)'
  - 'passage: AAGHB (Technical)'
  - 'passage: Account Card Calling (활용)'
- source_sentence: 'query: ABPL의 ATM 기본 속도 물리 계층 장치는 어떻게 구성되어 있나요?'
  sentences:
  - 'passage: ATM Base Rate Physical Layer Unit (기술)'
  - 'passage: 3A (개념)'
  - 'passage: 5GTF (Conceptual)'
- source_sentence: 'query: How does the triple encryption process of 3-DES enhance
    security?'
  sentences:
  - 'passage: 5th Generation Technical Forum (Conceptual)'
  - 'passage: Triple Data Encryption Standard (Technical)'
  - 'passage: ABCDEF (활용)'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-large
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: e5 eval real
      type: e5-eval-real
    metrics:
    - type: cosine_accuracy@1
      value: 0.8686666666666667
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.969
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9832
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9922
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8686666666666667
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.323
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.19664000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09922000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8686666666666667
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.969
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9832
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9922
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9376619313817377
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9193550000000039
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9197550584627825
      name: Cosine Map@100
---

# SentenceTransformer based on intfloat/multilingual-e5-large

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on the train dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision 0dc5580a448e4284468b8909bae50fa925907bc5 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - train
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
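
Because the final `Normalize()` module L2-normalizes every embedding, cosine similarity and the plain dot product yield identical scores. A minimal sketch to check this (the model id is the same placeholder used in the usage example below; the two inputs are illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")
emb = model.encode([
    "query: What is the abbreviation for Alarm Agent Handling Block?",
    "passage: AAGHB (Technical)",
])

# Each vector should have unit L2 norm, courtesy of the Normalize() module...
print(np.linalg.norm(emb, axis=1))  # ~[1.0, 1.0]
# ...so the dot product already equals the cosine similarity.
print(emb[0] @ emb[1])
```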

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'query: How does the triple encryption process of 3-DES enhance security?',
    'passage: Triple Data Encryption Standard (Technical)',
    'passage: ABCDEF (활용)',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8389, 0.1546],
#         [0.8389, 1.0000, 0.0850],
#         [0.1546, 0.0850, 1.0000]])
```
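
Note the E5 input convention used throughout this card: queries are prefixed with `query: ` and documents with `passage: `. A short semantic-search sketch built on that convention (the corpus entries are illustrative samples taken from the widget examples above):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")

corpus = [
    "passage: AAL2 (기술)",
    "passage: Alternating Current (개념)",
    "passage: Triple Data Encryption Standard (Technical)",
]
corpus_embeddings = model.encode(corpus)

# E5-style models expect the matching prefix on the query side as well
query_embedding = model.encode(["query: ATM Adaptation Layer 2의 약어는 무엇인가요?"])

# Rank passages by cosine similarity to the query
scores = model.similarity(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```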

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Dataset: `e5-eval-real`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8687     |
| cosine_accuracy@3   | 0.969      |
| cosine_accuracy@5   | 0.9832     |
| cosine_accuracy@10  | 0.9922     |
| cosine_precision@1  | 0.8687     |
| cosine_precision@3  | 0.323      |
| cosine_precision@5  | 0.1966     |
| cosine_precision@10 | 0.0992     |
| cosine_recall@1     | 0.8687     |
| cosine_recall@3     | 0.969      |
| cosine_recall@5     | 0.9832     |
| cosine_recall@10    | 0.9922     |
| **cosine_ndcg@10**  | **0.9377** |
| cosine_mrr@10       | 0.9194     |
| cosine_map@100      | 0.9198     |
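
These metrics were produced by `InformationRetrievalEvaluator`. A minimal sketch of running the same evaluation, assuming you hold the queries, corpus, and relevance judgments of `e5-eval-real` as dictionaries (the toy entries below are placeholders, not the real data):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sentence_transformers_model_id")

# Placeholder stand-ins for the real evaluation data
queries = {"q1": "query: ATM Adaptation Layer 2의 약어는 무엇인가요?"}
corpus = {
    "d1": "passage: AAL2 (기술)",
    "d2": "passage: Alternating Current (개념)",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="e5-eval-real",
)
results = evaluator(model)
print(results["e5-eval-real_cosine_ndcg@10"])
```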

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### train

* Dataset: train
* Size: 76,932 training samples
* Columns: <code>0</code> and <code>1</code>
* Approximate statistics based on the first 1000 samples:
  |         | 0                                                                                   | 1                                                                                  |
  |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                             |
  | details | <ul><li>min: 11 tokens</li><li>mean: 19.44 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 12.28 tokens</li><li>max: 27 tokens</li></ul> |
* Samples:
  | 0                                                                    | 1                                                                  |
  |:----------------------------------------------------------------------|:--------------------------------------------------------------------|
  | <code>query: 3D-TSV 기술의 구조는 어떻게 되어 있나요?</code>                         | <code>passage: 3 Dimension-Through Silicon Via (기술)</code>          |
  | <code>query: What is the structure of the 3D-TSV technology?</code> | <code>passage: 3 Dimension-Through Silicon Via (Technical)</code>  |
  | <code>query: 3 Dimension-Through Silicon Via의 줄임말이 뭐죠?</code>         | <code>passage: 3D-TSV (기술)</code>                                   |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
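
With this loss, every other passage in a batch serves as an in-batch negative for each (query, passage) pair, so the batch size of 64 used here gives each query 63 negatives. A minimal sketch of instantiating the loss with the parameters above (the base model is used as a stand-in):

```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-large")

# scale=20.0 and cosine similarity, matching the JSON parameters above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```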

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
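
A hedged sketch of how these settings map onto `SentenceTransformerTrainingArguments` (the output directory and the one-row dataset are placeholders; `eval_strategy: steps` additionally requires an eval dataset or evaluator, omitted here for brevity):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-large")

# Placeholder stand-in for the real 76,932-pair train set with columns "0" and "1"
train_dataset = Dataset.from_dict({
    "0": ["query: 3D-TSV 기술의 구조는 어떻게 되어 있나요?"],
    "1": ["passage: 3 Dimension-Through Silicon Via (기술)"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder; the real path is not recorded in this card
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=1e-5,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # keeps duplicate texts from acting as false negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```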

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch  | Step | Training Loss | e5-eval-real_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------------------:|
| 0.0008 | 1    | 3.1575        | -                           |
| 0.0831 | 100  | 1.6593        | -                           |
| 0.1663 | 200  | 0.1298        | 0.8389                      |
| 0.2494 | 300  | 0.0848        | -                           |
| 0.3325 | 400  | 0.0716        | 0.8808                      |
| 0.4156 | 500  | 0.0504        | -                           |
| 0.4988 | 600  | 0.0421        | 0.9033                      |
| 0.5819 | 700  | 0.042         | -                           |
| 0.6650 | 800  | 0.0398        | 0.9095                      |
| 0.7481 | 900  | 0.0384        | -                           |
| 0.8313 | 1000 | 0.0383        | 0.9111                      |
| 0.9144 | 1100 | 0.0321        | -                           |
| 0.9975 | 1200 | 0.0317        | 0.9186                      |
| 1.0806 | 1300 | 0.0299        | -                           |
| 1.1638 | 1400 | 0.0302        | 0.9161                      |
| 1.2469 | 1500 | 0.025         | -                           |
| 1.3300 | 1600 | 0.0199        | 0.9261                      |
| 1.4131 | 1700 | 0.0179        | -                           |
| 1.4963 | 1800 | 0.0117        | 0.9305                      |
| 1.5794 | 1900 | 0.013         | -                           |
| 1.6625 | 2000 | 0.012         | 0.9308                      |
| 1.7456 | 2100 | 0.0137        | -                           |
| 1.8288 | 2200 | 0.0141        | 0.9309                      |
| 1.9119 | 2300 | 0.0127        | -                           |
| 1.9950 | 2400 | 0.0115        | 0.9332                      |
| 2.0781 | 2500 | 0.0114        | -                           |
| 2.1613 | 2600 | 0.011         | 0.9351                      |
| 2.2444 | 2700 | 0.0107        | -                           |
| 2.3275 | 2800 | 0.0087        | 0.9357                      |
| 2.4106 | 2900 | 0.0084        | -                           |
| 2.4938 | 3000 | 0.0059        | 0.9366                      |
| 2.5769 | 3100 | 0.0062        | -                           |
| 2.6600 | 3200 | 0.0071        | 0.9377                      |
| 2.7431 | 3300 | 0.0072        | -                           |
| 2.8263 | 3400 | 0.0079        | 0.9376                      |
| 2.9094 | 3500 | 0.0071        | -                           |
| 2.9925 | 3600 | 0.0068        | 0.9376                      |
| -1     | -1   | -             | 0.9377                      |

### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 3.6.0
- Tokenizers: 0.22.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->