---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:19380
  - loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-large
widget:
  - source_sentence: 'query: ASC X.12 는 뭔가요?'
    sentences:
      - 'passage: Accredited Standard Committee X.12'
      - 'passage: BCP Measurement and statistics Handling'
      - 'passage: Bearer Inter Working Function'
  - source_sentence: 'query: BECN 뜻 설명해줘.'
    sentences:
      - 'passage: AU Physical Control Block'
      - 'passage: Backward Explicit Congestion Notification'
      - 'passage: Beginning-Of-Tape Marker'
  - source_sentence: 'query: BMD 뜻 설명해줘.'
    sentences:
      - 'passage: 5th Generation Computer'
      - 'passage: Billing Mediation Device'
      - 'passage: 3 Dimensional Television'
  - source_sentence: 'query: 5GL 는 뭔가요?'
    sentences:
      - 'passage: Antenna Front-end Combiner Unit'
      - 'passage: Authentication Center'
      - 'passage: 5th Generation programming Language'
  - source_sentence: 'query: 무슨 뜻이야 BCHB?'
    sentences:
      - 'passage: Assisted-Global Navigation Satellite System'
      - 'passage: BCP Configuration Handler Block'
      - 'passage: ATM Link Processor'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on intfloat/multilingual-e5-large
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: e5 eval real
          type: e5-eval-real
        metrics:
          - type: cosine_accuracy@1
            value: 0.8415
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9715
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.985
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.994
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.8415
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3238333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19700000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09940000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.8415
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9715
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.985
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.994
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9288608308614111
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9068103174603175
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9070514070699416
            name: Cosine Map@100
---

# SentenceTransformer based on intfloat/multilingual-e5-large

This is a sentence-transformers model fine-tuned from intfloat/multilingual-e5-large on the train dataset. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-large
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • train

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
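
The Pooling and Normalize steps above can be sketched in plain PyTorch: mean-pool the token embeddings over the attention mask, then L2-normalize the result. This is an illustrative sketch of what those modules compute, not the sentence-transformers internals:

```python
import torch

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Mean pooling over non-padding tokens, then L2 normalization
    (a conceptual sketch of the Pooling and Normalize modules)."""
    # (batch, seq, 1) mask, broadcast across the embedding dimension
    mask = attention_mask.unsqueeze(-1).float()
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    pooled = summed / counts  # mean over real (unmasked) tokens only
    return torch.nn.functional.normalize(pooled, p=2, dim=1)

# Toy check: batch of 2 sequences, 4 tokens, 1024-dim token embeddings
emb = torch.randn(2, 4, 1024)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
out = mean_pool_and_normalize(emb, mask)
print(out.shape)        # torch.Size([2, 1024])
print(out.norm(dim=1))  # ~[1.0, 1.0] -- unit-length rows
```

Because of the final normalization, cosine similarity between two outputs reduces to a plain dot product.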

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'query: 무슨 뜻이야 BCHB?',
    'passage: BCP Configuration Handler Block',
    'passage: Assisted-Global Navigation Satellite System',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8122, 0.0080],
#         [0.8122, 1.0000, 0.0858],
#         [0.0080, 0.0858, 1.0000]])
```
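
`model.similarity` uses cosine similarity here, and since the model's final `Normalize()` module makes every embedding unit-length, the same matrix is just a dot product. A minimal NumPy sketch of that computation (illustrative, not the library code):

```python
import numpy as np

def cosine_similarity_matrix(a, b):
    """Pairwise cosine similarity between the rows of a and b.
    After L2-normalizing the rows, it is a plain matrix product."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

vecs = np.random.randn(3, 1024)
sims = cosine_similarity_matrix(vecs, vecs)
print(np.round(sims, 4))  # 3x3 symmetric matrix, ones on the diagonal
```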

## Evaluation

### Metrics

#### Information Retrieval

Dataset: `e5-eval-real`

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.8415 |
| cosine_accuracy@3   | 0.9715 |
| cosine_accuracy@5   | 0.985  |
| cosine_accuracy@10  | 0.994  |
| cosine_precision@1  | 0.8415 |
| cosine_precision@3  | 0.3238 |
| cosine_precision@5  | 0.197  |
| cosine_precision@10 | 0.0994 |
| cosine_recall@1     | 0.8415 |
| cosine_recall@3     | 0.9715 |
| cosine_recall@5     | 0.985  |
| cosine_recall@10    | 0.994  |
| cosine_ndcg@10      | 0.9289 |
| cosine_mrr@10       | 0.9068 |
| cosine_map@100      | 0.9071 |
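
All of these metrics derive from the rank at which each query's relevant passage is retrieved. A minimal sketch for the single-relevant-passage case (as in this dataset), where the ideal DCG is 1 and NDCG@k therefore reduces to `1 / log2(rank + 1)`; the function name is illustrative:

```python
import math

def ir_metrics(first_relevant_ranks, k=10):
    """Accuracy@k, MRR@k, and NDCG@k for queries with exactly one
    relevant passage, given its 1-based retrieved rank
    (None if the passage was not retrieved at all)."""
    n = len(first_relevant_ranks)
    hits = [r for r in first_relevant_ranks if r is not None and r <= k]
    accuracy = len(hits) / n                                   # hit rate in top k
    mrr = sum(1.0 / r for r in hits) / n                       # reciprocal ranks
    ndcg = sum(1.0 / math.log2(r + 1) for r in hits) / n       # ideal DCG is 1
    return accuracy, mrr, ndcg

# Toy example: four queries; relevant passage found at ranks 1, 2, 1, 12
acc, mrr, ndcg = ir_metrics([1, 2, 1, 12], k=10)
print(acc, mrr)  # 0.75 0.625  (rank 12 falls outside the top 10)
```

Note that with one relevant passage per query, accuracy@k equals recall@k, which is why those rows in the table match.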

## Training Details

### Training Dataset

#### train

  • Dataset: train
  • Size: 19,380 training samples
  • Columns: 0 and 1
  • Approximate statistics based on the first 1000 samples:

    |         | 0                                                 | 1                                                |
    |:--------|:--------------------------------------------------|:-------------------------------------------------|
    | type    | string                                            | string                                           |
    | details | min: 8 tokens, mean: 12.34 tokens, max: 18 tokens | min: 5 tokens, mean: 10.4 tokens, max: 30 tokens |

  • Samples:

    | 0                         | 1                                             |
    |:--------------------------|:----------------------------------------------|
    | query: (e)CSFB 알려줘      | passage: (enhanced) Circuit Switched Fallback |
    | query: 1000 BASE 알려줘    | passage: 1000 Base Standard                   |
    | query: 1080i 뜻 설명해줘.  | passage: 1080 interlace scan                  |
  • Loss: MultipleNegativesRankingLoss with these parameters:

    ```json
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    ```
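
The idea behind this loss can be sketched in plain PyTorch: within a batch of (query, passage) pairs, each query's own passage is the positive and every other passage in the batch serves as an in-batch negative; pairs are scored by scaled cosine similarity and trained with cross-entropy. A conceptual sketch under those assumptions, not the library implementation:

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(query_emb, passage_emb, scale=20.0):
    """In-batch-negatives loss: passage i is the positive for query i,
    all other passages in the batch are negatives (conceptual sketch)."""
    q = F.normalize(query_emb, dim=1)
    p = F.normalize(passage_emb, dim=1)
    scores = scale * (q @ p.T)             # scaled cosine, shape (batch, batch)
    labels = torch.arange(scores.size(0))  # correct passage sits on the diagonal
    return F.cross_entropy(scores, labels)

torch.manual_seed(0)
q = torch.randn(4, 1024)
loss_matched = multiple_negatives_ranking_loss(q, q + 0.01 * torch.randn(4, 1024))
loss_random = multiple_negatives_ranking_loss(q, torch.randn(4, 1024))
print(loss_matched.item(), loss_random.item())
```

With near-duplicate pairs the loss is close to zero; with unrelated passages it stays near log(batch_size), which is why larger batches (here, 64) give this loss harder negatives for free.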
    

### Training Hyperparameters

#### Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • batch_sampler: no_duplicates
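
The combination of `lr_scheduler_type: cosine` and `warmup_ratio: 0.1` means the learning rate rises linearly over the first 10% of steps and then decays along a half-cosine to zero. A small sketch of that schedule shape (the 909 total steps below are inferred from the training logs, roughly 303 steps/epoch × 3 epochs; this mirrors the usual warmup-then-cosine schedule, not transformers' exact implementation):

```python
import math

def lr_at_step(step, total_steps, base_lr=1e-5, warmup_ratio=0.1):
    """Linear warmup for the first warmup_ratio of training,
    then cosine decay from base_lr down to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 909  # ~303 steps/epoch x 3 epochs at batch size 64 on 19,380 pairs
print(lr_at_step(0, total), lr_at_step(90, total), lr_at_step(total, total))
# 0.0 1e-05 0.0
```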

#### All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

### Training Logs

| Epoch  | Step | Training Loss | e5-eval-real_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------------------:|
| 0.0033 | 1    | 3.477         | -                           |
| 0.3300 | 100  | 1.2356        | 0.8716                      |
| 0.6601 | 200  | 0.0998        | 0.9050                      |
| 0.9901 | 300  | 0.0692        | 0.9154                      |
| 1.3201 | 400  | 0.0552        | 0.9156                      |
| 1.6502 | 500  | 0.0366        | 0.9228                      |
| 1.9802 | 600  | 0.0316        | 0.9267                      |
| 2.3102 | 700  | 0.0269        | 0.9281                      |
| 2.6403 | 800  | 0.0206        | 0.9294                      |
| 2.9703 | 900  | 0.0208        | 0.9286                      |
| -1     | -1   | -             | 0.9289                      |

### Framework Versions

  • Python: 3.12.11
  • Sentence Transformers: 5.1.0
  • Transformers: 4.56.0
  • PyTorch: 2.8.0+cu126
  • Accelerate: 1.10.1
  • Datasets: 3.6.0
  • Tokenizers: 0.22.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```