---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:713743
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
  - source_sentence: 'Abraham Lincoln: Why is the Gettysburg Address so memorable?'
    sentences:
      - 'Abraham Lincoln: Why is the Gettysburg Address so memorable?'
      - What does the Gettysburg Address really mean?
      - What is eatalo.com?
  - source_sentence: >-
      Has the influence of Ancient Carthage in science, math, and society been
      underestimated?
    sentences:
      - How does one earn money online without an investment from home?
      - >-
        Has the influence of Ancient Carthage in science, math, and society been
        underestimated?
      - >-
        Has the influence of the Ancient Etruscans in science and math been
        underestimated?
  - source_sentence: >-
      Is there any app that shares charging to others like share it how we
      transfer files?
    sentences:
      - >-
        How do you think of Chinese claims that the present Private Arbitration
        is illegal, its verdict violates the UNCLOS and is illegal?
      - >-
        Is there any app that shares charging to others like share it how we
        transfer files?
      - >-
        Are there any platforms that provides end-to-end encryption for file
        transfer/ sharing?
  - source_sentence: Why AAP’s MLA Dinesh Mohaniya has been arrested?
    sentences:
      - What are your views on the latest sex scandal by AAP MLA Sandeep Kumar?
      - What is a dc current? What are some examples?
      - Why AAP’s MLA Dinesh Mohaniya has been arrested?
  - source_sentence: What is the difference between economic growth and economic development?
    sentences:
      - >-
        How cold can the Gobi Desert get, and how do its average temperatures
        compare to the ones in the Simpson Desert?
      - the difference between economic growth and economic development is What?
      - What is the difference between economic growth and economic development?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_ndcg@10
  - cosine_mrr@1
  - cosine_mrr@5
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: val
          type: val
        metrics:
          - type: cosine_accuracy@1
            value: 0.828275
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.90535
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.930675
            name: Cosine Accuracy@5
          - type: cosine_precision@1
            value: 0.828275
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3017833333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.186135
            name: Cosine Precision@5
          - type: cosine_recall@1
            value: 0.828275
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.90535
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.930675
            name: Cosine Recall@5
          - type: cosine_ndcg@10
            value: 0.8940991092644636
            name: Cosine Ndcg@10
          - type: cosine_mrr@1
            value: 0.828275
            name: Cosine Mrr@1
          - type: cosine_mrr@5
            value: 0.8685570833333288
            name: Cosine Mrr@5
          - type: cosine_mrr@10
            value: 0.8726829662698361
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.8748315667834753
            name: Cosine Map@100
---

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
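The stack above is simple to reason about: contextual token embeddings from the BERT encoder are mean-pooled under the attention mask, then L2-normalized. A minimal numpy sketch of the pooling and normalization steps (using dummy token embeddings, not real model output):

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Masked mean pooling over the sequence axis, then L2 normalization.

    token_embeddings: (batch, seq_len, dim) array of per-token vectors
    attention_mask:   (batch, seq_len) array, 1 for real tokens, 0 for padding
    """
    mask = attention_mask[..., None].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=1)      # sum only real tokens
    counts = np.clip(mask.sum(axis=1), 1e-9, None)      # avoid division by zero
    pooled = summed / counts
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

# Dummy batch: 2 sentences, 4 token slots, 384-dim vectors (matching this model's dim)
rng = np.random.default_rng(0)
emb = rng.normal(size=(2, 4, 384))
mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])
out = mean_pool_and_normalize(emb, mask)
print(out.shape)  # (2, 384), each row unit-length
```

The `include_prompt` and `pooling_mode_*` flags in the Pooling module above select exactly this masked-mean behavior.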

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("redis/model-b-structured")
# Run inference
sentences = [
    'What is the difference between economic growth and economic development?',
    'What is the difference between economic growth and economic development?',
    'the difference between economic growth and economic development is What?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 0.9999,  0.9999, -0.0738],
#         [ 0.9999,  0.9999, -0.0738],
#         [-0.0738, -0.0738,  1.0000]])
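Because the final Normalize() module makes every embedding unit-length, cosine similarity reduces to a plain dot product, which is why the similarity matrix above can be computed as a single matrix product. A toy numpy illustration with made-up unit vectors:

```python
import numpy as np

# Made-up, already unit-length 4-dim vectors standing in for real 384-dim embeddings
emb = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.6, 0.8, 0.0, 0.0],
])

cosine = emb @ emb.T  # for unit vectors, dot product == cosine similarity
print(cosine)         # diagonal entries are 1.0; off-diagonal entries are 0.6
```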

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.8283
cosine_accuracy@3 0.9053
cosine_accuracy@5 0.9307
cosine_precision@1 0.8283
cosine_precision@3 0.3018
cosine_precision@5 0.1861
cosine_recall@1 0.8283
cosine_recall@3 0.9053
cosine_recall@5 0.9307
cosine_ndcg@10 0.8941
cosine_mrr@1 0.8283
cosine_mrr@5 0.8686
cosine_mrr@10 0.8727
cosine_map@100 0.8748
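For intuition, the rank-based metrics above (accuracy@k, MRR@k) depend only on the 1-based rank at which each query's first relevant document appears. A minimal sketch with a toy set of ranks (hypothetical values, unrelated to the actual evaluation data):

```python
def ir_metrics(first_relevant_ranks, k=5):
    """accuracy@k and MRR@k from the 1-based rank of each query's first relevant hit."""
    n = len(first_relevant_ranks)
    accuracy_at_k = sum(r <= k for r in first_relevant_ranks) / n
    mrr_at_k = sum(1.0 / r if r <= k else 0.0 for r in first_relevant_ranks) / n
    return accuracy_at_k, mrr_at_k

# Toy example: 4 queries whose first relevant document sits at ranks 1, 1, 3, 7
acc, mrr = ir_metrics([1, 1, 3, 7], k=5)
print(acc, mrr)  # acc = 3/4, mrr = (1 + 1 + 1/3 + 0) / 4
```

With exactly one relevant document per query, as in this card's evaluation, accuracy@k, precision@k * k, and recall@k coincide, which is why those rows above share the same values.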

Training Details

Training Dataset

Unnamed Dataset

  • Size: 713,743 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string (min: 6 tokens, mean: 16.07 tokens, max: 53 tokens)
    • positive: string (min: 6 tokens, mean: 16.03 tokens, max: 53 tokens)
    • negative: string (min: 6 tokens, mean: 16.81 tokens, max: 58 tokens)
  • Samples:
    • anchor: Which one is better Linux OS? Ubuntu or Mint?
      positive: Why do you use Linux Mint?
      negative: Which one is not better Linux OS ? Ubuntu or Mint ?
    • anchor: What is flow?
      positive: What is flow?
      negative: What are flow lines?
    • anchor: How is Trump planning to get Mexico to pay for his supposed wall?
      positive: How is it possible for Donald Trump to force Mexico to pay for the wall?
      negative: Why do we connect the positive terminal before the negative terminal to ground in a vehicle battery?
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 7.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
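MultipleNegativesRankingLoss treats each anchor's paired positive as the correct "class" and every other in-batch positive as a negative, applying cross-entropy over cosine similarities multiplied by the scale (7.0 here). A minimal numpy sketch of that objective, not the library's exact implementation (explicit hard negatives, such as this dataset's negative column, would be appended as extra score columns):

```python
import numpy as np

def multiple_negatives_ranking_loss(anchors, positives, scale=7.0):
    """Cross-entropy over scaled cosine similarities with in-batch negatives.

    anchors, positives: (batch, dim), assumed L2-normalized, so the score
    matrix is just anchors @ positives.T; row i's correct "class" is column i.
    """
    scores = scale * (anchors @ positives.T)             # (batch, batch) logits
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # -log p(true positive)

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
a /= np.linalg.norm(a, axis=1, keepdims=True)
p = a + 0.1 * rng.normal(size=(4, 8))                    # positives close to anchors
p /= np.linalg.norm(p, axis=1, keepdims=True)
print(multiple_negatives_ranking_loss(a, p))             # small: pairs rank correctly
```

Larger batch sizes give this loss more in-batch negatives per anchor, one reason the training below uses a batch size of 256.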
    

Evaluation Dataset

Unnamed Dataset

  • Size: 40,000 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string (min: 6 tokens, mean: 15.52 tokens, max: 74 tokens)
    • positive: string (min: 6 tokens, mean: 15.51 tokens, max: 74 tokens)
    • negative: string (min: 6 tokens, mean: 16.79 tokens, max: 69 tokens)
  • Samples:
    • anchor: Why are all my questions on Quora marked needing improvement?
      positive: Why are all my questions immediately being marked as needing improvement?
      negative: For a post-graduate student in IIT, is it allowed to take an external scholarship as a top-up to his/her MHRD assistantship?
    • anchor: Can blue butter fly needle with vaccum tube be reused? Is it HIV risk? . Heard the needle is too small to be reused . Had blood draw at clinic?
      positive: Can blue butter fly needle with vaccum tube be reused? Is it HIV risk? . Heard the needle is too small to be reused . Had blood draw at clinic?
      negative: Can blue butter fly needle with vaccum tube be reused not ? Is it HIV risk ? . Heard the needle is too small to be reused . Had blood draw at clinic ?
    • anchor: Why do people still believe the world is flat?
      positive: Why are there still people who believe the world is flat?
      negative: I'm not able to buy Udemy course .it is not accepting mine and my friends debit card.my card can be used for Flipkart .how to purchase now?
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 7.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • learning_rate: 2e-05
  • weight_decay: 0.0001
  • max_steps: 12000
  • warmup_ratio: 0.1
  • fp16: True
  • dataloader_drop_last: True
  • dataloader_num_workers: 1
  • dataloader_prefetch_factor: 1
  • load_best_model_at_end: True
  • optim: adamw_torch
  • ddp_find_unused_parameters: False
  • push_to_hub: True
  • hub_model_id: redis/model-b-structured
  • eval_on_start: True
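With lr_scheduler_type: linear, warmup_ratio: 0.1, and max_steps: 12000, the learning rate climbs linearly to 2e-05 over the first 1200 steps and then decays linearly to zero. A minimal sketch of that schedule's shape (mirroring the transformers "linear" scheduler, not its exact code):

```python
def linear_schedule_lr(step, base_lr=2e-05, max_steps=12000, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to zero."""
    warmup_steps = int(max_steps * warmup_ratio)  # 1200 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (max_steps - step) / (max_steps - warmup_steps))

print(linear_schedule_lr(600))    # halfway through warmup, about 1e-05
print(linear_schedule_lr(1200))   # peak learning rate, 2e-05
print(linear_schedule_lr(12000))  # end of training, 0.0
```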

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0001
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3.0
  • max_steps: 12000
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 1
  • dataloader_prefetch_factor: 1
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: False
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: redis/model-b-structured
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: True
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss Validation Loss val_cosine_ndcg@10
0 0 - 1.0340 0.8556
0.0897 250 1.1083 0.7666 0.8800
0.1793 500 0.9078 0.6773 0.8870
0.2690 750 0.8464 0.6531 0.8879
0.3587 1000 0.8142 0.6386 0.8886
0.4484 1250 0.7882 0.6274 0.8891
0.5380 1500 0.769 0.6149 0.8896
0.6277 1750 0.7567 0.6090 0.8909
0.7174 2000 0.7444 0.6039 0.8906
0.8070 2250 0.736 0.5974 0.8911
0.8967 2500 0.7283 0.5959 0.8909
0.9864 2750 0.723 0.5911 0.8913
1.0760 3000 0.7136 0.5871 0.8915
1.1657 3250 0.7073 0.5838 0.8912
1.2554 3500 0.7023 0.5825 0.8915
1.3451 3750 0.6988 0.5794 0.8920
1.4347 4000 0.6956 0.5782 0.8920
1.5244 4250 0.692 0.5758 0.8925
1.6141 4500 0.6867 0.5739 0.8925
1.7037 4750 0.6848 0.5734 0.8923
1.7934 5000 0.6828 0.5709 0.8926
1.8831 5250 0.6816 0.5702 0.8925
1.9727 5500 0.6778 0.5681 0.8928
2.0624 5750 0.6731 0.5669 0.8930
2.1521 6000 0.6704 0.5661 0.8931
2.2418 6250 0.6699 0.5653 0.8931
2.3314 6500 0.6679 0.5640 0.8932
2.4211 6750 0.6657 0.5627 0.8933
2.5108 7000 0.6648 0.5624 0.8931
2.6004 7250 0.6605 0.5608 0.8932
2.6901 7500 0.6623 0.5609 0.8934
2.7798 7750 0.6605 0.5592 0.8936
2.8694 8000 0.6605 0.5586 0.8938
2.9591 8250 0.6578 0.5576 0.8936
3.0488 8500 0.6565 0.5572 0.8938
3.1385 8750 0.6542 0.5566 0.8938
3.2281 9000 0.6541 0.5556 0.8939
3.3178 9250 0.6535 0.5555 0.8940
3.4075 9500 0.653 0.5548 0.8941
3.4971 9750 0.6531 0.5543 0.8941
3.5868 10000 0.6498 0.5543 0.8940
3.6765 10250 0.6491 0.5539 0.8940
3.7661 10500 0.6492 0.5541 0.8940
3.8558 10750 0.6504 0.5533 0.8940
3.9455 11000 0.6505 0.5535 0.8943
4.0352 11250 0.6489 0.5532 0.8942
4.1248 11500 0.6459 0.5530 0.8943
4.2145 11750 0.6469 0.5529 0.8941
4.3042 12000 0.6483 0.5529 0.8941

Framework Versions

  • Python: 3.10.18
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 4.4.2
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}