language:
  - en
license: apache-2.0
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:5146
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-mpnet-base-v2
widget:
  - source_sentence: >-
      import subprocess

      zen_of_python = subprocess.check_output(["python", "-c", "import this"])

      corpus = zen_of_python.split()


      num_partitions = 3

      chunk = len(corpus) // num_partitions

      partitions = [

      corpus[i * chunk: (i + 1) * chunk] for i in range(num_partitions)

      ]


      Mapping Data#

      To determine the map phase, we require a map function to use on each
      document.

      The output is the pair (word, 1) for every word found in a document.

      For basic text documents we load as Python strings, the process is as
      follows:


      def map_function(document):

      for word in document.lower().split():

      yield word, 1


      We use the apply_map function on a large collection of documents by
      marking it as a task in Ray using the @ray.remote decorator.

      When we call apply_map, we apply it to three sets of document data
      (num_partitions=3).

      The apply_map function returns three lists, one for each partition so that
      Ray can rearrange the results of the map phase and distribute them to the
      appropriate nodes.


      import ray
    sentences:
      - What does the map_function yield for each word in a document?
      - >-
        What does PBT do differently from traditional hyperparameter tuning
        methods?
      - >-
        What is returned by task_with_static_multiple_returns_good in the Actor
        class?
  - source_sentence: >-
      192.168.0.15 7241 Worker
      ffffffffffffffffffffffffffffffffffffffff0100000001000000 10 MiB
      PINNED_IN_MEMORY (deserialize task arg)

      __main__.f


      192.168.0.15 7207 Driver
      ffffffffffffffffffffffffffffffffffffffff0100000001000000 15 MiB
      USED_BY_PENDING_TASK (put object)

      test.py:

      <module>:28


      While the task is running, we see that ray memory shows both a
      LOCAL_REFERENCE and a USED_BY_PENDING_TASK reference for the object in the
      driver process. The worker process also holds a reference to the object
      because the Python arg is directly referencing the memory in the plasma,
      so it can’t be evicted; therefore it is PINNED_IN_MEMORY.

      4. Serialized ObjectRef references

      @ray.remote

      def f(arg):

      while True:

      pass


      a = ray.put(None)

      b = f.remote([a])
    sentences:
      - How can a dataset be created from in-memory data?
      - What does Algorithm.training_step return for the new API stack?
      - >-
        Why can't the object be evicted while the worker process holds a
        reference?
  - source_sentence: >-
      For distributed systems engineers, Ray automatically handles key
      processes:


      Orchestration–Managing the various components of a distributed system.

      Scheduling–Coordinating when and where tasks are executed.

      Fault tolerance–Ensuring tasks complete regardless of inevitable points of
      failure.

      Auto-scaling–Adjusting the number of resources allocated to dynamic
      demand.



      What you can do with Ray#

      These are some common ML workloads that individuals, organizations, and
      companies leverage Ray to build their AI applications:


      Batch inference on CPUs and GPUs

      Model serving

      Distributed training of large models

      Parallel hyperparameter tuning experiments

      Reinforcement learning

      ML platform




      Ray framework#







      Stack of Ray libraries - unified toolkit for ML workloads.




      Ray’s unified compute framework consists of three layers:
    sentences:
      - What does remote_worker_envs control when num_envs_per_env_runner > 1?
      - How is the learning rate set in the config?
      - >-
        According to the excerpt, what does Ray automatically handle for
        distributed systems engineers?
  - source_sentence: >-
      RLlib component tree#

      The following is the structure of the RLlib component tree, showing under
      which name you can

      access a subcomponent’s own checkpoint within the higher-level checkpoint.
      At the highest level

      is the Algorithm class:

      algorithm/

      learner_group/

      learner/

      rl_module/

      default_policy/ # <- single-agent case

      [module ID 1]/ # <- multi-agent case

      [module ID 2]/ # ...

      env_runner/

      env_to_module_connector/

      module_to_env_connector/


      Note

      The env_runner/ subcomponent currently doesn’t hold a copy of the RLModule

      checkpoint because it’s already saved under learner/. The Ray team is
      working on resolving

      this issue, probably through soft-linking to avoid duplicate files and
      unnecessary disk usage.


      Creating instances from a checkpoint with from_checkpoint#

      Once you have a checkpoint of either a trained Algorithm or

      any of its subcomponents, you can recreate new objects directly

      from this checkpoint.

      The following are two examples:
    sentences:
      - Why does RLlib convert each row into a single-step episode by default?
      - What is at the highest level of the RLlib component tree?
      - >-
        What is recommended regarding AOF when using storage options that do not
        support append operations?
  - source_sentence: >-
      Option 2: Manually Create URL (slower to implement, but recommended for
      production environments)#

      The second option is to manually create this URL by pattern-matching your
      specific use case with one of the following examples.

      This is recommended because it provides finer-grained control over which
      repository branch and commit to use when generating your dependency zip
      file.

      These options prevent consistency issues on Ray Clusters (see the warning
      above for more info).

      To create the URL, pick a URL template below that fits your use case, and
      fill in all parameters in brackets (e.g. [username], [repository], etc.)
      with the specific values from your repository.

      For instance, suppose your GitHub username is example_user, the
      repository’s name is example_repository, and the desired commit hash is
      abcdefg.

      If example_repository is public and you want to retrieve the abcdefg
      commit (which matches the first example use case), the URL would be:
    sentences:
      - What can Ray Train and Ray Tune be used together for?
      - How do you create the URL for Option 2?
      - >-
        Which function can you use to read a CSV file for batch processing in
        Ray?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: Fine-tune-all-mpnet-base-v2
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.5874125874125874
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.6818181818181818
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.7954545454545454
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8863636363636364
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.5874125874125874
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.5180652680652681
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.3944055944055945
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.23199300699300698
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.263986013986014
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.6073717948717948
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.7521853146853147
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8780594405594405
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7386606603331115
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6635614385614379
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.6988731642119342
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.5734265734265734
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.666083916083916
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8006993006993007
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8811188811188811
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.5734265734265734
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.5052447552447552
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.39370629370629373
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.23094405594405593
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.26005244755244755
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.5914918414918414
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.7543706293706294
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8726689976689977
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7303335650898982
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.652235958485958
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.689387057080973
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.5664335664335665
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.666083916083916
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.7797202797202797
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8583916083916084
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.5664335664335665
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.5011655011655011
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.38636363636363635
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.22534965034965035
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.2577214452214452
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.5893065268065268
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.7354312354312353
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8487762237762237
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7167871578299232
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6432942057942053
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.6823584299690649
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.5402097902097902
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.6398601398601399
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.743006993006993
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8304195804195804
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.5402097902097902
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.47960372960372966
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.3678321678321678
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.2181818181818182
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.24519230769230768
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.5623543123543123
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.701048951048951
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8228438228438228
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.6886328428362513
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6146582584082584
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.6543671947827556
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.4353146853146853
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.5332167832167832
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.6311188811188811
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.7622377622377622
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.4353146853146853
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3945221445221445
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.3094405594405594
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.19825174825174827
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.19842657342657344
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.46547202797202797
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.5910547785547785
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.7467948717948718
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.5953015131317417
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.5138784826284825
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.559206100539383
            name: Cosine Map@100

Fine-tune-all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
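The architecture above applies mean pooling over token embeddings and then an L2 normalization step. As a conceptual sketch (using random stand-in token embeddings, not real MPNet outputs, and not the library's exact implementation), the Pooling and Normalize stages amount to:

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Average token vectors where attention_mask == 1, then L2-normalize each row."""
    mask = attention_mask[..., None].astype(float)   # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)   # sum over non-padding tokens
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid division by zero
    pooled = summed / counts                         # mean pooling
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)  # Normalize()

# Stand-in "token embeddings" for a batch of 2 sequences of length 5:
tokens = np.random.default_rng(0).normal(size=(2, 5, 768))
mask = np.array([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])  # second sequence has no padding
out = mean_pool_and_normalize(tokens, mask)
print(out.shape)  # (2, 768)
```

Because of the final normalization, cosine similarity between outputs reduces to a dot product.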

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("thanhpham1/Fine-tune-all-mpnet-base-v2")
# Run inference
sentences = [
    'Option 2: Manually Create URL (slower to implement, but recommended for production environments)#\nThe second option is to manually create this URL by pattern-matching your specific use case with one of the following examples.\nThis is recommended because it provides finer-grained control over which repository branch and commit to use when generating your dependency zip file.\nThese options prevent consistency issues on Ray Clusters (see the warning above for more info).\nTo create the URL, pick a URL template below that fits your use case, and fill in all parameters in brackets (e.g. [username], [repository], etc.) with the specific values from your repository.\nFor instance, suppose your GitHub username is example_user, the repository’s name is example_repository, and the desired commit hash is abcdefg.\nIf example_repository is public and you want to retrieve the abcdefg commit (which matches the first example use case), the URL would be:',
    'How do you create the URL for Option 2?',
    'What can Ray Train and Ray Tune be used together for?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.5874
cosine_accuracy@3 0.6818
cosine_accuracy@5 0.7955
cosine_accuracy@10 0.8864
cosine_precision@1 0.5874
cosine_precision@3 0.5181
cosine_precision@5 0.3944
cosine_precision@10 0.232
cosine_recall@1 0.264
cosine_recall@3 0.6074
cosine_recall@5 0.7522
cosine_recall@10 0.8781
cosine_ndcg@10 0.7387
cosine_mrr@10 0.6636
cosine_map@100 0.6989

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.5734
cosine_accuracy@3 0.6661
cosine_accuracy@5 0.8007
cosine_accuracy@10 0.8811
cosine_precision@1 0.5734
cosine_precision@3 0.5052
cosine_precision@5 0.3937
cosine_precision@10 0.2309
cosine_recall@1 0.2601
cosine_recall@3 0.5915
cosine_recall@5 0.7544
cosine_recall@10 0.8727
cosine_ndcg@10 0.7303
cosine_mrr@10 0.6522
cosine_map@100 0.6894

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.5664
cosine_accuracy@3 0.6661
cosine_accuracy@5 0.7797
cosine_accuracy@10 0.8584
cosine_precision@1 0.5664
cosine_precision@3 0.5012
cosine_precision@5 0.3864
cosine_precision@10 0.2253
cosine_recall@1 0.2577
cosine_recall@3 0.5893
cosine_recall@5 0.7354
cosine_recall@10 0.8488
cosine_ndcg@10 0.7168
cosine_mrr@10 0.6433
cosine_map@100 0.6824

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.5402
cosine_accuracy@3 0.6399
cosine_accuracy@5 0.743
cosine_accuracy@10 0.8304
cosine_precision@1 0.5402
cosine_precision@3 0.4796
cosine_precision@5 0.3678
cosine_precision@10 0.2182
cosine_recall@1 0.2452
cosine_recall@3 0.5624
cosine_recall@5 0.701
cosine_recall@10 0.8228
cosine_ndcg@10 0.6886
cosine_mrr@10 0.6147
cosine_map@100 0.6544

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.4353
cosine_accuracy@3 0.5332
cosine_accuracy@5 0.6311
cosine_accuracy@10 0.7622
cosine_precision@1 0.4353
cosine_precision@3 0.3945
cosine_precision@5 0.3094
cosine_precision@10 0.1983
cosine_recall@1 0.1984
cosine_recall@3 0.4655
cosine_recall@5 0.5911
cosine_recall@10 0.7468
cosine_ndcg@10 0.5953
cosine_mrr@10 0.5139
cosine_map@100 0.5592
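As a rough illustration of how numbers like accuracy@k in the tables above are computed (this is a toy sketch, not the card's actual evaluation code): documents are ranked per query by descending cosine similarity, and accuracy@k is the fraction of queries with at least one relevant document in the top k.

```python
def accuracy_at_k(rankings, relevant, k):
    """Fraction of queries with at least one relevant doc among the top-k ranked docs."""
    hits = 0
    for ranked, rel in zip(rankings, relevant):
        if set(ranked[:k]) & set(rel):
            hits += 1
    return hits / len(rankings)

# Toy example: 2 queries over 4 documents, one relevant document each.
rankings = [[3, 1, 2], [0, 2, 1]]   # doc ids sorted by descending cosine similarity
relevant = [[1], [2]]               # ground-truth relevant doc ids per query
print(accuracy_at_k(rankings, relevant, 1))  # 0.0 (neither top-1 doc is relevant)
print(accuracy_at_k(rankings, relevant, 2))  # 1.0 (both queries hit within top-2)
```

Precision@k, recall@k, MRR, and NDCG are computed from the same per-query rankings.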

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 5,146 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string, min 8 tokens, mean 17.8 tokens, max 41 tokens
    • positive: string, min 66 tokens, mean 225.02 tokens, max 384 tokens
  • Samples:
    anchor: Does Ray Train work with vanilla TensorFlow in addition to TensorFlow with Keras?
    positive: Get Started with Distributed Training using TensorFlow/Keras#
    Ray Train’s TensorFlow integration enables you
    to scale your TensorFlow and Keras training functions to many machines and GPUs.
    On a technical level, Ray Train schedules your training workers
    and configures TF_CONFIG for you, allowing you to run
    your MultiWorkerMirroredStrategy training script. See Distributed
    training with TensorFlow
    for more information.
    Most of the examples in this guide use TensorFlow with Keras, but
    Ray Train also works with vanilla TensorFlow.

    Quickstart#
    import ray
    import tensorflow as tf

    from ray import train
    from ray.train import ScalingConfig
    from ray.train.tensorflow import TensorflowTrainer
    from ray.train.tensorflow.keras import ReportCheckpointCallback

    # If using GPUs, set this to True.
    use_gpu = False

    a = 5
    b = 10
    size = 100

    anchor: What type of failure can Ray automatically recover from?
    positive: Ray can automatically recover from data loss but not owner failure.

    Recovering from data loss#
    When an object value is lost from the object store, such as during node
    failures, Ray will use lineage reconstruction to recover the object.
    Ray will first automatically attempt to recover the value by looking
    for copies of the same object on other nodes. If none are found, then Ray will
    automatically recover the value by re-executing
    the task that previously created the value. Arguments to the task are
    recursively reconstructed through the same mechanism.
    Lineage reconstruction currently has the following limitations:

    anchor: From which directory should you run the zip command to ensure the proper zip file structure?
    positive: Suppose instead you want to host your files in your /some_path/example_dir directory remotely and provide a remote URI.
    You would need to first compress the example_dir directory into a zip file.
    There should be no other files or directories at the top level of the zip file, other than example_dir.
    You can use the following command in the Terminal to do this:
    cd /some_path
    zip -r zip_file_name.zip example_dir

    Note that this command must be run from the parent directory of the desired working_dir to ensure that the resulting zip file contains a single top-level directory.
    In general, the zip file’s name and the top-level directory’s name can be anything.
    The top-level directory’s contents will be used as the working_dir (or py_module).
    You can check that the zip file contains a single top-level directory by running the following command in the Terminal:
    zipinfo -1 zip_file_name.zip
    # example_dir/
    # example_dir/my_file_1.txt
    # example_dir/subdir/my_file_2.txt
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
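Because the model was trained with MatryoshkaLoss over dims [768, 512, 256, 128, 64], its embeddings can be truncated to a prefix dimension and re-normalized at inference time, trading a little retrieval quality (compare the per-dimension NDCG@10 tables above) for smaller, cheaper vectors. A minimal sketch with stand-in vectors, using a hypothetical helper rather than the card's own code (recent sentence-transformers releases also expose a truncate_dim argument on SentenceTransformer for the same purpose):

```python
import numpy as np

def truncate_and_normalize(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of each row and L2-normalize,
    so cosine similarity remains a dot product on the truncated vectors."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / np.clip(norms, 1e-12, None)

rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))          # stand-in for model.encode(...) output
small = truncate_and_normalize(full, 128) # use only the first 128 Matryoshka dims
print(small.shape)  # (3, 128)
```

With matryoshka_weights all equal to 1, each of the five prefix dimensions contributed equally to the training loss.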
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
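One detail worth noting from the settings above: with gradient accumulation, the effective optimization batch size is the per-device batch size times the accumulation steps. (MultipleNegativesRankingLoss still draws its in-batch negatives only from each per-device batch of 32, since each accumulation micro-batch is scored separately.)

```python
# Derived from the non-default hyperparameters listed above.
per_device_train_batch_size = 32
gradient_accumulation_steps = 16
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 512
```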

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
0.9938 10 44.0311 - - - - -
1.0 11 - 0.6797 0.6651 0.6439 0.6180 0.4996
0.9938 10 14.5908 - - - - -
1.0 11 - 0.7179 0.7034 0.6927 0.6658 0.5720
1.8944 20 8.5538 - - - - -
2.0 22 - 0.7295 0.7209 0.7109 0.6793 0.5942
2.7950 30 6.916 - - - - -
3.0 33 - 0.7382 0.7293 0.7149 0.6916 0.5939
3.6957 40 6.5704 - - - - -
4.0 44 - 0.7387 0.7303 0.7168 0.6886 0.5953
  • The saved checkpoint corresponds to the final row (epoch 4.0, step 44), whose NDCG@10 values match the evaluation metrics reported above.
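The step counts in the log are consistent with the dataset size and the batch settings: 5,146 samples at an effective batch of 512 (32 per device × 16 accumulation steps) gives 11 optimizer steps per epoch, matching the Step column.

```python
import math

# Sanity check of the Step column against the training configuration.
dataset_size = 5146            # training samples reported in the card
effective_batch = 32 * 16      # per-device batch size × gradient accumulation steps
steps_per_epoch = math.ceil(dataset_size / effective_batch)
print(steps_per_epoch)  # 11
```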

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}