---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:475
  - loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
  - source_sentence: |-
      Explain how precise exceptions are implemented in
        dynamically-scheduled out-of-order processors.
    sentences:
      - "$ \text{Var}[\\wv^\top \\xx] = \frac1N \\sum_{n=1}^N (\\wv^\top \\xx_n)^2$ %\n"
      - >-
        Only $x_3$ has a positive coefficient in $z$, so we will pivot $x_3$. We
        have $\nearrow x_3 \longrightarrow \ x_3 \leq \; \infty \ (1),\ x_3 \leq
        3\ (2),\ x_3 \leq 2\ (3)$. Thus we use the third equality to pivot $x_3$.
        Hence $x_3=\frac{1}{2}(4+3x_2-s_3)$. And we get \begin{align*}
        \hspace{1cm} x_1 &= 1 + \frac{1}{2}(4+3x_2-s_3) - s_1 \\ s_2 &= 3
        -\frac{1}{2}(4+3x_2-s_3)  + s_1  \\ x_3&=\frac{1}{2}(4+3x_2-s_3) \\
        \cline{1-2} z &= 4 - x_2  +  (4+3x_2-s_3) -  4s_1 \end{align*} That is
        \begin{align*} \hspace{1cm} x_1 &= 3 + \frac{3x_2}{2} -\frac{s_3}{2} -
        s_1 \\ s_2 &= 1 - \frac{3x_2}{2}  +\frac{s_3}{2} + s_1  \\ x_3&= 2+ 
        \frac{3x_2}{2} -\frac{s_3}{2}  \\ \cline{1-2} z &= 8 + 2x_2 - s_3 -
        4s_1 \\ x_1& :=3\text{  }x_2:=0\text{  }x_3:=2\text{  }s_1:=0\text{ 
        }s_2:=1\text{  }s_3:=0 \end{align*}
      - >-
        This does not break compatibility, as the method is private so nobody
        else could call it.
  - source_sentence: >-
      How is it possible to compute the average Precision/Recall curves? Explain
      in detail the

      various steps of the computation.
    sentences:
      - >-
        there are 12 different bigrams (denoting here the whitespace with 'X' to
        better see it): Xc, Xh,

        Xt, at, ca, cu, eX, ha, he, tX, th, ut,
      - |-
        1. Dynamically scheduled processors have universally more
                physical registers than the typical 32 architectural ones and
                they are used for removing WARs and WAW (name
                dependencies). In VLIW processors, the same renaming must be
                done by the compiler and all registers must be architecturally
                visible.
                2. Also, various techniques essential to improve the
                performance of VLIW processors consume more registers (e.g.,
                loop unrolling or loop fusion). 
      - >-
        G consists of syntactic rules.

        G should be complemented with lexical rules with the following format:

        T --> w, where T is a pre-terminal (i.e. a Part-of-Speech tag) and w is
        a terminal (i.e. a word).
  - source_sentence: >-
      You have $1$ Euro and your goal is to exchange it to Swiss francs during
      the next two consecutive days. The exchange rate is an arbitrary function
      from days to real numbers from the interval $[1,W^2]$, where $W\geq 1$ is
      known to the algorithm. More precisely, at  day $1$, you learn the
      exchange rate $x_1 \in [1,W^2]$, where $x_1$ is the amount of Swiss francs
      you can buy from $1$ Euro. You then need to decide between the following
      two options: \begin{enumerate}[label=(\roman*)] \item Trade the whole $1$
      Euro at day $1$ and receive $x_1$ Swiss francs. \item Wait and trade the
      whole $1$ Euro at day $2$ at  exchange rate $x_2 \in [1,W^2]$. The
      exchange rate $x_2$ is known only at day 2, i.e., after you made your
      decision at day 1. \end{enumerate} In the following two subproblems, we
      will analyze the competitive ratio of optimal deterministic algorithms.
      Recall that we say that an online algorithm  is $c$-competitive  if, for
      any $x_1, x_2 \in [1,W^2]$, it exchanges the $1$ Euro into at least $c
      \cdot \max\{x_1, x_2\}$ Swiss francs. Show that any deterministic
      algorithm has  a competitive ratio of at most $1/W$. {\em (In this problem
      you are asked to prove that any deterministic algorithm has a competitive
      ratio of at most $1/W$ for the above problem.    Recall that you are
      allowed to refer to material covered in the lecture notes.)}
    sentences:
      - >-
        We use the idea of the AMS algorithm. We first describe how Alice
        calculates the message $m$. Let $\Alg$ be the following procedure:
        \begin{itemize} \item Select a random $h: [n] \rightarrow \{\pm 1\}$
        $4$-wise independent hash function. $h$ takes $O(\log n)$ bits to store.
        \item Calculate $A = \sum_{i=1}^n h(i) x_i$. \end{itemize} Let
        $t=6/\epsilon^2$. Alice runs $\Alg$ $t$ times. Let $h_i$ and $A_i$ be
        the hash function and the quantity calculated by the $i$-th invocation of
        \Alg. Then Alice transmits the information $h_1, A_1, h_2, A_2, \ldots,
        h_t, A_t$ to Bob. Note that each $h_i$ takes $O(\log n)$ bits to store
        and each $A_i$ is an integer between $-n^2$ and $n^2$ and so it also
        takes $O(\log n)$ bits to store. Therefore the message Alice transmits
        to Bob is $O(\log(n)/\epsilon^2)$ bits. Now Bob calculates the estimate
        $Z$ as follows: \begin{itemize} \item For $\ell = 1, 2, \ldots, t$, let
        $Z_\ell = A_\ell +  \sum_{i=1}^n h_\ell(i) y_i$. \item Output $Z =
        \frac{\sum_{\ell=1}^t Z_\ell^2}{t}.$ \end{itemize} To prove that $Z$
        satisfies~\eqref{eq:guaranteeStream}, we first analyze a single
        $Z_\ell$. First, note that $Z_\ell = A_\ell + \sum_{i=1}^n h_\ell(i)y_i
        = \sum_{i=1}^n h_\ell(i) (x_i + y_i) = \sum_{i=1}^n h_\ell(i) f_i$,
        where we let $f_i = x_i + y_i$. And so $Z_\ell = \sum_{i=1}^n h_\ell(i)
        f_i$ where $h_\ell$ is a random $4$-wise independent hash function. This
        is exactly the setting of the analysis of the AMS streaming algorithm
        seen in class. And so over the random selection of the hash function, we
        know that \begin{align*} \E[Z_\ell^2] = \sum_{i=1}^n f_i^2 = Q
        \end{align*} and \begin{align*} \Var[Z_\ell^2] \leq 2\left( \sum_{i=1}^n
        f_i^2 \right)^2 = 2Q^2\,. \end{align*} Therefore, we have that
        \begin{align*} \E[Z] =  Q \qquad \mbox{and} \qquad \Var[Z] \leq \frac{2
        Q^2}{t}\,. \end{align*} So by Chebyshev's inequality \begin{align*}
        \Pr[|Z- Q| \geq \epsilon Q] \leq \frac{2Q^2/t}{\epsilon^2 Q^2} \leq
        1/3\,, \end{align*} by the selection of $t = 6/\epsilon^2$.
      - False.
      - >-
        Yes, it is possible. d1>d2: without adding any document, it holds true.
        d2>d1: adding d3="aaaa"
  - source_sentence: >-
      One of your colleagues has recently taken over responsibility for a legacy
      codebase, a library currently used by some of your customers. Before
      making functional changes, your colleague found a bug caused by incorrect
      use of the following method in the codebase:


      public class User {
          /** Indicates whether the user’s browser, if any, has JavaScript enabled. */
          public boolean hasJavascriptEnabled() {  }

          //  other methods, such as getName(), getAge(), ...
      }


      Your colleague believes that this is a bad API. You are reviewing the pull
      request your colleague made to fix this bug. After some discussion and
      additional commits to address feedback, the pull request is ready. You can
      either "squash" the pull request into a single commit, or leave the
      multiple commits as they are. Explain in 1 sentence whether you should
      "squash" and why.
    sentences:
      - >-
        causal language modeling

        learns to predict the next word, which you would need to generate a
        story.
      - >-
        It is not suitable as the item is not specified properly ("doesn't
        render well" is not concrete). A bug item has to include details on what
        is wrong with the user experience.
      - >-
        We name “central” the city that we can reach from every other city
        either directly or through exactly one intermediate city.

        Base case (n=2): It obviously holds. Either one of the cities is
        “central”.

        Inductive step: Suppose this property holds for n ≥ 2 cities. We will
        prove that it will still hold for n+1 cities.


        Let n+1 cities, ci, i=0, ..., n, where for every pair of different
        cities ci, cj, there exists a direct route

        (single direction) either from ci to cj or from cj to ci.

        We consider only the first n cities, i.e. cities ci, i=0, ..., n-1.
        According to the inductive hypothesis, there

        exists one central city among these n cities. Let cj be that city.

        We now exclude city cj and consider the rest of the cities. Again, we
        have n cities, therefore there should exist one city among them that is
        central. Let ck be that city.

        All cities apart from cj and ck can reach cj and ck either directly or
        through one intermediate city.

        Furthermore, there exists a route between cj and ck:

        • If the route is directed from cj to ck, then ck is the central city
        for the n+1 cities.

        • If the route is directed from ck to cj, then cj is the central city
        for the n+1 cities.
  - source_sentence: >-
      The data contains information about submissions to a prestigious machine
      learning conference called ICLR. Columns:

      year, paper, authors, ratings, decisions, institution, csranking,
      categories, authors_citations, authors_publications, authors_hindex,
      arxiv. The data is stored in a pandas.DataFrame format. 


      Create two fields called has_top_company and has_top_institution. The
      field has_top_company equals 1 if the article contains an author in the
      following list of companies ["Facebook", "Google", "Microsoft",
      "Deepmind"], and 0 otherwise. The field has_top_institution equals 1 if
      the article contains an author in the top 10 institutions according to
      CSRankings.
    sentences:
      - >-
        Let $S$ be a minimum $s,t$-cut; then the number of edges cut by $S$ is
        $\opt$. We shall exhibit a feasible solution $y$ to the linear program
        such that the value of $y$ is $\opt$. This then implies that $\optlp \leq
        \opt$ as the minimum value of a solution to the linear program is at
        most the value of $y$. Define $y$ as follows: for each $e\in E$
        \begin{align*} y_e = \begin{cases} 1 & \mbox{if $e$ is cut by $S$,}\\ 0
        & \mbox{otherwise.} \end{cases} \end{align*} Notice that, by this
        definition, $\sum_{e\in E} y_e = \opt$. We proceed to show that $y$ is a
        feasible solution: \begin{itemize} \item for each $e\in E$, we have $y_e
        \geq 0$; \item for each $p\in P$, we have $\sum_{e\in p} y_e \geq 1$
        since any path from $s$ to $t$ must exit the set $S$. Indeed, $S$
        contains $s$ but it does not contain $t$, and these edges (that have one
        end point in $S$ and one end point outside of $S$) have $y$-value equal
        to $1$. \end{itemize}
      - '1'
      - >-
        Recall that, in the Hedge algorithm we learned in class, the total loss
        over time is upper bounded by $\sum_{t = 1}^T m_i^t + \frac{\ln
        N}{\epsilon} + \epsilon T$. In the case of investments, we want to do
        almost as good as the best investment. Let $g_i^t$ be the fractional
        change of the value of $i$'th investment at time $t$. I.e., $g_i^t =
        (100 + change(i))/100$, and $p_i^{t+1} = p_i^{t} \cdot g_i^t$. Thus,
        after time $T$, $p_i^{T+1} = p_i^1 \prod_{t = 1}^T g_i^t$. To get an
        analogous bound to that of the Hedge algorithm, we take the logarithm.
        The logarithm of the total gain would be $\sum_{t=1}^T \ln g_i^t$. To
        convert this into a loss, we multiply this by $-1$, which gives a loss
        of $\sum_{t=1}^T (- \ln g_i^t)$. Hence, to do almost as good as the best
        investment, we make our cost vectors to be $m_i^t = - \ln g_i^t$. Now,
        from the analysis of Hedge algorithm in the lecture, it follows that for
        all $i \in [N]$, $$\sum_{t = 1}^T p^{(t)}_i \cdot m^{(t)} \leq \sum_{t =
        1}^{T} m^{(t)}_i + \frac{\ln N}{\epsilon} + \epsilon T.$$ Taking the
        exponent in both sides, We have that \begin{align*} \exp \left( \sum_{t
        = 1}^T p^{(t)}_i \cdot m^{(t)} \right) &\leq \exp \left( \sum_{t =
        1}^{T} m^{(t)}_i + \frac{\ln N}{\epsilon} + \epsilon T \right)\\
        \prod_{t = 1}^T \exp( p^{(t)}_i \cdot m^{(t)} ) &\leq \exp( \ln N /
        \epsilon + \epsilon T) \prod_{t = 1}^T \exp(m^t_i) \\ \prod_{t = 1}^T
        \prod_{i \in [N]} (1 / g_i^t)^{p^{(t)}_i} &\leq \exp( \ln N / \epsilon +
        \epsilon T) \prod_{t = 1}^{T} (1/g^{(t)}_i) \end{align*} Taking the
        $T$-th root on both sides, \begin{align*} \left(\prod_{t = 1}^T \prod_{i
        \in [N]} (1 / g_i^t)^{p^{(t)}_i} \right)^{(1/T)} &\leq \exp( \ln N /
        \epsilon  T + \epsilon ) \left( \prod_{t = 1}^{T} (1/g^{(t)}_i)
        \right)^{(1/T)}. \end{align*} This can be interpreted as the weighted
        geometric mean of the loss is not much worse than the loss of the best
        performing investment.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
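These properties can be verified once the model is loaded; a minimal check (the repo id is taken from the usage example below):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AShi846/all-MiniLM-L6-v2_rag_ft_e-5")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # cosine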

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
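The last two modules can be reproduced by hand with the underlying transformers model; the following is a minimal sketch of what Pooling (with pooling_mode_mean_tokens=True) and Normalize compute, not the library's internal code:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AShi846/all-MiniLM-L6-v2_rag_ft_e-5")
bert = AutoModel.from_pretrained("AShi846/all-MiniLM-L6-v2_rag_ft_e-5")

batch = tokenizer(["An example sentence"], padding=True, truncation=True,
                  max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state  # [batch, seq_len, 384]

# Mean pooling: average the token embeddings, ignoring padding positions
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Normalize(): L2-normalize, so dot products between embeddings are cosine similarities
embedding = torch.nn.functional.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 384])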

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AShi846/all-MiniLM-L6-v2_rag_ft_e-5")
# Run inference
sentences = [
    'The data contains information about submissions to a prestigious machine learning conference called ICLR. Columns:\nyear, paper, authors, ratings, decisions, institution, csranking, categories, authors_citations, authors_publications, authors_hindex, arxiv. The data is stored in a pandas.DataFrame format. \n\nCreate two fields called has_top_company and has_top_institution. The field has_top_company equals 1 if the article contains an author in the following list of companies ["Facebook", "Google", "Microsoft", "Deepmind"], and 0 otherwise. The field has_top_institution equals 1 if the article contains an author in the top 10 institutions according to CSRankings.',
    "Recall that, in the Hedge algorithm we learned in class, the total loss over time is upper bounded by $\\sum_{t = 1}^T m_i^t + \\frac{\\ln N}{\\epsilon} + \\epsilon T$. In the case of investments, we want to do almost as good as the best investment. Let $g_i^t$ be the fractional change of the value of $i$'th investment at time $t$. I.e., $g_i^t = (100 + change(i))/100$, and $p_i^{t+1} = p_i^{t} \\cdot g_i^t$. Thus, after time $T$, $p_i^{T+1} = p_i^1 \\prod_{t = 1}^T g_i^t$. To get an analogous bound to that of the Hedge algorithm, we take the logarithm. The logarithm of the total gain would be $\\sum_{t=1}^T \\ln g_i^t$. To convert this into a loss, we multiply this by $-1$, which gives a loss of $\\sum_{t=1}^T (- \\ln g_i^t)$. Hence, to do almost as good as the best investment, we make our cost vectors to be $m_i^t = - \\ln g_i^t$. Now, from the analysis of Hedge algorithm in the lecture, it follows that for all $i \\in [N]$, $$\\sum_{t = 1}^T p^{(t)}_i \\cdot m^{(t)} \\leq \\sum_{t = 1}^{T} m^{(t)}_i + \\frac{\\ln N}{\\epsilon} + \\epsilon T.$$ Taking the exponent in both sides, We have that \\begin{align*} \\exp \\left( \\sum_{t = 1}^T p^{(t)}_i \\cdot m^{(t)} \\right) &\\leq \\exp \\left( \\sum_{t = 1}^{T} m^{(t)}_i + \\frac{\\ln N}{\\epsilon} + \\epsilon T \\right)\\\\ \\prod_{t = 1}^T \\exp( p^{(t)}_i \\cdot m^{(t)} ) &\\leq \\exp( \\ln N / \\epsilon + \\epsilon T) \\prod_{t = 1}^T \\exp(m^t_i) \\\\ \\prod_{t = 1}^T \\prod_{i \\in [N]} (1 / g_i^t)^{p^{(t)}_i} &\\leq \\exp( \\ln N / \\epsilon + \\epsilon T) \\prod_{t = 1}^{T} (1/g^{(t)}_i) \\end{align*} Taking the $T$-th root on both sides, \\begin{align*} \\left(\\prod_{t = 1}^T \\prod_{i \\in [N]} (1 / g_i^t)^{p^{(t)}_i} \\right)^{(1/T)} &\\leq \\exp( \\ln N / \\epsilon  T + \\epsilon ) \\left( \\prod_{t = 1}^{T} (1/g^{(t)}_i) \\right)^{(1/T)}. \\end{align*} This can be interpreted as the weighted geometric mean of the loss is not much worse than the loss of the best performing investment.",
    '1',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
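Since the embeddings are unit-normalized, these scores are cosine similarities. A small follow-up sketch, continuing the snippet above with an illustrative query of our own:

# Rank the three sentences above against a new query (query text is made up)
query_embedding = model.encode(["How are ICLR submissions represented in the data?"])
scores = model.similarity(query_embedding, embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(best, sentences[best][:60])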

Training Details

Training Dataset

Unnamed Dataset

  • Size: 475 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 475 samples:
    sentence_0: string, min 5 tokens, mean 135.81 tokens, max 256 tokens
    sentence_1: string, min 3 tokens, mean 110.0 tokens, max 256 tokens
    label: float, min 0.1, mean 0.1, max 0.1
  • Samples:
    sentence_0: Assume that your team is discussing the following java code:
        public final class DataStructure {
            public void add(int val) { /*...*/ }
            private boolean isFull() { /*...*/ }
        }
      Your colleagues were changing the parameter type of "add" to an "Integer". Explain whether this breaks backward compatibility and why or why not (also without worrying about whether this is a good or a bad thing).
    sentence_1: D(cat,dog)=2, D(cat,pen)=6, D(cat,table)=6, D(dog,pen)=6, D(dog,table)=6, D(pen,table)=2
    label: 0.1

    sentence_0: If several elements are ready in a reservation station, which one do you think should be selected? \textbf{Very briefly} discuss the options.
    sentence_1: Obama SLOP/1 Election returns document 3; Obama SLOP/2 Election returns documents 3 and 1; Obama SLOP/5 Election returns documents 3, 1, and 2. Thus the values are x=1, x=2, and x=5. Obama = (4: {1 -> [3]}, {2 -> [6]}, {3 -> [2,17]}, {4 -> [1]}); Election = (4: {1 -> [4]}, {2 -> [1,21]}, {3 -> [3]}, {5 -> [16,22,51]})
    label: 0.1

    sentence_0: If process i fails, then eventually all processes j≠i fail. Is the following true? If no process j≠i fails, then process i has failed.
    sentence_1: No, it is almost certain that it would not work. On a dynamically-scheduled processor, the user is not supposed to see the returned value from a speculative load because it will never be committed; the whole idea of the attack is to make speculative use of the result and leave a microarchitectural trace of the value before the instruction is squashed. In Itanium, the returned value of the speculative load instruction is architecturally visible and checking whether the load is valid is left to the compiler which, in fact, might or might not perform such a check. In this context, it would have been a major implementation mistake if the value loaded speculatively under a memory access violation were the true one that the current user is not allowed to access; clearly, the implementa...
    label: 0.1
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
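For reference, this loss computes the cosine similarity between the embeddings of sentence_0 and sentence_1 and regresses it onto the label using the MSELoss above. A minimal sketch of its construction, with model loaded as in the usage section:

from sentence_transformers import losses

# loss_fct defaults to torch.nn.MSELoss(): minimize MSE(cos_sim(u, v), label)
train_loss = losses.CosineSimilarityLoss(model)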
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 5
  • multi_dataset_batch_sampler: round_robin
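A sketch of how these values would be passed at training time, assuming train_dataset holds the 475 (sentence_0, sentence_1, label) samples and train_loss is the loss sketched above (both names are illustrative, not taken from this card):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # hypothetical; the actual output path is not stated
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=train_loss,
)
trainer.train()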

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu126
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}