---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:475
  - loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
  - source_sentence: >-
      We will analyze the $K$-means algorithm and show that it always converge.
      Let us consider the $K$-means objective function: $$
      \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\sum_{n=1}^{N} \sum_{k=1}^{K}
      z_{n k}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right\|_{2}^{2} $$ where
      $z_{n k} \in\{0,1\}$ with $\sum_{k=1}^{K} z_{n k}=1$ and
      $\boldsymbol{\mu}_{k} \in \mathbb{R}^{D}$ for $k=1, \ldots, K$ and $n=1,
      \ldots, N$. How would you choose
      $\left\{\boldsymbol{\mu}_{k}\right\}_{k=1}^{K}$ to minimize
      $\mathcal{L}(\mathbf{z}, \boldsymbol{\mu})$ for given $\left\{z_{n
      k}\right\}_{n, k=1}^{N, K}$ ? Compute the closed-form formula for the
      $\boldsymbol{\mu}_{k}$. To which step of the $K$-means algorithm does it
      correspond?
    sentences:
      - |-
        1. Dynamically scheduled processors have universally more
                physical registers than the typical 32 architectural ones and
                they are used for removing WARs and WAW (name
                dependencies). In VLIW processors, the same renaming must be
                done by the compiler and all registers must be architecturally
                visible.
                2. Also, various techniques essential to improve the
                performance of VLIW processors consume more registers (e.g.,
                loop unrolling or loop fusion). 
      - >-
        O( (f+1)n^2 )b in the binary case, or O( (f+1)n^3 )b in the non-binary
        case
      - >-
        The idea is wrong. Even if the interface remains the same since we are
        dealing with character strings, a decorator does not make sense because
        the class returning JSON cannot be used without this decorator; the
        logic for extracting the weather prediction naturally belongs to the
        weather client in question. It is therefore better to create a class
        containing both the download of the JSON and the extraction of the
        weather forecast.
  - source_sentence: >-
      Estimate the 95% confidence intervals of the geometric mean and the
      arithmetic mean of pageviews using bootstrap resampling. The data is given
      in a pandas.DataFrame called df and the respective column is called
      "pageviews". You can use the scipy.stats python library.
    sentences:
      - >-
        (a) PoS tagging, but also Information Retrieval (IR), Text
        Classification, Information Extraction. For the later, accuracy sounds
        like precision (but it depends on what we actually mean by 'task' (vs.
        subtask)) . (b) a reference must be available, 'correct' and 'incorrect'
        must be clearly defined
      - >-
        [['break+V => breakable\xa0', 'derivational'], ['freeze+V
        =>\xa0frozen\xa0', 'inflectional'], ['translate+V => translation',
        'derivational'], ['cat+N => cats', 'inflectional'], ['modify+V =>
        modifies ', 'inflectional']]
      - Dynamically scheduled out-of-order processors.
  - source_sentence: >-
      The data contains information about submissions to a prestigious machine
      learning conference called ICLR. Columns:

      year, paper, authors, ratings, decisions, institution, csranking,
      categories, authors_citations, authors_publications, authors_hindex,
      arxiv. The data is stored in a pandas.DataFrame format. 


      Create 3 new fields in the dataframe corresponding to the median value of
      the number of citations per author, the number of publications per author,
      and the h-index per author. So for instance, for the row
      authors_publications, you will create an additional column, e.g.
      authors_publications_median, containing the median number of publications
      per author in each paper.
    sentences:
      - >-
        Consider an deterministic online algorithm $\Alg$ and set $x_1 = W$.
        There are two cases depending on whether \Alg trades the $1$ Euro the 
        first day or not.  Suppose first that $\Alg$ trades the Euro at day $1$.
        Then we set $x_2 = W^2$ and so the algorithm is only $W/W^2 = 1/W$
        competitive. For the other case when  \Alg waits for the second day,  we
        set $x_2 = 1$. Then \Alg gets $1$ Swiss franc whereas optimum would get
        $W$ and so the algorithm is only $1/W$ competitive again.
      - >-
        This is an abstraction leak: the notion of JavaScript and even a browser
        is a completely different level of abstraction than users, so this
        method will likely lead to bugs.
      - >-
        We could consider at least two approaches here: either binomial
        confidence interval or t-test. • binomial confidence interval:
        evaluation of a binary classifier (success or not) follow a binomial law
        with parameters (perror,T), where T is the test-set size (157 in the
        above question; is it big enough?). Using normal approximation of the
        binomial law, the width of the confidence interval around estimated
        error probability is q(α)*sqrt(pb*(1-pb)/T),where q(α) is the 1-α
        quantile (for a 1 - α confidence level) and pb is the estimation of
        perror. We here want this confidence interval width to be 0.02, and have
        pb = 0.118 (and 'know' that q(0.05) = 1.96 from normal distribution
        quantile charts); thus we have to solve: (0.02)^2 =
        (1.96)^2*(0.118*(1-0.118))/T Thus T ≃ 1000. • t-test approach: let's
        consider estimating their relative behaviour on each of the test cases
        (i.e. each test estimation subset is of size 1). If the new system as an
        error of 0.098 (= 0.118 - 0.02), it can vary from system 3 between 0.02
        of the test cases (both systems almost always agree but where the new
        system improves the results) and 0.216 of the test cases (the two
        systems never make their errors on the same test case, so they disagree
        on 0.118 + 0.098 of the cases). Thus μ of the t-test is between 0.02 and
        0.216. And s = 0.004 (by assumption, same variance). Thus t is between
        5*sqrt(T) and 54*sqrt(T) which is already bigger than 1.645 for any T
        bigger than 1. So this doesn't help much. So all we can say is that if
        we want to have a (lowest possible) difference of 0.02 we should have at
        least 1/0.02 = 50 test cases ;-) And if we consider that we have 0.216
        difference, then we have at least 5 test cases... The reason why these
        numbers are so low is simply because we here make strong assumptions
        about the test setup: that it is a paired evaluation. In such a case,
        having a difference (0.02) that is 5 times bigger than the standard
        deviation is always statistically significant at a 95% level.
  - source_sentence: >-
      In order to summarize the degree distribution in a single number, would
      you recommend using the average degree? Why, or why not? If not, what
      alternatives can you think of? Please elaborate!
    sentences:
      - >-
        inflectional morphology: no change in the grammatical category (e.g.
        give, given, gave, gives ) derivational morphology: change in category
        (e.g. process, processing, processable, processor, processabilty)
      - "$ \text{Var}[\\wv^\top \\xx] = \frac1N \\sum_{n=1}^N (\\wv^\top \\xx_n)^2$ %\n"
      - |-
        - $E(X) = 0.5$, $Var(X) = 1/12$
        - $E(Y) = 0.5$, $Var(Y) = 1/12$
        - $E(Z) = 0.6$, $Var(Z) = 1/24$
        - $E(K) = 0.6$, $Var(K) = 1/12$
  - source_sentence: |2-
       The [t-statistic](https://en.wikipedia.org/wiki/T-statistic) is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. In a t-test, the higher the t-statistic, the more confidently we can reject the null hypothesis. Use `numpy.random` to create four samples, each of size 30:
      - $X \sim Uniform(0,1)$
      - $Y \sim Uniform(0,1)$
      - $Z = X/2 + Y/2 + 0.1$
      - $K = Y + 0.1$
    sentences:
      - >-
        The simplest solution to produce the indexing set associated with a
        document is to use a stemmer associated with stop lists allowing to
        ignore specific non content bearing terms. In this case, the indexing
        set associated with D might be:


        $I(D)=\{2006$, export, increas, Switzerland, USA $\}$


        A more sophisticated approach would consist in using a lemmatizer in
        which case, the indexing set might be:


        $I(D)=\left\{2006 \_N U M\right.$, export\_Noun, increase\_Verb,
        Switzerland\_ProperNoun, USA\_ProperNoun\}
      - >-
        Including a major bugfix in a minor release instead of a bugfix release
        will cause an incoherent changelog and an inconvenience for users who
        wish to only apply the patch without any other changes. The bugfix could
        be as well an urgent security fix and should not wait to the next minor
        release date.
      - >-
        def get_vocabulary_frequency(documents):     """     It parses the input
        documents and creates a dictionary with the terms and term
        frequencies.          INPUT:     Doc1: hello hello world     Doc2: hello
        friend          OUTPUT:     {'hello': 3,     'world': 1,     'friend':
        1}      :param documents: list of list of str, with the tokenized
        documents.     :return: dict, with keys the words and values the
        frequency of each word.     """     vocabulary = dict()          for
        document in documents:         for word in document:             if word
        in vocabulary:                 vocabulary[word] += 1            
        else:                 vocabulary[word] = 1      return vocabulary
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
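
These properties can be checked directly once the model is loaded; a small sketch using standard SentenceTransformer attributes:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AShi846/fine-tuned-embedding-model")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # cosine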

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
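
The three modules correspond to a BERT encoder, mean pooling over the token embeddings, and L2 normalization. As a minimal sketch (the SentenceTransformer API below is the simpler path), the same embeddings can be reproduced with plain transformers, assuming the checkpoint's transformer weights are loaded directly:

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "AShi846/fine-tuned-embedding-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

def mean_pooling(token_embeddings, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

batch = tokenizer(["An example sentence"], padding=True, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state
embeddings = F.normalize(mean_pooling(token_embeddings, batch["attention_mask"]), p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])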

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("AShi846/fine-tuned-embedding-model")
# Run inference
sentences = [
    ' The [t-statistic](https://en.wikipedia.org/wiki/T-statistic) is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. In a t-test, the higher the t-statistic, the more confidently we can reject the null hypothesis. Use `numpy.random` to create four samples, each of size 30:\n- $X \\sim Uniform(0,1)$\n- $Y \\sim Uniform(0,1)$\n- $Z = X/2 + Y/2 + 0.1$\n- $K = Y + 0.1$',
    'def get_vocabulary_frequency(documents):     """     It parses the input documents and creates a dictionary with the terms and term frequencies.          INPUT:     Doc1: hello hello world     Doc2: hello friend          OUTPUT:     {\'hello\': 3,     \'world\': 1,     \'friend\': 1}      :param documents: list of list of str, with the tokenized documents.     :return: dict, with keys the words and values the frequency of each word.     """     vocabulary = dict()          for document in documents:         for word in document:             if word in vocabulary:                 vocabulary[word] += 1             else:                 vocabulary[word] = 1      return vocabulary',
    'Including a major bugfix in a minor release instead of a bugfix release will cause an incoherent changelog and an inconvenience for users who wish to only apply the patch without any other changes. The bugfix could be as well an urgent security fix and should not wait to the next minor release date.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
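
Since the similarity function is cosine similarity, the same embeddings can also drive semantic search. A short sketch using the library's util.semantic_search helper (the corpus and query below are placeholders):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("AShi846/fine-tuned-embedding-model")
corpus = [
    "K-means alternates between assigning points and recomputing the cluster means.",
    "A decorator wraps an object while keeping the same interface.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("How does the K-means update step work?", convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits[0])  # e.g. [{'corpus_id': 0, 'score': ...}]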

Training Details

Training Dataset

Unnamed Dataset

  • Size: 475 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 475 samples:

                sentence_0             sentence_1             label
    type        string                 string                 float
    details     min: 5 tokens          min: 3 tokens          min: 0.1
                mean: 135.81 tokens    mean: 110.0 tokens     mean: 0.1
                max: 256 tokens        max: 256 tokens        max: 0.1
  • Samples:
    sentence_0 / sentence_1 texts drawn from the dataset (every pair is labeled 0.1):
    - You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{ l l
    - Describe the techniques that typical dynamically scheduled processors use to achieve the same purpose of the following features of Intel Itanium: (a) Predicated execution; (b) advanced loads---that is, loads moved before a store and explicit check for RAW hazards; (c) speculative loads---that is, loads moved before a branch and explicit check for exceptions; (d) rotating register file.
    - Alice and Bob can both apply the AMS sketch with constant precision and failure probability $1/n^2$ to their vectors. Then Charlie subtracts the sketches from each other, obtaining a sketch of the difference. Once the sketch of the difference is available, one can find the special word similarly to the previous problem. (label: 0.1)
    - Design and analyze a polynomial time algorithm for the following problem: \begin{description} \item[INPUT:] An undirected graph $G=(V,E)$. \item[OUTPUT:] A non-negative vertex potential $p(v)\geq 0$ for each vertex $v\in V$ such that \begin{align*} \sum_{v\in S} p(v) \leq E(S, \bar S) \quad \mbox{for every $\emptyset \neq S \subsetneq V$ \quad and \quad $\sum_{v\in V} p(v)$ is maximized.} \end{align*} \end{description} {\small (Recall that $E(S, \bar S)$ denotes the set of edges that cross the cut defined by $S$, i.e., $E(S, \bar S) = {e\in E:
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
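
For reference, a hedged sketch of how a model like this is fine-tuned with CosineSimilarityLoss; the actual 475 training pairs are not published here, so the pairs and scores below are placeholders:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
train_dataset = Dataset.from_dict({
    "sentence_0": ["How do you compute the cluster means in K-means?", "What is predicated execution?"],
    "sentence_1": ["Each mean is the average of the points assigned to it.", "O((f+1)n^2) bits in the binary case."],
    "label": [0.9, 0.1],  # target cosine similarity for each pair
})
# CosineSimilarityLoss drives cosine(sentence_0, sentence_1) towards the label using MSE.
loss = losses.CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()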
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • multi_dataset_batch_sampler: round_robin
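
These map directly onto SentenceTransformerTrainingArguments; a sketch, with an arbitrary placeholder output directory:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="fine-tuned-embedding-model",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)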

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu126
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}