---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:475
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: 'We will analyze the $K$-means algorithm and show that it always
converge. Let us consider the $K$-means objective function: $$ \mathcal{L}(\mathbf{z},
\boldsymbol{\mu})=\sum_{n=1}^{N} \sum_{k=1}^{K} z_{n k}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right\|_{2}^{2}
$$ where $z_{n k} \in\{0,1\}$ with $\sum_{k=1}^{K} z_{n k}=1$ and $\boldsymbol{\mu}_{k}
\in \mathbb{R}^{D}$ for $k=1, \ldots, K$ and $n=1, \ldots, N$. How would you choose
$\left\{\boldsymbol{\mu}_{k}\right\}_{k=1}^{K}$ to minimize $\mathcal{L}(\mathbf{z},
\boldsymbol{\mu})$ for given $\left\{z_{n k}\right\}_{n, k=1}^{N, K}$ ? Compute
the closed-form formula for the $\boldsymbol{\mu}_{k}$. To which step of the $K$-means
algorithm does it correspond?'
sentences:
- "1. Dynamically scheduled processors have universally more\n physical registers\
\ than the typical 32 architectural ones and\n they are used for removing\
\ WARs and WAW (name\n dependencies). In VLIW processors, the same renaming\
\ must be\n done by the compiler and all registers must be architecturally\n\
\ visible.\n 2. Also, various techniques essential to improve the\n\
\ performance of VLIW processors consume more registers (e.g.,\n \
\ loop unrolling or loop fusion). "
- O( (f+1)n^2 )b in the binary case, or O( (f+1)n^3 )b in the non-binary case
- The idea is wrong. Even if the interface remains the same since we are dealing
with character strings, a decorator does not make sense because the class returning
JSON cannot be used without this decorator; the logic for extracting the weather
prediction naturally belongs to the weather client in question. It is therefore
better to create a class containing both the download of the JSON and the extraction
of the weather forecast.
- source_sentence: Estimate the 95% confidence intervals of the geometric mean and
the arithmetic mean of pageviews using bootstrap resampling. The data is given
in a pandas.DataFrame called df and the respective column is called "pageviews".
You can use the scipy.stats python library.
sentences:
- (a) PoS tagging, but also Information Retrieval (IR), Text Classification, Information
Extraction. For the later, accuracy sounds like precision (but it depends on what
we actually mean by 'task' (vs. subtask)) . (b) a reference must be available,
'correct' and 'incorrect' must be clearly defined
- '[[''break+V => breakable\xa0'', ''derivational''], [''freeze+V =>\xa0frozen\xa0'',
''inflectional''], [''translate+V => translation'', ''derivational''], [''cat+N
=> cats'', ''inflectional''], [''modify+V => modifies '', ''inflectional'']]'
- Dynamically scheduled out-of-order processors.
- source_sentence: "The data contains information about submissions to a prestigious\
\ machine learning conference called ICLR. Columns:\nyear, paper, authors, ratings,\
\ decisions, institution, csranking, categories, authors_citations, authors_publications,\
\ authors_hindex, arxiv. The data is stored in a pandas.DataFrame format. \n\n\
Create 3 new fields in the dataframe corresponding to the median value of the\
\ number of citations per author, the number of publications per author, and the\
\ h-index per author. So for instance, for the row authors_publications, you will\
\ create an additional column, e.g. authors_publications_median, containing the\
\ median number of publications per author in each paper."
sentences:
- Consider an deterministic online algorithm $\Alg$ and set $x_1 = W$. There are
two cases depending on whether \Alg trades the $1$ Euro the first day or not. Suppose
first that $\Alg$ trades the Euro at day $1$. Then we set $x_2 = W^2$ and so the
algorithm is only $W/W^2 = 1/W$ competitive. For the other case when \Alg waits
for the second day, we set $x_2 = 1$. Then \Alg gets $1$ Swiss franc whereas
optimum would get $W$ and so the algorithm is only $1/W$ competitive again.
- 'This is an abstraction leak: the notion of JavaScript and even a browser is a
completely different level of abstraction than users, so this method will likely
lead to bugs.'
- 'We could consider at least two approaches here: either binomial confidence interval
or t-test. • binomial confidence interval: evaluation of a binary classifier (success
or not) follow a binomial law with parameters (perror,T), where T is the test-set
size (157 in the above question; is it big enough?). Using normal approximation
of the binomial law, the width of the confidence interval around estimated error
probability is q(α)*sqrt(pb*(1-pb)/T),where q(α) is the 1-α quantile (for a 1
- α confidence level) and pb is the estimation of perror. We here want this confidence
interval width to be 0.02, and have pb = 0.118 (and ''know'' that q(0.05) = 1.96
from normal distribution quantile charts); thus we have to solve: (0.02)^2 = (1.96)^2*(0.118*(1-0.118))/T
Thus T ≃ 1000. • t-test approach: let''s consider estimating their relative behaviour
on each of the test cases (i.e. each test estimation subset is of size 1). If
the new system as an error of 0.098 (= 0.118 - 0.02), it can vary from system
3 between 0.02 of the test cases (both systems almost always agree but where the
new system improves the results) and 0.216 of the test cases (the two systems
never make their errors on the same test case, so they disagree on 0.118 + 0.098
of the cases). Thus μ of the t-test is between 0.02 and 0.216. And s = 0.004 (by
assumption, same variance). Thus t is between 5*sqrt(T) and 54*sqrt(T) which is
already bigger than 1.645 for any T bigger than 1. So this doesn''t help much.
So all we can say is that if we want to have a (lowest possible) difference of
0.02 we should have at least 1/0.02 = 50 test cases ;-) And if we consider that
we have 0.216 difference, then we have at least 5 test cases... The reason why
these numbers are so low is simply because we here make strong assumptions about
the test setup: that it is a paired evaluation. In such a case, having a difference
(0.02) that is 5 times bigger than the standard deviation is always statistically
significant at a 95% level.'
- source_sentence: In order to summarize the degree distribution in a single number,
would you recommend using the average degree? Why, or why not? If not, what alternatives
can you think of? Please elaborate!
sentences:
- 'inflectional morphology: no change in the grammatical category (e.g. give, given,
gave, gives ) derivational morphology: change in category (e.g. process, processing,
processable, processor, processabilty)'
- "$ \text{Var}[\\wv^\top \\xx] = \frac1N \\sum_{n=1}^N (\\wv^\top \\xx_n)^2$ %\n"
- '- $E(X) = 0.5$, $Var(X) = 1/12$
- $E(Y) = 0.5$, $Var(Y) = 1/12$
- $E(Z) = 0.6$, $Var(Z) = 1/24$
- $E(K) = 0.6$, $Var(K) = 1/12$'
- source_sentence: ' The [t-statistic](https://en.wikipedia.org/wiki/T-statistic)
is the ratio of the departure of the estimated value of a parameter from its hypothesized
value to its standard error. In a t-test, the higher the t-statistic, the more
confidently we can reject the null hypothesis. Use `numpy.random` to create four
samples, each of size 30:
- $X \sim Uniform(0,1)$
- $Y \sim Uniform(0,1)$
- $Z = X/2 + Y/2 + 0.1$
- $K = Y + 0.1$'
sentences:
- 'The simplest solution to produce the indexing set associated with a document
is to use a stemmer associated with stop lists allowing to ignore specific non
content bearing terms. In this case, the indexing set associated with D might
be:
$I(D)=\{2006$, export, increas, Switzerland, USA $\}$
A more sophisticated approach would consist in using a lemmatizer in which case,
the indexing set might be:
$I(D)=\left\{2006 \_N U M\right.$, export\_Noun, increase\_Verb, Switzerland\_ProperNoun,
USA\_ProperNoun\}'
- Including a major bugfix in a minor release instead of a bugfix release will cause
an incoherent changelog and an inconvenience for users who wish to only apply
the patch without any other changes. The bugfix could be as well an urgent security
fix and should not wait to the next minor release date.
- 'def get_vocabulary_frequency(documents): """ It parses the input documents
and creates a dictionary with the terms and term frequencies. INPUT: Doc1:
hello hello world Doc2: hello friend OUTPUT: {''hello'': 3, ''world'':
1, ''friend'': 1} :param documents: list of list of str, with the tokenized
documents. :return: dict, with keys the words and values the frequency of
each word. """ vocabulary = dict() for document in documents: for
word in document: if word in vocabulary: vocabulary[word]
+= 1 else: vocabulary[word] = 1 return vocabulary'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
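The trailing `Normalize()` module means every embedding has unit L2 norm, so cosine similarity reduces to a dot product. A minimal sketch (assuming the model id from the Usage section below) to check the properties listed here:
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AShi846/fine-tuned-embedding-model")

print(model.max_seq_length)                      # 256: longer inputs are truncated
print(model.get_sentence_embedding_dimension())  # 384

# The Normalize() module L2-normalizes the mean-pooled token embeddings,
# so each output vector has (approximately) unit length.
embedding = model.encode("a short test sentence")
print(np.linalg.norm(embedding))                 # ~1.0
```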
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("AShi846/fine-tuned-embedding-model")
# Run inference
sentences = [
' The [t-statistic](https://en.wikipedia.org/wiki/T-statistic) is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. In a t-test, the higher the t-statistic, the more confidently we can reject the null hypothesis. Use `numpy.random` to create four samples, each of size 30:\n- $X \\sim Uniform(0,1)$\n- $Y \\sim Uniform(0,1)$\n- $Z = X/2 + Y/2 + 0.1$\n- $K = Y + 0.1$',
'def get_vocabulary_frequency(documents): """ It parses the input documents and creates a dictionary with the terms and term frequencies. INPUT: Doc1: hello hello world Doc2: hello friend OUTPUT: {\'hello\': 3, \'world\': 1, \'friend\': 1} :param documents: list of list of str, with the tokenized documents. :return: dict, with keys the words and values the frequency of each word. """ vocabulary = dict() for document in documents: for word in document: if word in vocabulary: vocabulary[word] += 1 else: vocabulary[word] = 1 return vocabulary',
'Including a major bugfix in a minor release instead of a bugfix release will cause an incoherent changelog and an inconvenience for users who wish to only apply the patch without any other changes. The bugfix could be as well an urgent security fix and should not wait to the next minor release date.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
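The embeddings computed above can also be reused for a simple nearest-neighbour lookup. A small follow-up sketch continuing from the snippet above; the query string is only an illustration:
```python
# Continues from the snippet above: `model`, `sentences` and `embeddings` are reused.
query_embedding = model.encode("How should an urgent bugfix be released?")

# model.similarity() applies the model's similarity function (cosine here).
scores = model.similarity(query_embedding, embeddings)  # shape: [1, 3]
best_idx = scores.argmax().item()
print(sentences[best_idx])
```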
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 475 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 475 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 135.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 110.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.1</li><li>max: 0.1</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{|l|l|l|l||l|} \hline Food & Fondue moitie moitie & Fondue a la tomate & Raclette & Requirement per week \\ \hline Vitamin A [mg/kg] & 35 & 0.5 & 0.5 & 0.5 mg \\ Vitamin B [mg/kg] & 60 & 300 & 0.5 & 15 mg \\ Vitamin C [mg/kg] & 30 & 20 & 70 & 4 mg \\ \hline [price [CHF/kg] & 50 & 75 & 60 & --- \\ \hline \end{tabular} \end{center} Formulate the problem of finding the cheapest combination of the different fondues (moitie moitie \& a la tomate) and Raclette so as to satisfy the weekly nutritional requirement as a linear program.</code> | <code>1. The adjacency graph has ones everywhere except for (i) no<br> edges between exttt{sum} and exttt{i}, and between<br> exttt{sum} and exttt{y\_coord}, and (ii) five on the edge<br> between exttt{x\_coord} and exttt{y\_coord}, and two on the<br> edge between exttt{i} and exttt{y\_coord}.<br> 2. Any of these solution should be optimal either as shown or<br> reversed:<br> <br> - exttt{x\_coord}, exttt{y\_coord}, exttt{i},<br> exttt{j}, exttt{sum}<br> - exttt{j}, exttt{sum}, exttt{x\_coord},<br> exttt{y\_coord}, exttt{i}<br> - exttt{sum}, exttt{j}, exttt{x\_coord},<br> exttt{y\_coord}, exttt{i}<br> 3. Surely, this triad should be adjacent: exttt{x\_coord}, exttt{y\_coord}, exttt{i}.</code> | <code>0.1</code> |
| <code>Describe the techniques that typical dynamically scheduled<br> processors use to achieve the same purpose of the following features<br> of Intel Itanium: (a) Predicated execution; (b) advanced<br> loads---that is, loads moved before a store and explicit check for<br> RAW hazards; (c) speculative loads---that is, loads moved before a<br> branch and explicit check for exceptions; (d) rotating register<br> file.</code> | <code>Alice and Bob can both apply the AMS sketch with constant precision and failure probability $1/n^2$ to their vectors. Then Charlie subtracts the sketches from each other, obtaining a sketch of the difference. Once the sketch of the difference is available, one can find the special word similarly to the previous problem.</code> | <code>0.1</code> |
| <code>Design and analyze a polynomial time algorithm for the following problem: \begin{description} \item[INPUT:] An undirected graph $G=(V,E)$. \item[OUTPUT:] A non-negative vertex potential $p(v)\geq 0$ for each vertex $v\in V$ such that \begin{align*} \sum_{v\in S} p(v) \leq |E(S, \bar S)| \quad \mbox{for every $\emptyset \neq S \subsetneq V$ \quad and \quad $\sum_{v\in V} p(v)$ is maximized.} \end{align*} \end{description} {\small (Recall that $E(S, \bar S)$ denotes the set of edges that cross the cut defined by $S$, i.e., $E(S, \bar S) = \{e\in E: |e\cap S| = |e\cap \bar S| = 1\}$.)} \\[1mm] \noindent Hint: formulate the problem as a large linear program (LP) and then show that the LP can be solved in polynomial time. \\[1mm] {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it returns a correct solution and that it runs in polynomial time. Recall that you are allowed to refer to material covered in the course.) }</code> | <code>precision = tp/(tp+fp) recall = tp/(tp+fn) f_measure = 2*precision*recall/(precision+recall) print('F-measure: ', f_measure)</code> | <code>0.1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
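For reference, a minimal sketch of how a pair-plus-score dataset with this layout is typically fed to `CosineSimilarityLoss`; the example pairs and labels below are invented placeholders, not rows from the actual training data:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Columns mirror the dataset description above: sentence_0, sentence_1, label.
# These rows are made-up placeholders for illustration only.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What does a stemmer do?", "Explain predicated execution."],
    "sentence_1": ["It reduces words to their stems.", "precision = tp / (tp + fp)"],
    "label": [0.9, 0.1],
})

# By default the loss minimizes MSE between cosine(sentence_0, sentence_1) and the label.
loss = CosineSimilarityLoss(model)
```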
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
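A hedged reconstruction of the corresponding training call, reusing `model`, `train_dataset`, and `loss` from the sketch in the previous section; the `output_dir` is a placeholder:
```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="fine-tuned-embedding-model",   # placeholder path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    # round_robin only matters when training on multiple datasets at once.
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```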
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu126
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |