AShi846 committed
Commit 776cc26 · verified · 1 Parent(s): f239c8f

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
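Aside: this pooling config enables only `pooling_mode_mean_tokens`, i.e. masked mean pooling over the 384-dim token embeddings. A minimal sketch of the operation it selects (illustrative only; the tensors here are hypothetical stand-ins, not part of this commit):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 384) transformer outputs
    # attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # sum embeddings of real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens per sentence
    return summed / counts                           # (batch, 384) sentence embeddings
```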
README.md ADDED
@@ -0,0 +1,447 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:475
+ - loss:CosineSimilarityLoss
+ base_model: sentence-transformers/all-MiniLM-L6-v2
+ widget:
+ - source_sentence: 'We will analyze the $K$-means algorithm and show that it always
+ converge. Let us consider the $K$-means objective function: $$ \mathcal{L}(\mathbf{z},
+ \boldsymbol{\mu})=\sum_{n=1}^{N} \sum_{k=1}^{K} z_{n k}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right\|_{2}^{2}
+ $$ where $z_{n k} \in\{0,1\}$ with $\sum_{k=1}^{K} z_{n k}=1$ and $\boldsymbol{\mu}_{k}
+ \in \mathbb{R}^{D}$ for $k=1, \ldots, K$ and $n=1, \ldots, N$. How would you choose
+ $\left\{\boldsymbol{\mu}_{k}\right\}_{k=1}^{K}$ to minimize $\mathcal{L}(\mathbf{z},
+ \boldsymbol{\mu})$ for given $\left\{z_{n k}\right\}_{n, k=1}^{N, K}$ ? Compute
+ the closed-form formula for the $\boldsymbol{\mu}_{k}$. To which step of the $K$-means
+ algorithm does it correspond?'
+ sentences:
+ - "1. Dynamically scheduled processors have universally more\n physical registers\
+ \ than the typical 32 architectural ones and\n they are used for removing\
+ \ WARs and WAW (name\n dependencies). In VLIW processors, the same renaming\
+ \ must be\n done by the compiler and all registers must be architecturally\n\
+ \ visible.\n 2. Also, various techniques essential to improve the\n\
+ \ performance of VLIW processors consume more registers (e.g.,\n \
+ \ loop unrolling or loop fusion). "
+ - O( (f+1)n^2 )b in the binary case, or O( (f+1)n^3 )b in the non-binary case
+ - The idea is wrong. Even if the interface remains the same since we are dealing
+ with character strings, a decorator does not make sense because the class returning
+ JSON cannot be used without this decorator; the logic for extracting the weather
+ prediction naturally belongs to the weather client in question. It is therefore
+ better to create a class containing both the download of the JSON and the extraction
+ of the weather forecast.
+ - source_sentence: Estimate the 95% confidence intervals of the geometric mean and
+ the arithmetic mean of pageviews using bootstrap resampling. The data is given
+ in a pandas.DataFrame called df and the respective column is called "pageviews".
+ You can use the scipy.stats python library.
+ sentences:
+ - (a) PoS tagging, but also Information Retrieval (IR), Text Classification, Information
+ Extraction. For the later, accuracy sounds like precision (but it depends on what
+ we actually mean by 'task' (vs. subtask)) . (b) a reference must be available,
+ 'correct' and 'incorrect' must be clearly defined
+ - '[[''break+V => breakable\xa0'', ''derivational''], [''freeze+V =>\xa0frozen\xa0'',
+ ''inflectional''], [''translate+V => translation'', ''derivational''], [''cat+N
+ => cats'', ''inflectional''], [''modify+V => modifies '', ''inflectional'']]'
+ - Dynamically scheduled out-of-order processors.
+ - source_sentence: "The data contains information about submissions to a prestigious\
+ \ machine learning conference called ICLR. Columns:\nyear, paper, authors, ratings,\
+ \ decisions, institution, csranking, categories, authors_citations, authors_publications,\
+ \ authors_hindex, arxiv. The data is stored in a pandas.DataFrame format. \n\n\
+ Create 3 new fields in the dataframe corresponding to the median value of the\
+ \ number of citations per author, the number of publications per author, and the\
+ \ h-index per author. So for instance, for the row authors_publications, you will\
+ \ create an additional column, e.g. authors_publications_median, containing the\
+ \ median number of publications per author in each paper."
+ sentences:
+ - Consider an deterministic online algorithm $\Alg$ and set $x_1 = W$. There are
+ two cases depending on whether \Alg trades the $1$ Euro the first day or not. Suppose
+ first that $\Alg$ trades the Euro at day $1$. Then we set $x_2 = W^2$ and so the
+ algorithm is only $W/W^2 = 1/W$ competitive. For the other case when \Alg waits
+ for the second day, we set $x_2 = 1$. Then \Alg gets $1$ Swiss franc whereas
+ optimum would get $W$ and so the algorithm is only $1/W$ competitive again.
+ - 'This is an abstraction leak: the notion of JavaScript and even a browser is a
+ completely different level of abstraction than users, so this method will likely
+ lead to bugs.'
+ - 'We could consider at least two approaches here: either binomial confidence interval
+ or t-test. • binomial confidence interval: evaluation of a binary classifier (success
+ or not) follow a binomial law with parameters (perror,T), where T is the test-set
+ size (157 in the above question; is it big enough?). Using normal approximation
+ of the binomial law, the width of the confidence interval around estimated error
+ probability is q(α)*sqrt(pb*(1-pb)/T),where q(α) is the 1-α quantile (for a 1
+ - α confidence level) and pb is the estimation of perror. We here want this confidence
+ interval width to be 0.02, and have pb = 0.118 (and ''know'' that q(0.05) = 1.96
+ from normal distribution quantile charts); thus we have to solve: (0.02)^2 = (1.96)^2*(0.118*(1-0.118))/T
+ Thus T ≃ 1000. • t-test approach: let''s consider estimating their relative behaviour
+ on each of the test cases (i.e. each test estimation subset is of size 1). If
+ the new system as an error of 0.098 (= 0.118 - 0.02), it can vary from system
+ 3 between 0.02 of the test cases (both systems almost always agree but where the
+ new system improves the results) and 0.216 of the test cases (the two systems
+ never make their errors on the same test case, so they disagree on 0.118 + 0.098
+ of the cases). Thus μ of the t-test is between 0.02 and 0.216. And s = 0.004 (by
+ assumption, same variance). Thus t is between 5*sqrt(T) and 54*sqrt(T) which is
+ already bigger than 1.645 for any T bigger than 1. So this doesn''t help much.
+ So all we can say is that if we want to have a (lowest possible) difference of
+ 0.02 we should have at least 1/0.02 = 50 test cases ;-) And if we consider that
+ we have 0.216 difference, then we have at least 5 test cases... The reason why
+ these numbers are so low is simply because we here make strong assumptions about
+ the test setup: that it is a paired evaluation. In such a case, having a difference
+ (0.02) that is 5 times bigger than the standard deviation is always statistically
+ significant at a 95% level.'
+ - source_sentence: In order to summarize the degree distribution in a single number,
+ would you recommend using the average degree? Why, or why not? If not, what alternatives
+ can you think of? Please elaborate!
+ sentences:
+ - 'inflectional morphology: no change in the grammatical category (e.g. give, given,
+ gave, gives ) derivational morphology: change in category (e.g. process, processing,
+ processable, processor, processabilty)'
+ - "$ \text{Var}[\\wv^\top \\xx] = \frac1N \\sum_{n=1}^N (\\wv^\top \\xx_n)^2$ %\n"
+ - '- $E(X) = 0.5$, $Var(X) = 1/12$
+
+ - $E(Y) = 0.5$, $Var(Y) = 1/12$
+
+ - $E(Z) = 0.6$, $Var(Z) = 1/24$
+
+ - $E(K) = 0.6$, $Var(K) = 1/12$'
+ - source_sentence: ' The [t-statistic](https://en.wikipedia.org/wiki/T-statistic)
+ is the ratio of the departure of the estimated value of a parameter from its hypothesized
+ value to its standard error. In a t-test, the higher the t-statistic, the more
+ confidently we can reject the null hypothesis. Use `numpy.random` to create four
+ samples, each of size 30:
+
+ - $X \sim Uniform(0,1)$
+
+ - $Y \sim Uniform(0,1)$
+
+ - $Z = X/2 + Y/2 + 0.1$
+
+ - $K = Y + 0.1$'
+ sentences:
+ - 'The simplest solution to produce the indexing set associated with a document
+ is to use a stemmer associated with stop lists allowing to ignore specific non
+ content bearing terms. In this case, the indexing set associated with D might
+ be:
+
+
+ $I(D)=\{2006$, export, increas, Switzerland, USA $\}$
+
+
+ A more sophisticated approach would consist in using a lemmatizer in which case,
+ the indexing set might be:
+
+
+ $I(D)=\left\{2006 \_N U M\right.$, export\_Noun, increase\_Verb, Switzerland\_ProperNoun,
+ USA\_ProperNoun\}'
+ - Including a major bugfix in a minor release instead of a bugfix release will cause
+ an incoherent changelog and an inconvenience for users who wish to only apply
+ the patch without any other changes. The bugfix could be as well an urgent security
+ fix and should not wait to the next minor release date.
+ - 'def get_vocabulary_frequency(documents): """ It parses the input documents
+ and creates a dictionary with the terms and term frequencies. INPUT: Doc1:
+ hello hello world Doc2: hello friend OUTPUT: {''hello'': 3, ''world'':
+ 1, ''friend'': 1} :param documents: list of list of str, with the tokenized
+ documents. :return: dict, with keys the words and values the frequency of
+ each word. """ vocabulary = dict() for document in documents: for
+ word in document: if word in vocabulary: vocabulary[word]
+ += 1 else: vocabulary[word] = 1 return vocabulary'
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+
+ # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
+ - **Maximum Sequence Length:** 256 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("AShi846/fine-tuned-embedding-model")
+ # Run inference
+ sentences = [
+     ' The [t-statistic](https://en.wikipedia.org/wiki/T-statistic) is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. In a t-test, the higher the t-statistic, the more confidently we can reject the null hypothesis. Use `numpy.random` to create four samples, each of size 30:\n- $X \\sim Uniform(0,1)$\n- $Y \\sim Uniform(0,1)$\n- $Z = X/2 + Y/2 + 0.1$\n- $K = Y + 0.1$',
+     'def get_vocabulary_frequency(documents): """ It parses the input documents and creates a dictionary with the terms and term frequencies. INPUT: Doc1: hello hello world Doc2: hello friend OUTPUT: {\'hello\': 3, \'world\': 1, \'friend\': 1} :param documents: list of list of str, with the tokenized documents. :return: dict, with keys the words and values the frequency of each word. """ vocabulary = dict() for document in documents: for word in document: if word in vocabulary: vocabulary[word] += 1 else: vocabulary[word] = 1 return vocabulary',
+     'Including a major bugfix in a minor release instead of a bugfix release will cause an incoherent changelog and an inconvenience for users who wish to only apply the patch without any other changes. The bugfix could be as well an urgent security fix and should not wait to the next minor release date.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 475 training samples
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
+ * Approximate statistics based on the first 475 samples:
+   |         | sentence_0 | sentence_1 | label |
+   |:--------|:-----------|:-----------|:------|
+   | type    | string     | string     | float |
+   | details | <ul><li>min: 5 tokens</li><li>mean: 135.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 110.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.1</li><li>max: 0.1</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 | label |
+   |:-----------|:-----------|:------|
+   | <code>You have just started your prestigious and important job as the Swiss Cheese Minister. As it turns out, different fondues and raclettes have different nutritional values and different prices: \begin{center} \begin{tabular}{|l|l|l|l||l|} \hline Food & Fondue moitie moitie & Fondue a la tomate & Raclette & Requirement per week \\ \hline Vitamin A [mg/kg] & 35 & 0.5 & 0.5 & 0.5 mg \\ Vitamin B [mg/kg] & 60 & 300 & 0.5 & 15 mg \\ Vitamin C [mg/kg] & 30 & 20 & 70 & 4 mg \\ \hline [price [CHF/kg] & 50 & 75 & 60 & --- \\ \hline \end{tabular} \end{center} Formulate the problem of finding the cheapest combination of the different fondues (moitie moitie \& a la tomate) and Raclette so as to satisfy the weekly nutritional requirement as a linear program.</code> | <code>1. The adjacency graph has ones everywhere except for (i) no<br> edges between \texttt{sum} and \texttt{i}, and between<br> \texttt{sum} and \texttt{y\_coord}, and (ii) five on the edge<br> between \texttt{x\_coord} and \texttt{y\_coord}, and two on the<br> edge between \texttt{i} and \texttt{y\_coord}.<br> 2. Any of these solution should be optimal either as shown or<br> reversed:<br> <br> - \texttt{x\_coord}, \texttt{y\_coord}, \texttt{i},<br> \texttt{j}, \texttt{sum}<br> - \texttt{j}, \texttt{sum}, \texttt{x\_coord},<br> \texttt{y\_coord}, \texttt{i}<br> - \texttt{sum}, \texttt{j}, \texttt{x\_coord},<br> \texttt{y\_coord}, \texttt{i}<br> 3. Surely, this triad should be adjacent: \texttt{x\_coord}, \texttt{y\_coord}, \texttt{i}.</code> | <code>0.1</code> |
+   | <code>Describe the techniques that typical dynamically scheduled<br> processors use to achieve the same purpose of the following features<br> of Intel Itanium: (a) Predicated execution; (b) advanced<br> loads---that is, loads moved before a store and explicit check for<br> RAW hazards; (c) speculative loads---that is, loads moved before a<br> branch and explicit check for exceptions; (d) rotating register<br> file.</code> | <code>Alice and Bob can both apply the AMS sketch with constant precision and failure probability $1/n^2$ to their vectors. Then Charlie subtracts the sketches from each other, obtaining a sketch of the difference. Once the sketch of the difference is available, one can find the special word similarly to the previous problem.</code> | <code>0.1</code> |
+   | <code>Design and analyze a polynomial time algorithm for the following problem: \begin{description} \item[INPUT:] An undirected graph $G=(V,E)$. \item[OUTPUT:] A non-negative vertex potential $p(v)\geq 0$ for each vertex $v\in V$ such that \begin{align*} \sum_{v\in S} p(v) \leq |E(S, \bar S)| \quad \mbox{for every $\emptyset \neq S \subsetneq V$ \quad and \quad $\sum_{v\in V} p(v)$ is maximized.} \end{align*} \end{description} {\small (Recall that $E(S, \bar S)$ denotes the set of edges that cross the cut defined by $S$, i.e., $E(S, \bar S) = \{e\in E: |e\cap S| = |e\cap \bar S| = 1\}$.)} \\[1mm] \noindent Hint: formulate the problem as a large linear program (LP) and then show that the LP can be solved in polynomial time. \\[1mm] {\em (In this problem you are asked to (i) design the algorithm, (ii) show that it returns a correct solution and that it runs in polynomial time. Recall that you are allowed to refer to material covered in the course.) }</code> | <code>precision = tp/(tp+fp) recall = tp/(tp+fn) f_measure = 2*precision*recall/(precision+recall) print('F-measure: ', f_measure)</code> | <code>0.1</code> |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
+   ```json
+   {
+       "loss_fct": "torch.nn.modules.loss.MSELoss"
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: no
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 3
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `tp_size`: 0
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Framework Versions
+ - Python: 3.12.8
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.51.3
+ - PyTorch: 2.6.0+cu126
+ - Accelerate: 1.3.0
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
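For reference, a hedged sketch of a training run consistent with the card's loss and non-default hyperparameters (`CosineSimilarityLoss`, batch size 16, 3 epochs, Sentence Transformers 3.4.x trainer API); the one-row dataset below is a stand-in, since the actual 475 pairs are not published with this commit:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Stand-in for the unpublished 475-sample dataset (column names per the card).
train_dataset = Dataset.from_dict({
    "sentence_0": ["How would you choose the cluster means in K-means?"],
    "sentence_1": ["Set each mean to the centroid of its assigned points."],
    "label": [0.1],
})

args = SentenceTransformerTrainingArguments(
    output_dir="fine-tuned-embedding-model",
    num_train_epochs=3,              # matches the card
    per_device_train_batch_size=16,  # matches the card
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=CosineSimilarityLoss(model),  # MSE between cosine similarity and label
)
trainer.train()
```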
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.51.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
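This backbone config appears to be the stock 6-layer MiniLM encoder, architecturally unchanged from the base model. A quick, assumed sanity check one could run (repo ID taken from the README):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("AShi846/fine-tuned-embedding-model")
assert config.hidden_size == 384              # matches word_embedding_dimension above
assert config.num_hidden_layers == 6          # the "L6" in MiniLM-L6
assert config.max_position_embeddings == 512  # above the 256-token limit enforced downstream
```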
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.51.3",
+     "pytorch": "2.6.0+cu126"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
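Since `similarity_fn_name` is `cosine` and the module pipeline (declared in `modules.json` below) ends in `Normalize()`, embeddings come out unit-length, so cosine similarity reduces to a plain dot product. A small check, again assuming the repo ID from the README:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AShi846/fine-tuned-embedding-model")
emb = model.encode(["first sentence", "second sentence"])

print(np.linalg.norm(emb, axis=1))  # ~[1. 1.] thanks to the Normalize module
print(emb @ emb.T)                  # matches model.similarity(emb, emb) up to dtype
```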
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3612e3c54dc40da7459a80e4371423d5f2d4c385c763813cfdf2d71c39bd773c
+ size 90864192
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
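`modules.json` declares the three-stage pipeline that `SentenceTransformer` assembles on load. An equivalent manual construction with `sentence_transformers.models` looks roughly like this (a sketch of what loading does, not the required way to use the repo):

```python
from sentence_transformers import SentenceTransformer, models

# Stage 0: BERT encoder, truncating at 256 tokens (see sentence_bert_config.json)
word = models.Transformer("AShi846/fine-tuned-embedding-model", max_seq_length=256)
# Stage 1: masked mean pooling over 384-dim token embeddings (see 1_Pooling/config.json)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
# Stage 2: L2-normalize, so cosine similarity equals the dot product
norm = models.Normalize()

model = SentenceTransformer(modules=[word, pool, norm])
```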
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 256,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 128,
+   "model_max_length": 256,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
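Worth noting: the tokenizer lowercases input (`do_lower_case: true`) and caps sequences at `model_max_length` 256, consistent with `sentence_bert_config.json`. A brief, assumed usage check (repo ID from the README):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("AShi846/fine-tuned-embedding-model")

print(tok.tokenize("Hello World"))         # ['hello', 'world'] after lowercasing
enc = tok("word " * 500, truncation=True)  # deliberately overlong input
print(len(enc["input_ids"]))               # 256, capped at model_max_length
```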
vocab.txt ADDED
The diff for this file is too large to render. See raw diff