Lysandrec committed on
Commit a571099 · verified · 1 Parent(s): e19b61e

Push fine-tuned retriever model (all-MiniLM-L6-v2 base)

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,503 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:100000
+ - loss:TripletLoss
+ base_model: sentence-transformers/all-MiniLM-L6-v2
+ widget:
+ - source_sentence: 'Consider the set of points $S = \{(x,y) : x \text{ and } y \text{
+ are non-negative integers } \leq n\}$. Find the number of squares that can be
+ formed with vertices belonging to $S$ and sides parallel to the axes.'
+ sentences:
+ - <page_title> Waller Plan </page_title> <path> Waller_Plan > City plan </path>
+ <section_title> City plan </section_title> <content> The plan also designated
+ spaces for a hospital, an academy and university, churches, a courthouse and jail,
+ an armory, and a penitentiary.With the surveying and grid plan completed, Waller
+ and his associates drew up a plat dividing the city blocks into land lots. The
+ first auction of lots was held on August 1, 1839, under a group of live oak trees
+ in what was to be the city's southwestern public square; these trees have since
+ been known as the "Auction Oaks". The auction raised $182,585 (equivalent to $5,018,000
+ in 2022), funds used to pay for the construction of government buildings for the
+ new capital city. </content>
+ - <page_title> Heilbronn triangle problem </page_title> <path> Heilbronn_triangle_problem
+ > Specific shapes and numbers </path> <section_title> Specific shapes and numbers
+ </section_title> <content> Goldberg (1972) has investigated the optimal arrangements
+ of n {\displaystyle n} points in a square, for n {\displaystyle n} up to 16. Goldberg's
+ constructions for up to six points lie on the boundary of the square, and are
+ placed to form an affine transformation of the vertices of a regular polygon.
+ For larger values of n {\displaystyle n} , Comellas & Yebra (2002) improved Goldberg's
+ bounds, and for these values the solutions include points interior to the square.
+ These constructions have been proven optimal for up to seven points. </content>
+ - '<page_title> 14:9 aspect ratio </page_title> <path> 14:9_aspect_ratio > Mathematics
+ </path> <section_title> Mathematics </section_title> <content> The aspect ratio
+ of 14:9 (1.555...) is the arithmetic mean (average) of 16:9 and 4:3 (12:9), (
+ ( 16 / 9 ) + ( 12 / 9 ) ) ÷ 2 = 14 / 9 {\displaystyle ((16/9)+(12/9))\div 2=14/9}
+ . More practically, it is approximately the geometric mean (the precise geometric
+ mean is ( 16 / 9 ) × ( 4 / 3 ) ≈ 1.5396 ≈ 13.8: 9 {\displaystyle {\sqrt {(16/9)\times
+ (4/3)}}\approx 1.5396\approx 13.8:9} ), and in this sense is mathematically a
+ compromise between these two aspect ratios: two equal area pictures (at 16:9 and
+ 4:3) will intersect in a box with aspect ratio the geometric mean, as demonstrated
+ in the image at top (14:9 is just slightly wider than the intersection). In this
+ way 14:9 balances the needs of both 16:9 and 4:3, cropping or distorting both
+ about equally. Similar considerations were used in the choice of 16:9 by the SMPTE,
+ which balanced 2.35:1 and 4:3. </content>'
+ - source_sentence: Solve the equation \(7k^2 + 9k + 3 = d \cdot 7^a\) for positive
+ integers \(k\), \(d\), and \(a\), where \(d < 7\).
+ sentences:
+ - <page_title> Rational square </page_title> <path> Square_numbers > Properties
+ </path> <section_title> Properties </section_title> <content> Three squares are
+ not sufficient for numbers of the form 4k(8m + 7). A positive integer can be represented
+ as a sum of two squares precisely if its prime factorization contains no odd powers
+ of primes of the form 4k + 3. This is generalized by Waring's problem. </content>
+ - '<page_title> Personally Controlled Electronic Health Record </page_title> <path>
+ Personally_Controlled_Electronic_Health_Record > Registration > Healthcare Identifiers
+ Service (HI Service) </path> <section_title> Healthcare Identifiers Service (HI
+ Service) </section_title> <content> The Healthcare Identifiers Service (HI Service)
+ was established by the federal, state and territory governments to create unique
+ identifiers for healthcare providers and individuals seeking healthcare. It was
+ designed and implemented by Medicare Australia under the control of the NEHTA.
+ The HI Service allocates three types of Healthcare Identifiers: Individual healthcare
+ identifier (i.e., who received the service) The Individual Healthcare Identifier
+ (IHI) is a unique 16 digit reference number that is used to identify individuals
+ within the healthcare system. The healthcare provider can retrieve a registered
+ patients IHI via the Healthcare Identifier Service by entering in the correct
+ name, DOB, and Medicare number which will automatically retrieve the patients
+ unique IHI from the system. </content>'
+ - '<page_title> Systematic sampling </page_title> <path> Systematic_sampling </path>
+ <section_title> Summary </section_title> <content> We want to give unit A a 20%
+ probability of selection, unit B a 40% probability, and so on up to unit E (100%).
+ Assuming we maintain alphabetical order, we allocate each unit to the following
+ interval: A: 0 to 0.2 B: 0.2 to 0.6 (= 0.2 + 0.4) C: 0.6 to 1.2 (= 0.6 + 0.6)
+ D: 1.2 to 2.0 (= 1.2 + 0.8) E: 2.0 to 3.0 (= 2.0 + 1.0) If our random start was
+ 0.156, we would first select the unit whose interval contains this number (i.e.
+ A). Next, we would select the interval containing 1.156 (element C), then 2.156
+ (element E). If instead our random start was 0.350, we would select from points
+ 0.350 (B), 1.350 (D), and 2.350 (E). </content>'
+ - source_sentence: 'Given the linear transformation \( T: \mathbb{R}^3 \rightarrow
+ \mathbb{R}^3 \) defined by \( T(x) = A(x) \) where \( A = \begin{pmatrix} 1 &
+ 1 & 1 \\ 0 & 1 & 2 \\ 1 & 2 & 2 \end{pmatrix} \), find the inverse transformation
+ \( T^{-1}(x) \).'
+ sentences:
+ - <page_title> Genome sequence </page_title> <path> Genomic_sequence > Eukaryotic
+ genomes </path> <section_title> Eukaryotic genomes </section_title> <content>
+ In addition to the chromosomes in the nucleus, organelles such as the chloroplasts
+ and mitochondria have their own DNA. Mitochondria are sometimes said to have their
+ own genome often referred to as the "mitochondrial genome". The DNA found within
+ the chloroplast may be referred to as the "plastome". </content>
+ - <page_title> Right realism </page_title> <path> Right_realism > Overview > Rational
+ choice theory </path> <section_title> Rational choice theory </section_title>
+ <content> For example, in 1960 the steering columns of all cars in Germany were
+ equipped with locks and the result was a 60 per cent reduction in car thefts.
+ Whereas, in Great Britain only new cars were so equipped with the result being
+ crime was displaced to the older unequipped cars. However, no evidence exists
+ to suggest that an obscene phone caller will begin a career as a burglar. In response,
+ Akers (1990) says that rational choice theorists make so many exceptions to the
+ pure rationality stressed in their own models that nothing sets them apart from
+ other theorists. Further, the rational choice models in literature have various
+ situational or cognitive constraints and deterministic notions of cause and effect
+ that render them, "...indistinguishable from current 'etiological' or 'positivist'
+ theories." </content>
+ - '<page_title> Elementary row operations </page_title> <path> Row_operations >
+ Elementary row operations > Row-switching transformations > Properties </path>
+ <section_title> Properties </section_title> <content> The inverse of this matrix
+ is itself: T i , j − 1 = T i , j . {\displaystyle T_{i,j}^{-1}=T_{i,j}.} Since
+ the determinant of the identity matrix is unity, det ( T i , j ) = − 1. {\displaystyle
+ \det(T_{i,j})=-1.} </content>'
+ - source_sentence: 'If |x - 5| = 23 what is the sum of all the values of x.
+
+ A. A)46
+
+ B. B)10
+
+ C. C)56
+
+ D. D)-46
+
+ E. E)28'
+ sentences:
+ - <page_title> BMX racing </page_title> <path> BMX_racing > General rules of advancement
+ in organized BMX racing > Professionals </path> <section_title> Professionals
+ </section_title> <content> For example, if a rider participates in 13 national
+ events, their best 10 will be considered and their worst three disregarded. This
+ qualification must be met on the national level to wear National numbers one through
+ ten on the number plate the following year. </content>
+ - <page_title> Construction of the real numbers </page_title> <path> Constructions_of_real_numbers
+ > Axiomatic definitions > Axioms > On models </path> <section_title> On models
+ </section_title> <content> f(x +ℝ y) = f(x) +S f(y) and f(x ×ℝ y) = f(x) ×S f(y),
+ for all x and y in R . {\displaystyle \mathbb {R} .} x ≤ℝ y if and only if f(x)
+ ≤S f(y), for all x and y in R . {\displaystyle \mathbb {R} .} </content>
+ - <page_title> Rod calculus </page_title> <path> Rod_calculus > Subtraction > Without
+ borrowing </path> <section_title> Without borrowing </section_title> <content>
+ In situation in which no borrowing is needed, one only needs to take the number
+ of rods in the subtrahend from the minuend. The result of the calculation is the
+ difference. The adjacent image shows the steps in subtracting 23 from 54. </content>
+ - source_sentence: For some constant $b$, if the minimum value of \[f(x)=\dfrac{x^2-2x+b}{x^2+2x+b}\]
+ is $\tfrac12$, what is the maximum value of $f(x)$?
+ sentences:
+ - <page_title> Lagrangian multiplier </page_title> <path> Lagrange_multiplier >
+ Examples > Example 1 </path> <section_title> Example 1 </section_title> <content>
+ Evaluating the objective function f at these points yields f ( 2 2 , 2 2 ) = 2
+ , f ( − 2 2 , − 2 2 ) = − 2 . {\displaystyle f\left({\tfrac {\sqrt {2\ }}{2}},{\tfrac
+ {\sqrt {2\ }}{2}}\right)={\sqrt {2\ }}\ ,\qquad f\left(-{\tfrac {\sqrt {2\ }}{2}},-{\tfrac
+ {\sqrt {2\ }}{2}}\right)=-{\sqrt {2\ }}~.} Thus the constrained maximum is 2 {\displaystyle
+ \ {\sqrt {2\ }}\ } and the constrained minimum is − 2 {\displaystyle -{\sqrt {2}}}
+ . </content>
+ - '<page_title> Second degree polynomial </page_title> <path> Quadratic_function
+ > Graph of the univariate function > Vertex > Maximum and minimum points </path>
+ <section_title> Maximum and minimum points </section_title> <content> Using calculus,
+ the vertex point, being a maximum or minimum of the function, can be obtained
+ by finding the roots of the derivative: f ( x ) = a x 2 + b x + c ⇒ f ′ ( x )
+ = 2 a x + b {\displaystyle f(x)=ax^{2}+bx+c\quad \Rightarrow \quad f''(x)=2ax+b}
+ x is a root of f ''(x) if f ''(x) = 0 resulting in x = − b 2 a {\displaystyle
+ x=-{\frac {b}{2a}}} with the corresponding function value f ( x ) = a ( − b 2
+ a ) 2 + b ( − b 2 a ) + c = c − b 2 4 a , {\displaystyle f(x)=a\left(-{\frac {b}{2a}}\right)^{2}+b\left(-{\frac
+ {b}{2a}}\right)+c=c-{\frac {b^{2}}{4a}},} so again the vertex point coordinates,
+ (h, k), can be expressed as ( − b 2 a , c − b 2 4 a ) . {\displaystyle \left(-{\frac
+ {b}{2a}},c-{\frac {b^{2}}{4a}}\right).} </content>'
+ - '<page_title> Dimer model </page_title> <path> Domino_tiling > Counting tilings
+ of regions </path> <section_title> Counting tilings of regions </section_title>
+ <content> The number of ways to cover an m × n {\displaystyle m\times n} rectangle
+ with m n 2 {\displaystyle {\frac {mn}{2}}} dominoes, calculated independently
+ by Temperley & Fisher (1961) and Kasteleyn (1961), is given by (sequence A099390
+ in the OEIS) When both m and n are odd, the formula correctly reduces to zero
+ possible domino tilings. A special case occurs when tiling the 2 × n {\displaystyle
+ 2\times n} rectangle with n dominoes: the sequence reduces to the Fibonacci sequence.Another
+ special case happens for squares with m = n = 0, 2, 4, 6, 8, 10, 12, ... is These
+ numbers can be found by writing them as the Pfaffian of an m n × m n {\displaystyle
+ mn\times mn} skew-symmetric matrix whose eigenvalues can be found explicitly.
+ This technique may be applied in many mathematics-related subjects, for example,
+ in the classical, 2-dimensional computation of the dimer-dimer correlator function
+ in statistical mechanics. The number of tilings of a region is very sensitive
+ to boundary conditions, and can change dramatically with apparently insignificant
+ changes in the shape of the region. This is illustrated by the number of tilings
+ of an Aztec diamond of order n, where the number of tilings is 2(n + 1)n/2. If
+ this is replaced by the "augmented Aztec diamond" of order n with 3 long rows
+ in the middle rather than 2, the number of tilings drops to the much smaller number
+ D(n,n), a Delannoy number, which has only exponential rather than super-exponential
+ growth in n. For the "reduced Aztec diamond" of order n with only one long middle
+ row, there is only one tiling. </content>'
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+
+ # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
+ - **Maximum Sequence Length:** 384 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
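+
+ The `Pooling` module performs attention-mask-aware mean pooling over the token embeddings, and `Normalize()` L2-normalizes the result. As a rough sketch (not a snippet shipped with this repository), the same embedding can be reproduced with `transformers` directly:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from transformers import AutoTokenizer, AutoModel
+
+ tokenizer = AutoTokenizer.from_pretrained("Lysandrec/MNLP_M2_document_encoder")
+ model = AutoModel.from_pretrained("Lysandrec/MNLP_M2_document_encoder")
+
+ encoded = tokenizer(["example sentence"], padding=True, truncation=True, max_length=384, return_tensors="pt")
+ with torch.no_grad():
+     token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 384)
+
+ # Mean pooling: average the token embeddings, ignoring padding positions
+ mask = encoded["attention_mask"].unsqueeze(-1).float()
+ sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
+
+ # L2 normalization, matching the Normalize() module
+ sentence_embedding = F.normalize(sentence_embedding, p=2, dim=1)
+ print(sentence_embedding.shape)  # torch.Size([1, 384])
+ ```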
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("Lysandrec/MNLP_M2_document_encoder")
+ # Run inference
+ sentences = [
+     'For some constant $b$, if the minimum value of \\[f(x)=\\dfrac{x^2-2x+b}{x^2+2x+b}\\] is $\\tfrac12$, what is the maximum value of $f(x)$?',
+     "<page_title> Second degree polynomial </page_title> <path> Quadratic_function > Graph of the univariate function > Vertex > Maximum and minimum points </path> <section_title> Maximum and minimum points </section_title> <content> Using calculus, the vertex point, being a maximum or minimum of the function, can be obtained by finding the roots of the derivative: f ( x ) = a x 2 + b x + c ⇒ f ′ ( x ) = 2 a x + b {\\displaystyle f(x)=ax^{2}+bx+c\\quad \\Rightarrow \\quad f'(x)=2ax+b} x is a root of f '(x) if f '(x) = 0 resulting in x = − b 2 a {\\displaystyle x=-{\\frac {b}{2a}}} with the corresponding function value f ( x ) = a ( − b 2 a ) 2 + b ( − b 2 a ) + c = c − b 2 4 a , {\\displaystyle f(x)=a\\left(-{\\frac {b}{2a}}\\right)^{2}+b\\left(-{\\frac {b}{2a}}\\right)+c=c-{\\frac {b^{2}}{4a}},} so again the vertex point coordinates, (h, k), can be expressed as ( − b 2 a , c − b 2 4 a ) . {\\displaystyle \\left(-{\\frac {b}{2a}},c-{\\frac {b^{2}}{4a}}\\right).} </content>",
+     '<page_title> Lagrangian multiplier </page_title> <path> Lagrange_multiplier > Examples > Example 1 </path> <section_title> Example 1 </section_title> <content> Evaluating the objective function f at these points yields f ( 2 2 , 2 2 ) = 2 , f ( − 2 2 , − 2 2 ) = − 2 . {\\displaystyle f\\left({\\tfrac {\\sqrt {2\\ }}{2}},{\\tfrac {\\sqrt {2\\ }}{2}}\\right)={\\sqrt {2\\ }}\\ ,\\qquad f\\left(-{\\tfrac {\\sqrt {2\\ }}{2}},-{\\tfrac {\\sqrt {2\\ }}{2}}\\right)=-{\\sqrt {2\\ }}~.} Thus the constrained maximum is 2 {\\displaystyle \\ {\\sqrt {2\\ }}\\ } and the constrained minimum is − 2 {\\displaystyle -{\\sqrt {2}}} . </content>',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
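+
+ Because the model was trained as a question-to-passage retriever, a common follow-up is to rank candidate documents for a query. A minimal sketch, reusing the `model` loaded above (the query and passages below are made-up placeholders):
+
+ ```python
+ query = "What is the vertex of a quadratic function?"
+ docs = [
+     "<page_title> Quadratic function </page_title> <content> The vertex of a parabola ... </content>",
+     "<page_title> BMX racing </page_title> <content> For example, if a rider participates ... </content>",
+ ]
+
+ query_emb = model.encode([query])
+ doc_embs = model.encode(docs)
+
+ # Cosine similarity between the query and each document (embeddings are already L2-normalized)
+ scores = model.similarity(query_emb, doc_embs)  # shape [1, len(docs)]
+ best = scores.argmax().item()
+ print(docs[best])
+ ```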
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 100,000 training samples
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0 | sentence_1 | sentence_2 |
+   |:--------|:-----------|:-----------|:-----------|
+   | type    | string     | string     | string     |
+   | details | <ul><li>min: 13 tokens</li><li>mean: 64.12 tokens</li><li>max: 214 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 205.95 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 63 tokens</li><li>mean: 178.5 tokens</li><li>max: 384 tokens</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 | sentence_2 |
+   |:-----------|:-----------|:-----------|
+   | <code>The average of first five prime numbers greater than 61 is?<br>A. A)32.2<br>B. B)32.98<br>C. C)74.6<br>D. D)32.8<br>E. E)32.4</code> | <code><page_title> 61 (number) </page_title> <path> 61_(number) > In mathematics </path> <section_title> In mathematics </section_title> <content> 61 is: the 18th prime number. a twin prime with 59. a cuban prime of the form p = x3 − y3/x − y, where x = y + 1. the smallest proper prime, a prime p which ends in the digit 1 in base 10 and whose reciprocal in base 10 has a repeating sequence with length p − 1. In such primes, each digit 0, 1, ..., 9 appears in the repeating sequence the same number of times as does each other digit (namely, p − 1/10 times). </content></code> | <code><page_title> Astatine </page_title> <path> Element_85 > Characteristics > Chemical </path> <section_title> Chemical </section_title> <content> In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. The first ionization energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionization energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008). </content></code> |
+   | <code>A charitable association sold an average of 66 raffle tickets per member. Among the female members, the average was 70 raffle tickets. The male to female ratio of the association is 1:2. What was the average number E of tickets sold by the male members of the association<br>A. A)50<br>B. B)56<br>C. C)58<br>D. D)62<br>E. E)66</code> | <code><page_title> RSA number </page_title> <path> RSA_numbers </path> <section_title> Summary </section_title> <content> Cash prizes of varying size, up to US$200,000 (and prizes up to $20,000 awarded), were offered for factorization of some of them. The smallest RSA number was factored in a few days. Most of the numbers have still not been factored and many of them are expected to remain unfactored for many years to come. </content></code> | <code><page_title> Peer learning </page_title> <path> Peer_learning > Connections with other practices > Connectivism </path> <section_title> Connectivism </section_title> <content> Yochai Benkler explains how the now-ubiquitous computer helps us produce and process knowledge together with others in his book, The Wealth of Networks. George Siemens argues in Connectivism: A Learning Theory for the Digital Age, that technology has changed the way we learn, explaining how it tends to complicate or expose the limitations of the learning theories of the past. In practice, the ideas of connectivism developed in and alongside the then-new social formation, "massive open online courses" or MOOCs. Connectivism proposes that the knowledge we can access by virtue of our connections with others is just as valuable as the information carried inside our minds. </content></code> |
+   | <code>Find prime numbers \(a, b, c, d, e\) such that \(a^4 + b^4 + c^4 + d^4 + e^4 = abcde\).</code> | <code><page_title> Pythagorean triangle </page_title> <path> Primitive_Pythagorean_triple > Special cases and related equations > The Jacobi–Madden equation </path> <section_title> The Jacobi–Madden equation </section_title> <content> The equation, a 4 + b 4 + c 4 + d 4 = ( a + b + c + d ) 4 {\displaystyle a^{4}+b^{4}+c^{4}+d^{4}=(a+b+c+d)^{4}} is equivalent to the special Pythagorean triple, ( a 2 + a b + b 2 ) 2 + ( c 2 + c d + d 2 ) 2 = ( ( a + b ) 2 + ( a + b ) ( c + d ) + ( c + d ) 2 ) 2 {\displaystyle (a^{2}+ab+b^{2})^{2}+(c^{2}+cd+d^{2})^{2}=((a+b)^{2}+(a+b)(c+d)+(c+d)^{2})^{2}} There is an infinite number of solutions to this equation as solving for the variables involves an elliptic curve. Small ones are, a , b , c , d = − 2634 , 955 , 1770 , 5400 {\displaystyle a,b,c,d=-2634,955,1770,5400} a , b , c , d = − 31764 , 7590 , 27385 , 48150 {\displaystyle a,b,c,d=-31764,7590,27385,48150} </content></code> | <code><page_title> Pythagorean triple </page_title> <path> Pythagorean_triples > Special cases and related equations > Descartes' Circle Theorem </path> <section_title> Descartes' Circle Theorem </section_title> <content> For the case of Descartes' circle theorem where all variables are squares, 2 ( a 4 + b 4 + c 4 + d 4 ) = ( a 2 + b 2 + c 2 + d 2 ) 2 {\displaystyle 2(a^{4}+b^{4}+c^{4}+d^{4})=(a^{2}+b^{2}+c^{2}+d^{2})^{2}} Euler showed this is equivalent to three simultaneous Pythagorean triples, ( 2 a b ) 2 + ( 2 c d ) 2 = ( a 2 + b 2 − c 2 − d 2 ) 2 {\displaystyle (2ab)^{2}+(2cd)^{2}=(a^{2}+b^{2}-c^{2}-d^{2})^{2}} ( 2 a c ) 2 + ( 2 b d ) 2 = ( a 2 − b 2 + c 2 − d 2 ) 2 {\displaystyle (2ac)^{2}+(2bd)^{2}=(a^{2}-b^{2}+c^{2}-d^{2})^{2}} ( 2 a d ) 2 + ( 2 b c ) 2 = ( a 2 − b 2 − c 2 + d 2 ) 2 {\displaystyle (2ad)^{2}+(2bc)^{2}=(a^{2}-b^{2}-c^{2}+d^{2})^{2}} There is also an infinite number of solutions, and for the special case when a + b = c {\displaystyle a+b=c} , then the equation simplifi...</code> |
+ * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
+   ```json
+   {
+       "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
+       "triplet_margin": 5
+   }
+   ```
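+
+ For reference, a loss configured with these parameters would be constructed roughly as follows (a sketch, not the exact training script used for this checkpoint):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, losses
+
+ model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
+
+ # Euclidean distance with a margin of 5, matching the parameters above
+ loss = losses.TripletLoss(
+     model=model,
+     distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
+     triplet_margin=5,
+ )
+ ```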
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `num_train_epochs`: 1
+ - `multi_dataset_batch_sampler`: round_robin
+
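+ Combined with the loss defined above, a run using these non-default settings might look roughly like the sketch below; the dataset here is a one-row placeholder standing in for the 100,000 (question, positive passage, negative passage) triplets, and the actual training script is not part of this repository.
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
+
+ # Placeholder triplet dataset: anchor question, positive passage, negative passage
+ train_dataset = Dataset.from_dict({
+     "sentence_0": ["If |x - 5| = 23 what is the sum of all the values of x."],
+     "sentence_1": ["<page_title> Absolute value </page_title> <content> ... </content>"],
+     "sentence_2": ["<page_title> BMX racing </page_title> <content> ... </content>"],
+ })
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="output",
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=64,
+     num_train_epochs=1,
+     multi_dataset_batch_sampler="round_robin",
+ )
+
+ trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
+ trainer.train()
+ ```
+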
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: no
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `tp_size`: 0
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Training Logs
+ | Epoch  | Step | Training Loss |
+ |:------:|:----:|:-------------:|
+ | 0.3199 | 500  | 4.0855        |
+ | 0.6398 | 1000 | 3.9274        |
+ | 0.9597 | 1500 | 3.9199        |
+
+
+ ### Framework Versions
+ - Python: 3.12.8
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.51.3
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### TripletLoss
+ ```bibtex
+ @misc{hermans2017defense,
+     title={In Defense of the Triplet Loss for Person Re-Identification},
+     author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
+     year={2017},
+     eprint={1703.07737},
+     archivePrefix={arXiv},
+     primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.52.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.1.0",
+     "transformers": "4.52.2",
+     "pytorch": "2.7.0"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1c9378ff83690dc2283e8030b03e1d33e55195033f70b506b389bf044f43ad6
+ size 90864192
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 384,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 128,
+   "model_max_length": 384,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff