blachang28 committed on
Commit 7c241b9 · verified · 1 Parent(s): 0f8fb33

Add new SentenceTransformer model

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+{
+  "word_embedding_dimension": 768,
+  "pooling_mode_cls_token": false,
+  "pooling_mode_mean_tokens": true,
+  "pooling_mode_max_tokens": false,
+  "pooling_mode_mean_sqrt_len_tokens": false,
+  "pooling_mode_weightedmean_tokens": false,
+  "pooling_mode_lasttoken": false,
+  "include_prompt": true
+}
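The pooling configuration above enables only `pooling_mode_mean_tokens`: the sentence embedding is the average of the token embeddings at non-padding positions. A minimal NumPy sketch of that computation — the shapes and values here are illustrative toys, not taken from the model:

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, ignoring padding positions."""
    # token_embeddings: (seq_len, dim); attention_mask: (seq_len,) of 0/1
    mask = attention_mask[:, None].astype(float)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

tokens = np.arange(12, dtype=float).reshape(4, 3)  # 4 tokens, dim 3
mask = np.array([1, 1, 1, 0])                      # last position is padding
print(mean_pool(tokens, mask))                     # [3. 4. 5.]
```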
2_Dense/config.json ADDED
@@ -0,0 +1,6 @@
+{
+  "in_features": 768,
+  "out_features": 3072,
+  "bias": false,
+  "activation_function": "torch.nn.modules.linear.Identity"
+}
2_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ae32925ccef358b15b16f54cd9481dfa3393f255bad37bf4a4af12bb1769d12
+size 9437272
3_Dense/config.json ADDED
@@ -0,0 +1,6 @@
+{
+  "in_features": 3072,
+  "out_features": 768,
+  "bias": false,
+  "activation_function": "torch.nn.modules.linear.Identity"
+}
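Together, `2_Dense` and `3_Dense` form a bias-free linear bottleneck (768 → 3072 → 768) with `Identity` as the activation, so the pair reduces to two matrix multiplications. A sketch with random placeholder weights — these are not the trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
W_up = rng.standard_normal((768, 3072)) * 0.02    # 2_Dense: in 768, out 3072
W_down = rng.standard_normal((3072, 768)) * 0.02  # 3_Dense: in 3072, out 768

x = rng.standard_normal(768)  # pooled sentence embedding
y = (x @ W_up) @ W_down       # no bias, identity activation
print(y.shape)                # (768,)
```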
3_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6adac5248141551dc977a58b3737762d29e5abcd98f44e9833706e721bd2d07
+size 9437272
README.md ADDED
@@ -0,0 +1,452 @@
+---
+tags:
+- sentence-transformers
+- sentence-similarity
+- feature-extraction
+- dense
+- generated_from_trainer
+- dataset_size:200
+- loss:MultipleNegativesRankingLoss
+base_model: google/embeddinggemma-300m
+widget:
+- source_sentence: Alice and Bob play a game involving a circle whose circumference
+    is divided by 12 equally-spaced points. The points are numbered clockwise, from
+    1 to 12. Both start on point 12. Alice moves clockwise and Bob, counterclockwise.
+    In a turn of the game, Alice moves 5 points clockwise and Bob moves 9 points counterclockwise.
+    The game ends when they stop on the same point. How many turns will this take?
+  sentences:
+  - How many distinct triangles can be drawn using three of the dots below as vertices?
+    [asy]dot(origin^^(1,0)^^(2,0)^^(0,1)^^(1,1)^^(2,1));[/asy]
+  - Tess runs counterclockwise around rectangular block $JKLM$. She lives at corner
+    $J$. Which graph could represent her straight-line distance from home?
+  - Which triplet of numbers has a sum NOT equal to 1?
+- source_sentence: Which of these numbers is less than its reciprocal?
+  sentences:
+  - A white cylindrical silo has a diameter of 30 feet and a height of 80 feet. A
+    red stripe with a horizontal width of 3 feet is painted on the silo, as shown,
+    making two complete revolutions around it. What is the area of the stripe in square
+    feet? [asy] size(250);defaultpen(linewidth(0.8)); draw(ellipse(origin, 3, 1));
+    fill((3,0)--(3,2)--(-3,2)--(-3,0)--cycle, white); draw((3,0)--(3,16)^^(-3,0)--(-3,16));
+    draw((0, 15)--(3, 12)^^(0, 16)--(3, 13)); filldraw(ellipse((0, 16), 3, 1), white,
+    black); draw((-3,11)--(3, 5)^^(-3,10)--(3, 4)); draw((-3,2)--(0,-1)^^(-3,1)--(-1,-0.89));
+    draw((0,-1)--(0,15), dashed); draw((3,-2)--(3,-4)^^(-3,-2)--(-3,-4)); draw((-7,0)--(-5,0)^^(-7,16)--(-5,16));
+    draw((3,-3)--(-3,-3), Arrows(6)); draw((-6,0)--(-6,16), Arrows(6)); draw((-2,9)--(-1,9),
+    Arrows(3)); label("$3$", (-1.375,9.05), dir(260), fontsize(7)); label("$A$", (0,15),
+    N); label("$B$", (0,-1), NE); label("$30$", (0, -3), S); label("$80$", (-6, 8),
+    W);[/asy]
+  - Aunt Anna is $42$ years old. Caitlin is $5$ years younger than Brianna, and Brianna
+    is half as old as Aunt Anna. How old is Caitlin?
+  - Tess runs counterclockwise around rectangular block $JKLM$. She lives at corner
+    $J$. Which graph could represent her straight-line distance from home?
+- source_sentence: 'Problems 8,9 and 10 use the data found in the accompanying paragraph
+    and table: Juan organizes the stamps in his collection by country and by the decade
+    in which they were issued. The prices he paid for them at a stamp shop were: Brazil
+    and France, 6 cents each, Peru 4 cents each, and Spain 5 cents each. (Brazil and
+    Peru are South American countries and France and Spain are in Europe.) [asy] /*
+    AMC8 2002 #8, 9, 10 Problem */ size(3inch, 1.5inch); for ( int y = 0; y <= 5;
+    ++y ) { draw((0,y)--(18,y)); } draw((0,0)--(0,5)); draw((6,0)--(6,5)); draw((9,0)--(9,5));
+    draw((12,0)--(12,5)); draw((15,0)--(15,5)); draw((18,0)--(18,5)); draw(scale(0.8)*"50s",
+    (7.5,4.5)); draw(scale(0.8)*"4", (7.5,3.5)); draw(scale(0.8)*"8", (7.5,2.5));
+    draw(scale(0.8)*"6", (7.5,1.5)); draw(scale(0.8)*"3", (7.5,0.5)); draw(scale(0.8)*"60s",
+    (10.5,4.5)); draw(scale(0.8)*"7", (10.5,3.5)); draw(scale(0.8)*"4", (10.5,2.5));
+    draw(scale(0.8)*"4", (10.5,1.5)); draw(scale(0.8)*"9", (10.5,0.5)); draw(scale(0.8)*"70s",
+    (13.5,4.5)); draw(scale(0.8)*"12", (13.5,3.5)); draw(scale(0.8)*"12", (13.5,2.5));
+    draw(scale(0.8)*"6", (13.5,1.5)); draw(scale(0.8)*"13", (13.5,0.5)); draw(scale(0.8)*"80s",
+    (16.5,4.5)); draw(scale(0.8)*"8", (16.5,3.5)); draw(scale(0.8)*"15", (16.5,2.5));
+    draw(scale(0.8)*"10", (16.5,1.5)); draw(scale(0.8)*"9", (16.5,0.5)); label(scale(0.8)*"Country",
+    (3,4.5)); label(scale(0.8)*"Brazil", (3,3.5)); label(scale(0.8)*"France", (3,2.5));
+    label(scale(0.8)*"Peru", (3,1.5)); label(scale(0.8)*"Spain", (3,0.5)); label(scale(0.9)*"Juan''s
+    Stamp Collection", (9,0), S); label(scale(0.9)*"Number of Stamps by Decade", (9,5),
+    N);[/asy] The average price of his ''70s stamps is closest to'
+  sentences:
+  - A rectangular garden 60 feet long and 20 feet wide is enclosed by a fence. To
+    make the garden larger, while using the same fence, its shape is changed to a
+    square. By how many square feet does this enlarge the garden?
+  - 'Problems 8,9 and 10 use the data found in the accompanying paragraph and table:
+    Juan organizes the stamps in his collection by country and by the decade in which
+    they were issued. The prices he paid for them at a stamp shop were: Brazil and
+    France, 6 cents each, Peru 4 cents each, and Spain 5 cents each. (Brazil and Peru
+    are South American countries and France and Spain are in Europe.) [asy] /* AMC8
+    2002 #8, 9, 10 Problem */ size(3inch, 1.5inch); for ( int y = 0; y <= 5; ++y )
+    { draw((0,y)--(18,y)); } draw((0,0)--(0,5)); draw((6,0)--(6,5)); draw((9,0)--(9,5));
+    draw((12,0)--(12,5)); draw((15,0)--(15,5)); draw((18,0)--(18,5)); draw(scale(0.8)*"50s",
+    (7.5,4.5)); draw(scale(0.8)*"4", (7.5,3.5)); draw(scale(0.8)*"8", (7.5,2.5));
+    draw(scale(0.8)*"6", (7.5,1.5)); draw(scale(0.8)*"3", (7.5,0.5)); draw(scale(0.8)*"60s",
+    (10.5,4.5)); draw(scale(0.8)*"7", (10.5,3.5)); draw(scale(0.8)*"4", (10.5,2.5));
+    draw(scale(0.8)*"4", (10.5,1.5)); draw(scale(0.8)*"9", (10.5,0.5)); draw(scale(0.8)*"70s",
+    (13.5,4.5)); draw(scale(0.8)*"12", (13.5,3.5)); draw(scale(0.8)*"12", (13.5,2.5));
+    draw(scale(0.8)*"6", (13.5,1.5)); draw(scale(0.8)*"13", (13.5,0.5)); draw(scale(0.8)*"80s",
+    (16.5,4.5)); draw(scale(0.8)*"8", (16.5,3.5)); draw(scale(0.8)*"15", (16.5,2.5));
+    draw(scale(0.8)*"10", (16.5,1.5)); draw(scale(0.8)*"9", (16.5,0.5)); label(scale(0.8)*"Country",
+    (3,4.5)); label(scale(0.8)*"Brazil", (3,3.5)); label(scale(0.8)*"France", (3,2.5));
+    label(scale(0.8)*"Peru", (3,1.5)); label(scale(0.8)*"Spain", (3,0.5)); label(scale(0.9)*"Juan''s
+    Stamp Collection", (9,0), S); label(scale(0.9)*"Number of Stamps by Decade", (9,5),
+    N);[/asy] In dollars and cents, how much did his South American stamps issued
+    before the ’70s cost him?'
+  - Three mutually tangent spheres of radius $1$ rest on a horizontal plane. A sphere
+    of radius $2$ rests on them. What is the distance from the plane to the top of
+    the larger sphere?
+- source_sentence: Two-thirds of the people in a room are seated in three-fourths
+    of the chairs. The rest of the people are standing. If there are $6$ empty chairs,
+    how many people are in the room?
+  sentences:
+  - Spinners $A$ and $B$ are spun. On each spinner, the arrow is equally likely to
+    land on each number. What is the probability that the product of the two spinners'
+    numbers is even?
+  - On the AMC 8 contest Billy answers 13 questions correctly, answers 7 questions
+    incorrectly and doesn't answer the last 5. What is his score?
+  - How many different combinations of \$5 bills and \$2 bills can be used to make
+    a total of \$17? Order does not matter in this problem.
+- source_sentence: When $1999^{2000}$ is divided by $5$, the remainder is
+  sentences:
+  - Square $ABCD$ has sides of length 3. Segments $CM$ and $CN$ divide the square's
+    area into three equal parts. How long is segment $CM$?
+  - Problems 14, 15 and 16 involve Mrs. Reed's English assignment. A Novel Assignment
+    The students in Mrs. Reed's English class are reading the same 760-page novel.
+    Three friends, Alice, Bob and Chandra, are in the class. Alice reads a page in
+    20 seconds, Bob reads a page in 45 seconds and Chandra reads a page in 30 seconds.
+    Chandra and Bob, who each have a copy of the book, decide that they can save time
+    by "team reading" the novel. In this scheme, Chandra will read from page 1 to
+    a certain page and Bob will read from the next page through page 760, finishing
+    the book. When they are through they will tell each other about the part they
+    read. What is the last page that Chandra should read so that she and Bob spend
+    the same amount of time reading the novel?
+  - $(6?3) + 4 - (2 - 1) = 5.$ To make this statement true, the question mark between
+    the 6 and the 3 should be replaced by
+pipeline_tag: sentence-similarity
+library_name: sentence-transformers
+---
+
+# SentenceTransformer based on google/embeddinggemma-300m
+
+This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+## Model Details
+
+### Model Description
+- **Model Type:** Sentence Transformer
+- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision 57c266a740f537b4dc058e1b0cda161fd15afa75 -->
+- **Maximum Sequence Length:** 2048 tokens
+- **Output Dimensionality:** 768 dimensions
+- **Similarity Function:** Cosine Similarity
+<!-- - **Training Dataset:** Unknown -->
+<!-- - **Language:** Unknown -->
+<!-- - **License:** Unknown -->
+
+### Model Sources
+
+- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
+- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+### Full Model Architecture
+
+```
+SentenceTransformer(
+  (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
+  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+  (2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
+  (3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
+  (4): Normalize()
+)
+```
+
+## Usage
+
+### Direct Usage (Sentence Transformers)
+
+First install the Sentence Transformers library:
+
+```bash
+pip install -U sentence-transformers
+```
+
+Then you can load this model and run inference.
+```python
+from sentence_transformers import SentenceTransformer
+
+# Download from the 🤗 Hub
+model = SentenceTransformer("blachang28/poc-gemma")
+# Run inference
+queries = [
+    "When $1999^{2000}$ is divided by $5$, the remainder is",
+]
+documents = [
+    "Square $ABCD$ has sides of length 3. Segments $CM$ and $CN$ divide the square's area into three equal parts. How long is segment $CM$?",
+    '$(6?3) + 4 - (2 - 1) = 5.$ To make this statement true, the question mark between the 6 and the 3 should be replaced by',
+    'Problems 14, 15 and 16 involve Mrs. Reed\'s English assignment. A Novel Assignment The students in Mrs. Reed\'s English class are reading the same 760-page novel. Three friends, Alice, Bob and Chandra, are in the class. Alice reads a page in 20 seconds, Bob reads a page in 45 seconds and Chandra reads a page in 30 seconds. Chandra and Bob, who each have a copy of the book, decide that they can save time by "team reading" the novel. In this scheme, Chandra will read from page 1 to a certain page and Bob will read from the next page through page 760, finishing the book. When they are through they will tell each other about the part they read. What is the last page that Chandra should read so that she and Bob spend the same amount of time reading the novel?',
+]
+query_embeddings = model.encode_query(queries)
+document_embeddings = model.encode_document(documents)
+print(query_embeddings.shape, document_embeddings.shape)
+# [1, 768] [3, 768]
+
+# Get the similarity scores for the embeddings
+similarities = model.similarity(query_embeddings, document_embeddings)
+print(similarities)
+# tensor([[ 0.7710, -0.4438, -0.0152]])
+```
+
+<!--
+### Direct Usage (Transformers)
+
+<details><summary>Click to see the direct usage in Transformers</summary>
+
+</details>
+-->
+
+<!--
+### Downstream Usage (Sentence Transformers)
+
+You can finetune this model on your own dataset.
+
+<details><summary>Click to expand</summary>
+
+</details>
+-->
+
+<!--
+### Out-of-Scope Use
+
+*List how the model may foreseeably be misused and address what users ought not to do with the model.*
+-->
+
+<!--
+## Bias, Risks and Limitations
+
+*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+-->
+
+<!--
+### Recommendations
+
+*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+-->
+
+## Training Details
+
+### Training Dataset
+
+#### Unnamed Dataset
+
+* Size: 200 training samples
+* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
+* Approximate statistics based on the first 200 samples:
+  |         | anchor | positive | negative |
+  |:--------|:-------|:---------|:---------|
+  | type    | string | string   | string   |
+  | details | <ul><li>min: 11 tokens</li><li>mean: 81.76 tokens</li><li>max: 806 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 80.08 tokens</li><li>max: 806 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 79.01 tokens</li><li>max: 797 tokens</li></ul> |
+* Samples:
+  | anchor | positive | negative |
+  |:-------|:---------|:---------|
+  | <code>$(6?3) + 4 - (2 - 1) = 5.$ To make this statement true, the question mark between the 6 and the 3 should be replaced by</code> | <code>What is the degree measure of the smaller angle formed by the hands of a clock at 10 o'clock?</code> | <code>An insect lives on the surface of a regular tetrahedron with edges of length 1. It wishes to travel on the surface of the tetrahedron from the midpoint of one edge to the midpoint of the opposite edge. What is the length of the shortest such trip? (Note: Two edges of a tetrahedron are opposite if they have no common endpoint.)</code> |
+  | <code>What is the degree measure of the smaller angle formed by the hands of a clock at 10 o'clock?</code> | <code>Which triplet of numbers has a sum NOT equal to 1?</code> | <code>Corners are sliced off a unit cube so that the six faces each become regular octagons. What is the total volume of the removed tetrahedra?</code> |
+  | <code>Which triplet of numbers has a sum NOT equal to 1?</code> | <code>What is the degree measure of the smaller angle formed by the hands of a clock at 10 o'clock?</code> | <code>How many pairs of positive integers $(a,b)$ are there such that $\text{gcd}(a,b)=1$ and $\frac{a}{b} + \frac{14b}{9a}$ is an integer? $\mathrm {(A)}\ 4\quad\mathrm {(B)}\ 6\quad\mathrm {(C)}\ 9\quad\mathrm {(D)}\ 12\quad\mathrm {(E)}\ \text{infinitely many}$</code> |
+* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+  ```json
+  {
+      "scale": 20.0,
+      "similarity_fct": "cos_sim",
+      "gather_across_devices": false
+  }
+  ```
+
+### Training Hyperparameters
+#### Non-Default Hyperparameters
+
+- `per_device_train_batch_size`: 1
+- `learning_rate`: 2e-05
+- `num_train_epochs`: 5
+- `warmup_ratio`: 0.1
+- `prompts`: task: classification | query:
+
+#### All Hyperparameters
+<details><summary>Click to expand</summary>
+
+- `overwrite_output_dir`: False
+- `do_predict`: False
+- `eval_strategy`: no
+- `prediction_loss_only`: True
+- `per_device_train_batch_size`: 1
+- `per_device_eval_batch_size`: 8
+- `per_gpu_train_batch_size`: None
+- `per_gpu_eval_batch_size`: None
+- `gradient_accumulation_steps`: 1
+- `eval_accumulation_steps`: None
+- `torch_empty_cache_steps`: None
+- `learning_rate`: 2e-05
+- `weight_decay`: 0.0
+- `adam_beta1`: 0.9
+- `adam_beta2`: 0.999
+- `adam_epsilon`: 1e-08
+- `max_grad_norm`: 1.0
+- `num_train_epochs`: 5
+- `max_steps`: -1
+- `lr_scheduler_type`: linear
+- `lr_scheduler_kwargs`: {}
+- `warmup_ratio`: 0.1
+- `warmup_steps`: 0
+- `log_level`: passive
+- `log_level_replica`: warning
+- `log_on_each_node`: True
+- `logging_nan_inf_filter`: True
+- `save_safetensors`: True
+- `save_on_each_node`: False
+- `save_only_model`: False
+- `restore_callback_states_from_checkpoint`: False
+- `no_cuda`: False
+- `use_cpu`: False
+- `use_mps_device`: False
+- `seed`: 42
+- `data_seed`: None
+- `jit_mode_eval`: False
+- `bf16`: False
+- `fp16`: False
+- `fp16_opt_level`: O1
+- `half_precision_backend`: auto
+- `bf16_full_eval`: False
+- `fp16_full_eval`: False
+- `tf32`: None
+- `local_rank`: 0
+- `ddp_backend`: None
+- `tpu_num_cores`: None
+- `tpu_metrics_debug`: False
+- `debug`: []
+- `dataloader_drop_last`: False
+- `dataloader_num_workers`: 0
+- `dataloader_prefetch_factor`: None
+- `past_index`: -1
+- `disable_tqdm`: False
+- `remove_unused_columns`: True
+- `label_names`: None
+- `load_best_model_at_end`: False
+- `ignore_data_skip`: False
+- `fsdp`: []
+- `fsdp_min_num_params`: 0
+- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+- `fsdp_transformer_layer_cls_to_wrap`: None
+- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+- `parallelism_config`: None
+- `deepspeed`: None
+- `label_smoothing_factor`: 0.0
+- `optim`: adamw_torch_fused
+- `optim_args`: None
+- `adafactor`: False
+- `group_by_length`: False
+- `length_column_name`: length
+- `project`: huggingface
+- `trackio_space_id`: trackio
+- `ddp_find_unused_parameters`: None
+- `ddp_bucket_cap_mb`: None
+- `ddp_broadcast_buffers`: False
+- `dataloader_pin_memory`: True
+- `dataloader_persistent_workers`: False
+- `skip_memory_metrics`: True
+- `use_legacy_prediction_loop`: False
+- `push_to_hub`: False
+- `resume_from_checkpoint`: None
+- `hub_model_id`: None
+- `hub_strategy`: every_save
+- `hub_private_repo`: None
+- `hub_always_push`: False
+- `hub_revision`: None
+- `gradient_checkpointing`: False
+- `gradient_checkpointing_kwargs`: None
+- `include_inputs_for_metrics`: False
+- `include_for_metrics`: []
+- `eval_do_concat_batches`: True
+- `fp16_backend`: auto
+- `push_to_hub_model_id`: None
+- `push_to_hub_organization`: None
+- `mp_parameters`:
+- `auto_find_batch_size`: False
+- `full_determinism`: False
+- `torchdynamo`: None
+- `ray_scope`: last
+- `ddp_timeout`: 1800
+- `torch_compile`: False
+- `torch_compile_backend`: None
+- `torch_compile_mode`: None
+- `include_tokens_per_second`: False
+- `include_num_input_tokens_seen`: no
+- `neftune_noise_alpha`: None
+- `optim_target_modules`: None
+- `batch_eval_metrics`: False
+- `eval_on_start`: False
+- `use_liger_kernel`: False
+- `liger_kernel_config`: None
+- `eval_use_gather_object`: False
+- `average_tokens_across_devices`: True
+- `prompts`: task: classification | query:
+- `batch_sampler`: batch_sampler
+- `multi_dataset_batch_sampler`: proportional
+- `router_mapping`: {}
+- `learning_rate_mapping`: {}
+
+</details>
+
+### Training Logs
+| Epoch | Step | Training Loss |
+|:-----:|:----:|:-------------:|
+| 1.0   | 200  | 1.3954        |
+| 2.0   | 400  | 0.8112        |
+| 3.0   | 600  | 0.0855        |
+| 4.0   | 800  | 0.0529        |
+| 5.0   | 1000 | 0.0018        |
+
+
+### Framework Versions
+- Python: 3.12.12
+- Sentence Transformers: 5.1.2
+- Transformers: 4.57.2
+- PyTorch: 2.9.0+cu126
+- Accelerate: 1.12.0
+- Datasets: 4.0.0
+- Tokenizers: 0.22.1
+
+## Citation
+
+### BibTeX
+
+#### Sentence Transformers
+```bibtex
+@inproceedings{reimers-2019-sentence-bert,
+    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+    author = "Reimers, Nils and Gurevych, Iryna",
+    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+    month = "11",
+    year = "2019",
+    publisher = "Association for Computational Linguistics",
+    url = "https://arxiv.org/abs/1908.10084",
+}
+```
+
+#### MultipleNegativesRankingLoss
+```bibtex
+@misc{henderson2017efficient,
+    title={Efficient Natural Language Response Suggestion for Smart Reply},
+    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+    year={2017},
+    eprint={1705.00652},
+    archivePrefix={arXiv},
+    primaryClass={cs.CL}
+}
+```
+
+<!--
+## Glossary
+
+*Clearly define terms in order to be accessible across audiences.*
+-->
+
+<!--
+## Model Card Authors
+
+*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+-->
+
+<!--
+## Model Card Contact
+
+*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+-->
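The card above trains with `MultipleNegativesRankingLoss` (scale 20.0, cosine similarity): each anchor's paired positive is the target, and every other document in the batch serves as an in-batch negative, giving a cross-entropy over the scaled similarity row. A NumPy sketch of that objective — the batch here is a toy, not the training data:

```python
import numpy as np

def mnrl_loss(q, d, scale=20.0):
    """In-batch-negatives cross-entropy; positives sit on the diagonal."""
    # q, d: (batch, dim) with L2-normalized rows, so q @ d.T is cosine similarity
    scores = scale * (q @ d.T)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy, target = row index

# Perfectly aligned, mutually orthogonal pairs -> near-zero loss
q = d = np.eye(4)
print(round(mnrl_loss(q, d), 6))  # 0.0
```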
added_tokens.json ADDED
@@ -0,0 +1,3 @@
+{
+  "<image_soft_token>": 262144
+}
config.json ADDED
@@ -0,0 +1,60 @@
+{
+  "_sliding_window_pattern": 6,
+  "architectures": [
+    "Gemma3TextModel"
+  ],
+  "attention_bias": false,
+  "attention_dropout": 0.0,
+  "attn_logit_softcapping": null,
+  "bos_token_id": 2,
+  "dtype": "float32",
+  "eos_token_id": 1,
+  "final_logit_softcapping": null,
+  "head_dim": 256,
+  "hidden_activation": "gelu_pytorch_tanh",
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": 1152,
+  "layer_types": [
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "full_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "full_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "full_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "sliding_attention",
+    "full_attention"
+  ],
+  "max_position_embeddings": 2048,
+  "model_type": "gemma3_text",
+  "num_attention_heads": 3,
+  "num_hidden_layers": 24,
+  "num_key_value_heads": 1,
+  "pad_token_id": 0,
+  "query_pre_attn_scalar": 256,
+  "rms_norm_eps": 1e-06,
+  "rope_local_base_freq": 10000.0,
+  "rope_scaling": null,
+  "rope_theta": 1000000.0,
+  "sliding_window": 257,
+  "transformers_version": "4.57.2",
+  "use_bidirectional_attention": true,
+  "use_cache": true,
+  "vocab_size": 262144
+}
config_sentence_transformers.json ADDED
@@ -0,0 +1,26 @@
+{
+  "model_type": "SentenceTransformer",
+  "__version__": {
+    "sentence_transformers": "5.1.2",
+    "transformers": "4.57.2",
+    "pytorch": "2.9.0+cu126"
+  },
+  "prompts": {
+    "query": "task: search result | query: ",
+    "document": "title: none | text: ",
+    "BitextMining": "task: search result | query: ",
+    "Clustering": "task: clustering | query: ",
+    "Classification": "task: classification | query: ",
+    "InstructionRetrieval": "task: code retrieval | query: ",
+    "MultilabelClassification": "task: classification | query: ",
+    "PairClassification": "task: sentence similarity | query: ",
+    "Reranking": "task: search result | query: ",
+    "Retrieval": "task: search result | query: ",
+    "Retrieval-query": "task: search result | query: ",
+    "Retrieval-document": "title: none | text: ",
+    "STS": "task: sentence similarity | query: ",
+    "Summarization": "task: summarization | query: "
+  },
+  "default_prompt_name": null,
+  "similarity_fn_name": "cosine"
+}
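`config_sentence_transformers.json` maps task names to prompt prefixes; sentence-transformers prepends the selected prompt to the input text before tokenizing (this is what `encode_query` and `encode_document` rely on). A minimal sketch of that prepending behavior, using two prompts from the file above:

```python
# Two prompt prefixes taken from config_sentence_transformers.json
prompts = {
    "query": "task: search result | query: ",
    "document": "title: none | text: ",
}

def apply_prompt(text: str, name: str) -> str:
    # sentence-transformers prepends the named prompt verbatim before tokenizing
    return prompts[name] + text

print(apply_prompt("When $1999^{2000}$ is divided by $5$, the remainder is", "query"))
# task: search result | query: When $1999^{2000}$ is divided by $5$, the remainder is
```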
model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e8c2af920e027ad90a72df8f9f7b841e72bc84c0adf773adb1dc0fac06db901
+size 1211486072
modules.json ADDED
@@ -0,0 +1,32 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  },
+  {
+    "idx": 2,
+    "name": "2",
+    "path": "2_Dense",
+    "type": "sentence_transformers.models.Dense"
+  },
+  {
+    "idx": 3,
+    "name": "3",
+    "path": "3_Dense",
+    "type": "sentence_transformers.models.Dense"
+  },
+  {
+    "idx": 4,
+    "name": "4",
+    "path": "4_Normalize",
+    "type": "sentence_transformers.models.Normalize"
+  }
+]
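`modules.json` wires the five modules into a sequential pipeline: each module's output feeds the next, in `idx` order. A toy sketch of that composition, with plain functions and tiny dimensions standing in for the real modules:

```python
import numpy as np

def normalize(x):
    # 4_Normalize: L2-normalize the final embedding
    return x / np.linalg.norm(x)

# Stand-ins for the pipeline stages, applied in idx order (toy sizes, not 768/3072)
pipeline = [
    lambda x: x,                    # 0: Transformer (token embeddings), elided here
    lambda x: x.mean(axis=0),       # 1: Pooling (mean over tokens)
    lambda x: x @ np.ones((3, 5)),  # 2: Dense, 3 -> 5
    lambda x: x @ np.ones((5, 3)),  # 3: Dense, 5 -> 3
    normalize,                      # 4: Normalize
]

x = np.arange(6, dtype=float).reshape(2, 3)  # 2 "tokens", dim 3
for module in pipeline:
    x = module(x)
print(round(float(np.linalg.norm(x)), 6))  # 1.0
```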
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 2048,
+  "do_lower_case": false
+}
special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
+{
+  "boi_token": "<start_of_image>",
+  "bos_token": {
+    "content": "<bos>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "eoi_token": "<end_of_image>",
+  "eos_token": {
+    "content": "<eos>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "image_token": "<image_soft_token>",
+  "pad_token": {
+    "content": "<pad>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "unk_token": {
+    "content": "<unk>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
+}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:216e2a79606fe879c9f17c529c71cd241338407fd5646b595ffd3c4b9ea1d503
+size 33385262
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
+size 4689074
tokenizer_config.json ADDED
The diff for this file is too large to render.