LamaDiab committed on
Commit e452978 · verified · 1 Parent(s): 442a166

Updating model weights

Files changed (1):
  1. README.md (+40 -4)
README.md CHANGED
@@ -37,6 +37,21 @@ widget:
   - kids game
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
+metrics:
+- cosine_accuracy
+model-index:
+- name: SentenceTransformer
+  results:
+  - task:
+      type: triplet
+      name: Triplet
+    dataset:
+      name: Unknown
+      type: unknown
+    metrics:
+    - type: cosine_accuracy
+      value: 0.945607602596283
+      name: Cosine Accuracy
 ---
 
 # SentenceTransformer
@@ -100,9 +115,9 @@ print(embeddings.shape)
 # Get the similarity scores for the embeddings
 similarities = model.similarity(embeddings, embeddings)
 print(similarities)
-# tensor([[1.0000, 0.7198, 0.3823],
-#         [0.7198, 1.0000, 0.3737],
-#         [0.3823, 0.3737, 1.0000]])
+# tensor([[1.0000, 0.7013, 0.2786],
+#         [0.7013, 1.0000, 0.2947],
+#         [0.2786, 0.2947, 1.0000]])
 ```
 
 <!--
@@ -129,6 +144,18 @@ You can finetune this model on your own dataset.
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->
 
+## Evaluation
+
+### Metrics
+
+#### Triplet
+
+* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
+
+| Metric              | Value      |
+|:--------------------|:-----------|
+| **cosine_accuracy** | **0.9456** |
+
 <!--
 ## Bias, Risks and Limitations
 
@@ -202,6 +229,7 @@ You can finetune this model on your own dataset.
 - `per_device_train_batch_size`: 128
 - `per_device_eval_batch_size`: 128
 - `weight_decay`: 0.001
+- `num_train_epochs`: 6
 - `warmup_steps`: 2733
 - `fp16`: True
 - `dataloader_num_workers`: 2
@@ -232,7 +260,7 @@ You can finetune this model on your own dataset.
 - `adam_beta2`: 0.999
 - `adam_epsilon`: 1e-08
 - `max_grad_norm`: 1.0
-- `num_train_epochs`: 3
+- `num_train_epochs`: 6
 - `max_steps`: -1
 - `lr_scheduler_type`: linear
 - `lr_scheduler_kwargs`: {}
@@ -335,6 +363,14 @@ You can finetune this model on your own dataset.
 
 </details>
 
+### Training Logs
+| Epoch | Step  | Training Loss | Validation Loss | cosine_accuracy |
+|:-----:|:-----:|:-------------:|:---------------:|:---------------:|
+| 4.0   | 9112  | 1.4316        | 0.7736          | 0.9375          |
+| 5.0   | 11390 | 1.3415        | 0.7541          | 0.9435          |
+| 6.0   | 13668 | 1.2848        | 0.7366          | 0.9456          |
+
+
 ### Framework Versions
 - Python: 3.11.13
 - Sentence Transformers: 5.1.2
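
For context on the `cosine_accuracy` metric this commit adds: `TripletEvaluator` reports the fraction of (anchor, positive, negative) triplets in which the anchor's cosine similarity to its positive exceeds its similarity to its negative. A minimal NumPy sketch of that quantity, using toy vectors rather than the model's real embeddings (in practice the inputs would come from `model.encode(...)`):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_cosine_accuracy(anchors, positives, negatives):
    # Fraction of triplets where the anchor is closer (by cosine
    # similarity) to its positive than to its negative -- the quantity
    # TripletEvaluator reports as cosine_accuracy.
    correct = sum(
        cosine_sim(a, p) > cosine_sim(a, n)
        for a, p, n in zip(anchors, positives, negatives)
    )
    return correct / len(anchors)

# Toy 2-D embeddings, purely illustrative.
anchors   = np.array([[1.0, 0.0], [0.0, 1.0]])
positives = np.array([[0.9, 0.1], [0.1, 0.9]])
negatives = np.array([[0.0, 1.0], [1.0, 0.0]])
print(triplet_cosine_accuracy(anchors, positives, negatives))  # 1.0
```

A value of 0.9456, as in the table above, means about 94.6% of evaluation triplets were ranked correctly.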