Updating model weights

README.md CHANGED
@@ -36,6 +36,21 @@ widget:
 - children book
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
+metrics:
+- cosine_accuracy
+model-index:
+- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+  results:
+  - task:
+      type: triplet
+      name: Triplet
+    dataset:
+      name: Unknown
+      type: unknown
+    metrics:
+    - type: cosine_accuracy
+      value: 0.9443684816360474
+      name: Cosine Accuracy
 ---
 
 # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
@@ -99,9 +114,9 @@ print(embeddings.shape)
 # Get the similarity scores for the embeddings
 similarities = model.similarity(embeddings, embeddings)
 print(similarities)
-# tensor([[1.0000, 0.
-# [0.
-# [0.
+# tensor([[1.0000, 0.5029, 0.2692],
+#         [0.5029, 1.0000, 0.3907],
+#         [0.2692, 0.3907, 1.0000]])
 ```
 
 <!--
@@ -128,6 +143,18 @@ You can finetune this model on your own dataset.
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->
 
+## Evaluation
+
+### Metrics
+
+#### Triplet
+
+* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
+
+| Metric              | Value      |
+|:--------------------|:-----------|
+| **cosine_accuracy** | **0.9444** |
+
 <!--
 ## Bias, Risks and Limitations
 
@@ -334,6 +361,19 @@ You can finetune this model on your own dataset.
 
 </details>
 
+### Training Logs
+| Epoch  | Step | Training Loss | Validation Loss | cosine_accuracy |
+|:------:|:----:|:-------------:|:---------------:|:---------------:|
+| 0.0004 | 1    | 4.1437        | -               | -               |
+| 0.4232 | 1000 | 3.6457        | 0.9829          | 0.9363          |
+| 0.8464 | 2000 | 2.6898        | 0.9383          | 0.9378          |
+| 1.2693 | 3000 | 2.4448        | 0.9807          | 0.9374          |
+| 1.6922 | 4000 | 2.6158        | 0.9553          | 0.9423          |
+| 2.1150 | 5000 | 2.5157        | 0.9650          | 0.9416          |
+| 2.5378 | 6000 | 2.3962        | 0.9475          | 0.9438          |
+| 2.9607 | 7000 | 2.4768        | 0.9481          | 0.9444          |
+
+
 ### Framework Versions
 - Python: 3.11.13
 - Sentence Transformers: 5.1.2