Training in progress, epoch 4, checkpoint
last-checkpoint/README.md CHANGED

@@ -7,7 +7,6 @@ tags:
 - generated_from_trainer
 - dataset_size:291522
 - loss:MultipleNegativesSymmetricRankingLoss
-base_model: sentence-transformers/all-MiniLM-L6-v2
 widget:
 - source_sentence: cream 21 baby oil with almond oil
   sentences:
@@ -41,7 +40,7 @@ library_name: sentence-transformers
 metrics:
 - cosine_accuracy
 model-index:
-- name: SentenceTransformer
+- name: SentenceTransformer
   results:
   - task:
       type: triplet
@@ -51,19 +50,19 @@ model-index:
       type: unknown
     metrics:
     - type: cosine_accuracy
-      value: 0.
+      value: 0.9330878257751465
       name: Cosine Accuracy
 ---
 
-# SentenceTransformer
+# SentenceTransformer
 
-This is a [sentence-transformers](https://www.SBERT.net) model
+This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
 
 ## Model Details
 
 ### Model Description
 - **Model Type:** Sentence Transformer
-- **Base model:** [
+<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
 - **Maximum Sequence Length:** 256 tokens
 - **Output Dimensionality:** 384 dimensions
 - **Similarity Function:** Cosine Similarity
@@ -116,9 +115,9 @@ print(embeddings.shape)
 # Get the similarity scores for the embeddings
 similarities = model.similarity(embeddings, embeddings)
 print(similarities)
-# tensor([[1.0000, 0.
-# [0.
-# [0.
+# tensor([[1.0000, 0.7646, 0.3896],
+#         [0.7646, 1.0000, 0.3910],
+#         [0.3896, 0.3910, 1.0000]])
 ```
 
 <!--
@@ -155,7 +154,7 @@ You can finetune this model on your own dataset.
 
 | Metric              | Value      |
 |:--------------------|:-----------|
-| **cosine_accuracy** | **0.
+| **cosine_accuracy** | **0.9331** |
 
 <!--
 ## Bias, Risks and Limitations
@@ -230,6 +229,7 @@ You can finetune this model on your own dataset.
 - `per_device_train_batch_size`: 256
 - `per_device_eval_batch_size`: 256
 - `weight_decay`: 0.001
+- `num_train_epochs`: 5
 - `warmup_steps`: 1138
 - `fp16`: True
 - `dataloader_num_workers`: 4
@@ -260,7 +260,7 @@ You can finetune this model on your own dataset.
 - `adam_beta2`: 0.999
 - `adam_epsilon`: 1e-08
 - `max_grad_norm`: 1.0
-- `num_train_epochs`:
+- `num_train_epochs`: 5
 - `max_steps`: -1
 - `lr_scheduler_type`: linear
 - `lr_scheduler_kwargs`: {}
@@ -364,13 +364,9 @@ You can finetune this model on your own dataset.
 </details>
 
 ### Training Logs
-| Epoch
-|
-
-| 0.0009 | 1 | 5.8495 | - | - |
-| 1.0 | 1139 | 3.0136 | 0.8482 | 0.9113 |
-| 2.0 | 2278 | 2.2096 | 0.7465 | 0.9241 |
-| 3.0 | 3417 | 1.966 | 0.6980 | 0.9337 |
+| Epoch | Step | Training Loss | Validation Loss | cosine_accuracy |
+|:-----:|:----:|:-------------:|:---------------:|:---------------:|
+| 4.0 | 4556 | 1.8731 | 0.7003 | 0.9331 |
 
 
 ### Framework Versions
last-checkpoint/model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:2752eb8ccd6b0b27067ab743dbf88be10d7f0ec0acd07f271adac6466a8f5533
 size 90864192
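Each of the binary checkpoint files in this commit is stored as a Git LFS pointer: a small text file recording the spec version, the SHA-256 of the real blob, and its byte size, in exactly the three-line format shown above. A sketch of generating such a pointer for an arbitrary blob (`make_lfs_pointer` is a hypothetical helper, not part of the git-lfs tooling):

```python
import hashlib

def make_lfs_pointer(blob: bytes) -> str:
    """Build a Git LFS pointer file for a blob, in the format shown above."""
    oid = hashlib.sha256(blob).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(blob)}\n"
    )

pointer = make_lfs_pointer(b"example checkpoint bytes")
```

Because the pointer is derived entirely from the blob, re-uploading an identical file produces an identical pointer, which is why only the `oid` line changes between checkpoints of the same size.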
last-checkpoint/optimizer.pt CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:5c1aba1b25c91aca84ea8033a997c1f341126a19d27ece980f574b412e4f2590
 size 180607738
last-checkpoint/rng_state.pth CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4a52c27da178d4afa8ec10888c749f1ddbdcd600f8a415730f56aaf2049e7873
 size 14244
last-checkpoint/scaler.pt CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:62e15eeadf456bba5e7d820c26dd489bf5b9eaafa907595755c9e54b984e4c99
 size 988
last-checkpoint/scheduler.pt CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3090648633fa78302c01f4e8a9a19ea7264f91a0a6f0c04a8378cf412f6c472b
 size 1064
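The scheduler state checkpointed above corresponds to the card's `lr_scheduler_type: linear` with `warmup_steps: 1138` and, per the trainer state, 5695 total steps: linear warmup to the base learning rate, then linear decay to zero. A sketch of that schedule, where the base learning rate of 5e-05 is an assumption (the HF Trainer default) since the diff does not record the actual value:

```python
def linear_schedule_lr(step: int, base_lr: float = 5e-05,
                       warmup_steps: int = 1138, max_steps: int = 5695) -> float:
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0 at
    max_steps -- the shape of transformers' `linear` scheduler.
    NOTE: base_lr=5e-05 is an assumption; the diff does not show it."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (max_steps - step) / (max_steps - warmup_steps))
```

Under that base-LR assumption, the schedule gives roughly 1.25e-05 around step 4556, consistent with the `learning_rate` logged in trainer_state.json below.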
last-checkpoint/trainer_state.json CHANGED

@@ -2,9 +2,9 @@
   "best_global_step": null,
   "best_metric": null,
   "best_model_checkpoint": null,
-  "epoch":
+  "epoch": 4.0,
   "eval_steps": 500,
-  "global_step":
+  "global_step": 4556,
   "is_hyper_param_search": false,
   "is_local_process_zero": true,
   "is_world_process_zero": true,
@@ -63,12 +63,28 @@
       "eval_samples_per_second": 274.957,
       "eval_steps_per_second": 1.099,
       "step": 3417
+    },
+    {
+      "epoch": 4.0,
+      "grad_norm": 9.933394432067871,
+      "learning_rate": 1.251920122887865e-05,
+      "loss": 1.8731,
+      "step": 4556
+    },
+    {
+      "epoch": 4.0,
+      "eval_cosine_accuracy": 0.9330878257751465,
+      "eval_loss": 0.700294554233551,
+      "eval_runtime": 33.7886,
+      "eval_samples_per_second": 281.308,
+      "eval_steps_per_second": 1.125,
+      "step": 4556
     }
   ],
   "logging_steps": 500,
-  "max_steps":
+  "max_steps": 5695,
   "num_input_tokens_seen": 0,
-  "num_train_epochs":
+  "num_train_epochs": 5,
   "save_steps": 500,
   "stateful_callbacks": {
     "TrainerControl": {
@@ -77,7 +93,7 @@
       "should_evaluate": false,
       "should_log": false,
       "should_save": true,
-      "should_training_stop":
+      "should_training_stop": false
     },
     "attributes": {}
   }
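The `eval_cosine_accuracy` recorded above is the triplet metric from the model card: the fraction of (anchor, positive, negative) triples where the anchor is closer, by cosine similarity, to its positive than to its negative. A minimal sketch of that computation with toy 2-dim vectors standing in for the real 384-dim embeddings:

```python
import math

def cosine_accuracy(triplets):
    """Fraction of triplets where sim(anchor, positive) > sim(anchor, negative)."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    hits = sum(
        1 for anchor, pos, neg in triplets
        if cos(anchor, pos) > cos(anchor, neg)
    )
    return hits / len(triplets)

# Two toy triplets: the first is ranked correctly, the second is not.
triplets = [
    ([1.0, 0.0], [0.9, 0.1], [0.0, 1.0]),
    ([1.0, 0.0], [0.0, 1.0], [0.9, 0.1]),
]
```

An accuracy of 0.9331 thus means about 93% of the evaluation triplets were ranked correctly at this checkpoint.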
last-checkpoint/training_args.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b486c431839a52a655f907407f9ba44cb3ea4d7311c1b455747f81e30659d4c4
 size 5752