Training in progress, step 3510
- Information-Retrieval_evaluation_val_results.csv +2 -0
- README.md +72 -210
- eval/Information-Retrieval_evaluation_val_results.csv +36 -0
- final_metrics.json +16 -0
- model.safetensors +1 -1
- training_args.bin +1 -1
Information-Retrieval_evaluation_val_results.csv
ADDED
@@ -0,0 +1,2 @@
+epoch,steps,cosine-Accuracy@1,cosine-Accuracy@3,cosine-Accuracy@5,cosine-Precision@1,cosine-Recall@1,cosine-Precision@3,cosine-Recall@3,cosine-Precision@5,cosine-Recall@5,cosine-MRR@1,cosine-MRR@5,cosine-MRR@10,cosine-NDCG@10,cosine-MAP@100
+-1,-1,0.908,0.9684,0.9834,0.908,0.908,0.3228,0.9684,0.19667999999999997,0.9834,0.908,0.9386633333333337,0.9400269841269848,0.9532296698470627,0.9404621256346036
README.md
CHANGED
@@ -5,108 +5,38 @@ tags:
 - feature-extraction
 - dense
 - generated_from_trainer
-- dataset_size:
 - loss:MultipleNegativesRankingLoss
 base_model: prajjwal1/bert-small
 widget:
-- source_sentence: How do I
 sentences:
--
-- How do I
-- What
-- source_sentence:
 sentences:
--
-- What
-
-
-- source_sentence:
 sentences:
--
--
--
-- source_sentence: What are
 sentences:
-- What are
--
-- What are some
-- source_sentence: What
-at Opus Bank?
 sentences:
--
-
--
-Bank ?
-- What are some tips on making it through the job interview process at Opus Bank?
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
-metrics:
-- cosine_accuracy@1
-- cosine_accuracy@3
-- cosine_accuracy@5
-- cosine_precision@1
-- cosine_precision@3
-- cosine_precision@5
-- cosine_recall@1
-- cosine_recall@3
-- cosine_recall@5
-- cosine_ndcg@10
-- cosine_mrr@1
-- cosine_mrr@5
-- cosine_mrr@10
-- cosine_map@100
-model-index:
-- name: SentenceTransformer based on prajjwal1/bert-small
-results:
-- task:
-type: information-retrieval
-name: Information Retrieval
-dataset:
-name: val
-type: val
-metrics:
-- type: cosine_accuracy@1
-value: 0.903
-name: Cosine Accuracy@1
-- type: cosine_accuracy@3
-value: 0.9652
-name: Cosine Accuracy@3
-- type: cosine_accuracy@5
-value: 0.9802
-name: Cosine Accuracy@5
-- type: cosine_precision@1
-value: 0.903
-name: Cosine Precision@1
-- type: cosine_precision@3
-value: 0.32173333333333337
-name: Cosine Precision@3
-- type: cosine_precision@5
-value: 0.19603999999999996
-name: Cosine Precision@5
-- type: cosine_recall@1
-value: 0.903
-name: Cosine Recall@1
-- type: cosine_recall@3
-value: 0.9652
-name: Cosine Recall@3
-- type: cosine_recall@5
-value: 0.9802
-name: Cosine Recall@5
-- type: cosine_ndcg@10
-value: 0.9497950442756341
-name: Cosine Ndcg@10
-- type: cosine_mrr@1
-value: 0.903
-name: Cosine Mrr@1
-- type: cosine_mrr@5
-value: 0.93429
-name: Cosine Mrr@5
-- type: cosine_mrr@10
-value: 0.93595873015873
-name: Cosine Mrr@10
-- type: cosine_map@100
-value: 0.9364845314523799
-name: Cosine Map@100
 ---

 # SentenceTransformer based on prajjwal1/bert-small
@@ -155,12 +85,12 @@ Then you can load this model and run inference.
 from sentence_transformers import SentenceTransformer

 # Download from the 🤗 Hub
-model = SentenceTransformer("
 # Run inference
 sentences = [
-'What
-'
-'
 ]
 embeddings = model.encode(sentences)
 print(embeddings.shape)
@@ -169,9 +99,9 @@ print(embeddings.shape)
 # Get the similarity scores for the embeddings
 similarities = model.similarity(embeddings, embeddings)
 print(similarities)
-# tensor([[1.0000,
-# [
-# [0.
 ```

 <!--
@@ -198,32 +128,6 @@ You can finetune this model on your own dataset.
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->

-## Evaluation
-
-### Metrics
-
-#### Information Retrieval
-
-* Dataset: `val`
-* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
-
-| Metric | Value |
-|:-------------------|:-----------|
-| cosine_accuracy@1 | 0.903 |
-| cosine_accuracy@3 | 0.9652 |
-| cosine_accuracy@5 | 0.9802 |
-| cosine_precision@1 | 0.903 |
-| cosine_precision@3 | 0.3217 |
-| cosine_precision@5 | 0.196 |
-| cosine_recall@1 | 0.903 |
-| cosine_recall@3 | 0.9652 |
-| cosine_recall@5 | 0.9802 |
-| **cosine_ndcg@10** | **0.9498** |
-| cosine_mrr@1 | 0.903 |
-| cosine_mrr@5 | 0.9343 |
-| cosine_mrr@10 | 0.936 |
-| cosine_map@100 | 0.9365 |
-
 <!--
 ## Bias, Risks and Limitations

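The deleted section above refers to the `InformationRetrievalEvaluator` used on the `val` split. A minimal sketch of how such an evaluator is typically wired up is shown below; the query, corpus, and relevance mappings are hypothetical placeholders, not the actual validation data, and the base model id is only the one named in this card.

```python
# Illustrative sketch only: constructing an InformationRetrievalEvaluator of the
# kind referenced above. The queries/corpus/relevant_docs here are made up.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("prajjwal1/bert-small")  # base model named in this card

queries = {"q1": "How do I calculate IQ?"}                 # query id -> query text
corpus = {
    "d1": "What is the easiest way to know my IQ?",        # doc id -> doc text
    "d2": "What caused the Great Depression?",
}
relevant_docs = {"q1": {"d1"}}                             # query id -> relevant doc ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="val",
)
print(evaluator(model))  # dict of cosine accuracy/precision/recall/MRR/NDCG/MAP metrics
```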
@@ -242,45 +146,19 @@ You can finetune this model on your own dataset.

 #### Unnamed Dataset

-* Size:
-* Columns: <code>
-* Approximate statistics based on the first 1000 samples:
-| | anchor | positive | negative |
-|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
-| type | string | string | string |
-| details | <ul><li>min: 6 tokens</li><li>mean: 15.63 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.77 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.2 tokens</li><li>max: 75 tokens</li></ul> |
-* Samples:
-| anchor | positive | negative |
-|:---------------------------------------------------------|:---------------------------------------------------------|:----------------------------------------------------------------------------|
-| <code>How long did it take to develop Pokémon GO?</code> | <code>How long did it take to develop Pokémon GO?</code> | <code>Can I take more than one gym in Pokémon GO?</code> |
-| <code>How bad is 6/18 eyesight?</code> | <code>How bad is 6/18 eyesight?</code> | <code>How was bad eyesight dealt with in ancient and medieval times?</code> |
-| <code>How can I do learn speaking English easily?</code> | <code>How can I learn speaking English easily?</code> | <code>How can English do learn speaking Ieasily?</code> |
-* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
-```json
-{
-"scale": 20.0,
-"similarity_fct": "cos_sim",
-"gather_across_devices": false
-}
-```
-
-### Evaluation Dataset
-
-#### Unnamed Dataset
-
-* Size: 5,000 evaluation samples
-* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
 * Approximate statistics based on the first 1000 samples:
-| |
 |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
 | type | string | string | string |
-| details | <ul><li>min: 6 tokens</li><li>mean: 15.
 * Samples:
-
-
-| <code>
-| <code>
-| <code>
 * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
 ```json
 {
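The loss parameters listed above (scale 20.0, cosine similarity, no cross-device gathering) match the standard sentence-transformers construction. A minimal sketch follows; the base model is used here purely for illustration and is not the trained checkpoint from this commit.

```python
# Sketch of building MultipleNegativesRankingLoss with the parameters shown above.
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("prajjwal1/bert-small")  # illustration only
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,                    # "scale": 20.0
    similarity_fct=util.cos_sim,   # "similarity_fct": "cos_sim"
)
# Per the JSON above, gather_across_devices is false, which is the single-device default.
```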
@@ -293,49 +171,36 @@ You can finetune this model on your own dataset.
 ### Training Hyperparameters
 #### Non-Default Hyperparameters

-- `
-- `
-- `per_device_eval_batch_size`: 256
-- `learning_rate`: 2e-05
-- `weight_decay`: 0.001
-- `max_steps`: 1053
-- `warmup_ratio`: 0.1
 - `fp16`: True
-- `
-- `dataloader_num_workers`: 1
-- `dataloader_prefetch_factor`: 1
-- `load_best_model_at_end`: True
-- `optim`: adamw_torch
-- `ddp_find_unused_parameters`: False
-- `push_to_hub`: True
-- `hub_model_id`: redis/model-b-structured
-- `eval_on_start`: True

 #### All Hyperparameters
 <details><summary>Click to expand</summary>

 - `overwrite_output_dir`: False
 - `do_predict`: False
-- `eval_strategy`:
 - `prediction_loss_only`: True
-- `per_device_train_batch_size`:
-- `per_device_eval_batch_size`:
 - `per_gpu_train_batch_size`: None
 - `per_gpu_eval_batch_size`: None
 - `gradient_accumulation_steps`: 1
 - `eval_accumulation_steps`: None
 - `torch_empty_cache_steps`: None
-- `learning_rate`:
-- `weight_decay`: 0.
 - `adam_beta1`: 0.9
 - `adam_beta2`: 0.999
 - `adam_epsilon`: 1e-08
-- `max_grad_norm`: 1
-- `num_train_epochs`: 3
-- `max_steps`:
 - `lr_scheduler_type`: linear
 - `lr_scheduler_kwargs`: {}
-- `warmup_ratio`: 0.
 - `warmup_steps`: 0
 - `log_level`: passive
 - `log_level_replica`: warning
@@ -363,14 +228,14 @@ You can finetune this model on your own dataset.
 - `tpu_num_cores`: None
 - `tpu_metrics_debug`: False
 - `debug`: []
-- `dataloader_drop_last`:
-- `dataloader_num_workers`:
-- `dataloader_prefetch_factor`:
 - `past_index`: -1
 - `disable_tqdm`: False
 - `remove_unused_columns`: True
 - `label_names`: None
-- `load_best_model_at_end`:
 - `ignore_data_skip`: False
 - `fsdp`: []
 - `fsdp_min_num_params`: 0
@@ -380,23 +245,23 @@ You can finetune this model on your own dataset.
 - `parallelism_config`: None
 - `deepspeed`: None
 - `label_smoothing_factor`: 0.0
-- `optim`:
 - `optim_args`: None
 - `adafactor`: False
 - `group_by_length`: False
 - `length_column_name`: length
 - `project`: huggingface
 - `trackio_space_id`: trackio
-- `ddp_find_unused_parameters`:
 - `ddp_bucket_cap_mb`: None
 - `ddp_broadcast_buffers`: False
 - `dataloader_pin_memory`: True
 - `dataloader_persistent_workers`: False
 - `skip_memory_metrics`: True
 - `use_legacy_prediction_loop`: False
-- `push_to_hub`:
 - `resume_from_checkpoint`: None
-- `hub_model_id`:
 - `hub_strategy`: every_save
 - `hub_private_repo`: None
 - `hub_always_push`: False
@@ -423,35 +288,32 @@ You can finetune this model on your own dataset.
 - `neftune_noise_alpha`: None
 - `optim_target_modules`: None
 - `batch_eval_metrics`: False
-- `eval_on_start`:
 - `use_liger_kernel`: False
 - `liger_kernel_config`: None
 - `eval_use_gather_object`: False
 - `average_tokens_across_devices`: True
 - `prompts`: None
 - `batch_sampler`: batch_sampler
-- `multi_dataset_batch_sampler`:
 - `router_mapping`: {}
 - `learning_rate_mapping`: {}

 </details>

 ### Training Logs
-| Epoch
-|
-| 0
-| 0.
-| 0.
-
-| 1.
-| 1.
-
-
-| 2.
-
-| **2.849** | **1000** | **0.1234** | **0.0925** | **0.9498** |
-
-* The bold row denotes the saved checkpoint.

 ### Framework Versions
 - Python: 3.10.18

 - feature-extraction
 - dense
 - generated_from_trainer
+- dataset_size:100000
 - loss:MultipleNegativesRankingLoss
 base_model: prajjwal1/bert-small
 widget:
+- source_sentence: How do I calculate IQ?
 sentences:
+- What is the easiest way to know my IQ?
+- How do I calculate not IQ ?
+- What are some creative and innovative business ideas with less investment in India?
+- source_sentence: How can I learn martial arts in my home?
 sentences:
+- How can I learn martial arts by myself?
+- What are the advantages and disadvantages of investing in gold?
+- Can people see that I have looked at their pictures on instagram if I am not following
+them?
+- source_sentence: When Enterprise picks you up do you have to take them back?
 sentences:
+- Are there any software Training institute in Tuticorin?
+- When Enterprise picks you up do you have to take them back?
+- When Enterprise picks you up do them have to take youback?
+- source_sentence: What are some non-capital goods?
 sentences:
+- What are capital goods?
+- How is the value of [math]\pi[/math] calculated?
+- What are some non-capital goods?
+- source_sentence: What is the QuickBooks technical support phone number in New York?
 sentences:
+- What caused the Great Depression?
+- Can I apply for PR in Canada?
+- Which is the best QuickBooks Hosting Support Number in New York?
 pipeline_tag: sentence-similarity
 library_name: sentence-transformers
 ---

 # SentenceTransformer based on prajjwal1/bert-small
 from sentence_transformers import SentenceTransformer

 # Download from the 🤗 Hub
+model = SentenceTransformer("sentence_transformers_model_id")
 # Run inference
 sentences = [
+'What is the QuickBooks technical support phone number in New York?',
+'Which is the best QuickBooks Hosting Support Number in New York?',
+'Can I apply for PR in Canada?',
 ]
 embeddings = model.encode(sentences)
 print(embeddings.shape)

 # Get the similarity scores for the embeddings
 similarities = model.similarity(embeddings, embeddings)
 print(similarities)
+# tensor([[1.0000, 0.8563, 0.0594],
+# [0.8563, 1.0000, 0.1245],
+# [0.0594, 0.1245, 1.0000]])
 ```
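Building on the inference snippet above, the sketch below ranks the card's own example candidates against the first query by cosine similarity. It assumes a recent sentence-transformers release and uses the same placeholder model id as the snippet; it is not part of the repository.

```python
# Minimal retrieval-style usage built on the snippet above (illustrative only).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id from the card

query = "What is the QuickBooks technical support phone number in New York?"
candidates = [
    "Which is the best QuickBooks Hosting Support Number in New York?",
    "Can I apply for PR in Canada?",
]

query_emb = model.encode([query])
cand_embs = model.encode(candidates)
scores = model.similarity(query_emb, cand_embs)[0]  # cosine similarities for the query row

# Print candidates from most to least similar.
for score, text in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.4f}  {text}")
```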

 <!--
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->

 <!--
 ## Bias, Risks and Limitations


 #### Unnamed Dataset

+* Size: 100,000 training samples
+* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
 * Approximate statistics based on the first 1000 samples:
+| | sentence_0 | sentence_1 | sentence_2 |
 |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
 | type | string | string | string |
+| details | <ul><li>min: 6 tokens</li><li>mean: 15.79 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.68 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.37 tokens</li><li>max: 67 tokens</li></ul> |
 * Samples:
+| sentence_0 | sentence_1 | sentence_2 |
+|:-----------------------------------------------------------------|:-----------------------------------------------------------------|:----------------------------------------------------------------------------------|
+| <code>Is masturbating bad for boys?</code> | <code>Is masturbating bad for boys?</code> | <code>How harmful or unhealthy is masturbation?</code> |
+| <code>Does a train engine move in reverse?</code> | <code>Does a train engine move in reverse?</code> | <code>Time moves forward, not in reverse. Doesn't that make time a vector?</code> |
+| <code>What is the most badass thing anyone has ever done?</code> | <code>What is the most badass thing anyone has ever done?</code> | <code>anyone is the most badass thing Whathas ever done?</code> |
 * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
 ```json
 {

 ### Training Hyperparameters
 #### Non-Default Hyperparameters

+- `per_device_train_batch_size`: 64
+- `per_device_eval_batch_size`: 64
 - `fp16`: True
+- `multi_dataset_batch_sampler`: round_robin

 #### All Hyperparameters
 <details><summary>Click to expand</summary>

 - `overwrite_output_dir`: False
 - `do_predict`: False
+- `eval_strategy`: no
 - `prediction_loss_only`: True
+- `per_device_train_batch_size`: 64
+- `per_device_eval_batch_size`: 64
 - `per_gpu_train_batch_size`: None
 - `per_gpu_eval_batch_size`: None
 - `gradient_accumulation_steps`: 1
 - `eval_accumulation_steps`: None
 - `torch_empty_cache_steps`: None
+- `learning_rate`: 5e-05
+- `weight_decay`: 0.0
 - `adam_beta1`: 0.9
 - `adam_beta2`: 0.999
 - `adam_epsilon`: 1e-08
+- `max_grad_norm`: 1
+- `num_train_epochs`: 3
+- `max_steps`: -1
 - `lr_scheduler_type`: linear
 - `lr_scheduler_kwargs`: {}
+- `warmup_ratio`: 0.0
 - `warmup_steps`: 0
 - `log_level`: passive
 - `log_level_replica`: warning
 - `tpu_num_cores`: None
 - `tpu_metrics_debug`: False
 - `debug`: []
+- `dataloader_drop_last`: False
+- `dataloader_num_workers`: 0
+- `dataloader_prefetch_factor`: None
 - `past_index`: -1
 - `disable_tqdm`: False
 - `remove_unused_columns`: True
 - `label_names`: None
+- `load_best_model_at_end`: False
 - `ignore_data_skip`: False
 - `fsdp`: []
 - `fsdp_min_num_params`: 0
 - `parallelism_config`: None
 - `deepspeed`: None
 - `label_smoothing_factor`: 0.0
+- `optim`: adamw_torch_fused
 - `optim_args`: None
 - `adafactor`: False
 - `group_by_length`: False
 - `length_column_name`: length
 - `project`: huggingface
 - `trackio_space_id`: trackio
+- `ddp_find_unused_parameters`: None
 - `ddp_bucket_cap_mb`: None
 - `ddp_broadcast_buffers`: False
 - `dataloader_pin_memory`: True
 - `dataloader_persistent_workers`: False
 - `skip_memory_metrics`: True
 - `use_legacy_prediction_loop`: False
+- `push_to_hub`: False
 - `resume_from_checkpoint`: None
+- `hub_model_id`: None
 - `hub_strategy`: every_save
 - `hub_private_repo`: None
 - `hub_always_push`: False
 - `neftune_noise_alpha`: None
 - `optim_target_modules`: None
 - `batch_eval_metrics`: False
+- `eval_on_start`: False
 - `use_liger_kernel`: False
 - `liger_kernel_config`: None
 - `eval_use_gather_object`: False
 - `average_tokens_across_devices`: True
 - `prompts`: None
 - `batch_sampler`: batch_sampler
+- `multi_dataset_batch_sampler`: round_robin
 - `router_mapping`: {}
 - `learning_rate_mapping`: {}

 </details>

 ### Training Logs
+| Epoch | Step | Training Loss |
+|:------:|:----:|:-------------:|
+| 0.3199 | 500 | 0.4294 |
+| 0.6398 | 1000 | 0.1268 |
+| 0.9597 | 1500 | 0.1 |
+| 1.2796 | 2000 | 0.0792 |
+| 1.5995 | 2500 | 0.0706 |
+| 1.9194 | 3000 | 0.0687 |
+| 2.2393 | 3500 | 0.0584 |
+| 2.5592 | 4000 | 0.057 |
+| 2.8791 | 4500 | 0.0581 |
+

 ### Framework Versions
 - Python: 3.10.18
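The non-default hyperparameters reported in the updated card (per-device batch size 64, fp16, round-robin multi-dataset sampling, with the remaining values at library defaults such as 3 epochs and a 5e-05 learning rate) map onto a trainer configuration roughly like the sketch below. This is an illustration under those assumptions; the output directory name is made up and the repository's actual training script is not shown in this commit.

```python
# Sketch of a trainer configuration matching the hyperparameters listed in the card.
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    MultiDatasetBatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output/bert-small-quora",   # assumed path, not from this repo
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=3,
    learning_rate=5e-05,
    warmup_ratio=0.0,
    fp16=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```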
eval/Information-Retrieval_evaluation_val_results.csv
CHANGED
@@ -2,3 +2,39 @@ epoch,steps,cosine-Accuracy@1,cosine-Accuracy@3,cosine-Accuracy@5,cosine-Precisi
 0,0,0.7522,0.8716,0.8976,0.7522,0.7522,0.29053333333333337,0.8716,0.17951999999999999,0.8976,0.7522,0.8141766666666673,0.8179282539682551,0.8443897738734513,0.8201287535897707
 1.4245014245014245,500,0.8984,0.9626,0.9796,0.8984,0.8984,0.3208666666666667,0.9626,0.19591999999999998,0.9796,0.8984,0.9308266666666667,0.9325202380952388,0.9471587549365493,0.9330120361154303
 2.849002849002849,1000,0.903,0.9652,0.9802,0.903,0.903,0.32173333333333337,0.9652,0.19603999999999996,0.9802,0.903,0.93429,0.93595873015873,0.9497950442756341,0.9364845314523799
+0,0,0.752,0.8716,0.8976,0.752,0.752,0.29053333333333337,0.8716,0.17951999999999999,0.8976,0.752,0.8140766666666672,0.8178282539682551,0.8443159598241655,0.8200287780097951
+0.2849002849002849,100,0.8028,0.9336,0.9606,0.8028,0.8028,0.3112,0.9336,0.19211999999999999,0.9606,0.8028,0.8707999999999997,0.8734792063492066,0.8999628282412199,0.8744365809026206
+0.5698005698005698,200,0.8818,0.9486,0.969,0.8818,0.8818,0.3162,0.9486,0.19379999999999997,0.969,0.8818,0.9169700000000003,0.9188958730158733,0.9346227322971039,0.9197775668798819
+0.8547008547008547,300,0.8932,0.9548,0.9722,0.8932,0.8932,0.3182666666666667,0.9548,0.19443999999999997,0.9722,0.8932,0.92455,0.9262189682539682,0.9405238086421854,0.927085472401337
+1.1396011396011396,400,0.895,0.9592,0.9766,0.895,0.895,0.3197333333333333,0.9592,0.19532,0.9766,0.895,0.9275566666666664,0.9291465873015872,0.9436646680979195,0.9299204944081907
+1.4245014245014245,500,0.8976,0.9608,0.9786,0.8976,0.8976,0.3202666666666667,0.9608,0.19571999999999998,0.9786,0.8976,0.9302033333333336,0.9319340476190481,0.9464840368155372,0.93250380336534
+1.7094017094017095,600,0.9012,0.9624,0.9798,0.9012,0.9012,0.3208,0.9624,0.19596,0.9798,0.9012,0.9325766666666668,0.9341109523809528,0.9481312439186262,0.9346918726395761
+1.9943019943019942,700,0.9026,0.9642,0.9806,0.9026,0.9026,0.3214,0.9642,0.19612,0.9806,0.9026,0.9338233333333336,0.9354094444444451,0.9494016714102691,0.9358979207167986
+2.2792022792022792,800,0.9036,0.9636,0.9802,0.9036,0.9036,0.3212,0.9636,0.19603999999999996,0.9802,0.9036,0.9343600000000001,0.9360184126984125,0.9498159835882998,0.9365398459160087
+2.564102564102564,900,0.9044,0.9646,0.9818,0.9044,0.9044,0.32153333333333334,0.9646,0.19635999999999998,0.9818,0.9044,0.9354066666666666,0.9368538888888887,0.950475801688231,0.9373901118360287
+2.849002849002849,1000,0.9052,0.9666,0.9818,0.9052,0.9052,0.3222,0.9666,0.19635999999999998,0.9818,0.9052,0.9360233333333335,0.9375734920634928,0.9512084545091051,0.9380431058811802
+3.133903133903134,1100,0.9058,0.9664,0.9814,0.9058,0.9058,0.3221333333333333,0.9664,0.19627999999999998,0.9814,0.9058,0.9363133333333331,0.9379010317460321,0.9514451621024324,0.9383630230369934
+3.4188034188034186,1200,0.9074,0.9664,0.981,0.9074,0.9074,0.3221333333333333,0.9664,0.19619999999999999,0.981,0.9074,0.9370933333333331,0.9388146825396827,0.9522097571309723,0.9392568561033527
+3.7037037037037037,1300,0.9072,0.9662,0.9824,0.9072,0.9072,0.3220666666666667,0.9662,0.19647999999999996,0.9824,0.9072,0.937483333333333,0.9389458730158728,0.9522473041371328,0.939431839170165
+3.9886039886039883,1400,0.9074,0.9672,0.9814,0.9074,0.9074,0.3224,0.9672,0.19627999999999998,0.9814,0.9074,0.9374533333333331,0.939161507936508,0.9525437940729724,0.9396075466203089
+4.273504273504273,1500,0.9092,0.9672,0.9822,0.9092,0.9092,0.3224,0.9672,0.19643999999999998,0.9822,0.9092,0.9384999999999997,0.9401135714285714,0.9533003039075331,0.9405341566811126
+4.5584045584045585,1600,0.9096,0.9672,0.9822,0.9096,0.9096,0.3224,0.9672,0.19643999999999998,0.9822,0.9096,0.93891,0.9404971428571433,0.9535876607555805,0.9409278641122062
+4.843304843304844,1700,0.9082,0.969,0.982,0.9082,0.9082,0.323,0.969,0.19639999999999996,0.982,0.9082,0.9384000000000002,0.9400878571428579,0.953392717309442,0.9404840470265015
+5.128205128205128,1800,0.9094,0.968,0.9832,0.9094,0.9094,0.3226666666666667,0.968,0.19663999999999995,0.9832,0.9094,0.9391966666666669,0.9406966666666673,0.9539258798148182,0.9410781176707947
+5.413105413105413,1900,0.911,0.9686,0.9826,0.911,0.911,0.3228666666666667,0.9686,0.19651999999999997,0.9826,0.911,0.939926666666667,0.9414888888888896,0.9544263660055258,0.9418952414141466
+5.698005698005698,2000,0.9092,0.9686,0.9836,0.9092,0.9092,0.3228666666666667,0.9686,0.19671999999999998,0.9836,0.9092,0.9393733333333337,0.9407995238095244,0.9539831254763356,0.9411852477617797
+5.982905982905983,2100,0.9094,0.969,0.9842,0.9094,0.9094,0.323,0.969,0.19684,0.9842,0.9094,0.93955,0.9408544444444449,0.9539826116358073,0.9412627373593819
+6.267806267806268,2200,0.9106,0.969,0.9842,0.9106,0.9106,0.323,0.969,0.19683999999999996,0.9842,0.9106,0.9402733333333337,0.9415598412698417,0.9543896052297114,0.9420296751078869
+6.552706552706553,2300,0.9108,0.9694,0.9836,0.9108,0.9108,0.32313333333333333,0.9694,0.19671999999999998,0.9836,0.9108,0.9403300000000001,0.9417396825396829,0.9545687842767867,0.942182426313767
+6.837606837606837,2400,0.9112,0.9688,0.9838,0.9112,0.9112,0.32293333333333335,0.9688,0.19676,0.9838,0.9112,0.9404233333333335,0.9418663492063495,0.9547447641571389,0.9422939942002442
+7.122507122507122,2500,0.9106,0.9696,0.9836,0.9106,0.9106,0.3232,0.9696,0.19671999999999998,0.9836,0.9106,0.9403600000000001,0.9417945238095239,0.9546595330687436,0.9422427863667379
+7.407407407407407,2600,0.9106,0.9688,0.9846,0.9106,0.9106,0.32293333333333335,0.9688,0.19691999999999998,0.9846,0.9106,0.9403466666666666,0.9416591269841272,0.9546413408053799,0.9420736822388138
+7.6923076923076925,2700,0.9106,0.9698,0.984,0.9106,0.9106,0.3232666666666667,0.9698,0.1968,0.984,0.9106,0.9403766666666669,0.9418299206349211,0.9548614738888802,0.942199023046597
+7.977207977207978,2800,0.911,0.9692,0.984,0.911,0.911,0.3230666666666667,0.9692,0.1968,0.984,0.911,0.9405033333333332,0.9418522222222224,0.9546181138998727,0.9423265801462183
+8.262108262108262,2900,0.9096,0.9696,0.9838,0.9096,0.9096,0.3232,0.9696,0.19675999999999996,0.9838,0.9096,0.9398633333333333,0.9413222222222224,0.9544037711181301,0.9417392466654881
+8.547008547008547,3000,0.912,0.969,0.9844,0.912,0.912,0.323,0.969,0.19687999999999997,0.9844,0.912,0.9409866666666666,0.9423544444444447,0.9551243697761646,0.9427933736314108
+8.831908831908832,3100,0.911,0.9688,0.984,0.911,0.911,0.32293333333333335,0.9688,0.19679999999999997,0.984,0.911,0.94048,0.9418774603174607,0.9547236289052001,0.9423332612869388
+9.116809116809117,3200,0.9106,0.969,0.9838,0.9106,0.9106,0.323,0.969,0.19676,0.9838,0.9106,0.9404333333333332,0.9418641269841271,0.954723430874728,0.9423151289033547
+9.401709401709402,3300,0.9106,0.969,0.984,0.9106,0.9106,0.323,0.969,0.19679999999999995,0.984,0.9106,0.9403533333333333,0.9417519841269842,0.9546344737257203,0.9422058505967718
+9.686609686609687,3400,0.9106,0.969,0.9838,0.9106,0.9106,0.323,0.969,0.19675999999999996,0.9838,0.9106,0.94029,0.9417384126984129,0.9546651626751027,0.9421697455120135
+9.971509971509972,3500,0.9104,0.9688,0.9842,0.9104,0.9104,0.32293333333333335,0.9688,0.19683999999999996,0.9842,0.9104,0.9402533333333333,0.9416303174603176,0.954585167414727,0.9420641228013908
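A small sketch of inspecting this per-step evaluation log follows; it assumes pandas is available and uses the file path as committed here.

```python
# Read the per-step IR evaluation log committed above and show recent NDCG@10 / MAP@100.
import pandas as pd

df = pd.read_csv("eval/Information-Retrieval_evaluation_val_results.csv")
print(df[["epoch", "steps", "cosine-NDCG@10", "cosine-MAP@100"]].tail())
```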
final_metrics.json
ADDED
@@ -0,0 +1,16 @@
+{
+"val_cosine_accuracy@1": 0.908,
+"val_cosine_accuracy@3": 0.9684,
+"val_cosine_accuracy@5": 0.9834,
+"val_cosine_precision@1": 0.908,
+"val_cosine_precision@3": 0.3228,
+"val_cosine_precision@5": 0.19667999999999997,
+"val_cosine_recall@1": 0.908,
+"val_cosine_recall@3": 0.9684,
+"val_cosine_recall@5": 0.9834,
+"val_cosine_ndcg@10": 0.9532296698470627,
+"val_cosine_mrr@1": 0.908,
+"val_cosine_mrr@5": 0.9386633333333337,
+"val_cosine_mrr@10": 0.9400269841269848,
+"val_cosine_map@100": 0.9404621256346036
+}
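The final metrics file added above can be read back with the standard library; a minimal sketch:

```python
# Load the final validation metrics written in this commit.
import json

with open("final_metrics.json") as f:
    metrics = json.load(f)

print(metrics["val_cosine_ndcg@10"])  # 0.9532...
```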
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:45b4ce06b2a2d061461c9871a66396c951f2af0a1f2815786891af9f01b422cc
 size 114011616
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:b6f4887f5ef478963a7e033106fee2bacd4dadbcb37a8042cfd307bbace16506
 size 6161