---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4984
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: '<think>
Let’s think through this step by step
rp = 500/month
fp = 10/day
lp = 60/lesson
lpw = 2 lessons/week
yp = 12 months/year
total = (500 × 12) + (10 × 365) + (60 × 2 × 52)
total = 6000 + 3650 + 6240
total = 15890
</think>
\boxed{15890}'
sentences:
- '<think>
Let’s think through this step by step
rp = 500/month
fp = 10/day
lp = 60/lesson
tp = (500 × 12) + (10 × 365) + (60 × 2 × 52)
tp = 6000 + 3650 + 6240
tp = 15890
</think>
\boxed{15890}'
- '<think>
Let’s think through this step by step
h1 = 500 ft
h2 = 2 * h1 = 2 * 500 = 1000 ft
Total height = h1 + h2 = 500 + 1000 = 1500 ft
</think>
\boxed{1500}'
- '<think>
Let’s think through this step by step
fc = 200
lc = 500
dc = 500 × 0.8 = 400
nc = 200 - 50 = 150
tc = 150 + 400 = 550
</think>
\boxed{550}'
- source_sentence: '<think>
Let’s think through this step by step
s = 5 MB/song
m1 = 10 songs
m2 = 15 songs
m3 = 3 songs
total = (10 + 15 + 3) × 5
total = 28 × 5 = 140 MB
</think>
\boxed{140}'
sentences:
- '<think>
Let’s think through this step by step
s = 5 MB/song
m1 = 10 songs
m2 = 15 songs
m3 = 3 songs
Total memory = (10 + 15 + 3) × 5
Total memory = 28 × 5 = 140 MB
</think>
\boxed{140}'
- '<think>
Let’s think through this step by step
total_slices = 12
eaten_slices = 12 × (1/4) = 3
kept_slices = 12 - 3 = 9
</think>
\boxed{9}'
- '<think>
Let’s think through this step by step
Bald Mountain = 1.5 meters = 150 cm
Billy Mountain = 3.5 meters = 350 cm
Mount Pilot = 126 cm
Difference between Billy Mountain and Bald Mountain = 350 - 150 = 200 cm
Difference between Mount Pilot and Bald Mountain = 126 - 150 = -24 cm
</think>
\boxed{200}'
- source_sentence: '<think>
Let’s think through this step by step
Strength training: 1 hour/session × 3 sessions/week = 3 hours/week
Boxing training: 1.5 hours/session × 4 sessions/week = 6 hours/week
Total training hours per week = 3 + 6 = 9 hours/week
</think>
\boxed{9}'
sentences:
- '<think>
Let’s think through this step by step
t = 5
p = 4
s = 6
r = 5 / 2 = 2.5
income = (2 × 5) + 4 + (4 × 6) + (6 × 2.5)
income = 10 + 4 + 24 + 15
income = 53
</think>
\boxed{53}'
- '<think>
Let’s think through this step by step
Strength training: 1 hour/session × 3 sessions/week = 3 hours/week
Boxing training: 1.5 hours/session × 4 sessions/week = 6 hours/week
Total training hours per week = 3 + 6 = 9 hours/week
</think>
\boxed{9}'
- '<think>
Let’s think through this step by step
Let d be the number of ducks sold
Earnings from selling chickens = 5 × 8 = 40
Earnings from selling ducks = 10d
Total earnings = 40 + 10d
Amount spent on wheelbarrow = (1/2)(40 + 10d)
Amount earned from selling wheelbarrow = 60
40 + 10d = 2(60)
10d = 80
d = 8
</think>
\boxed{8}'
- source_sentence: '<think>
Let’s think through this step by step
up = 3 flights
down = 3 flights
total = (3 + 3) × 5 = 30 flights
</think>
\boxed{30}'
sentences:
- '<think>
Let’s think through this step by step
up = 3 flights
down = 3 flights
total = (3 + 3) × 5 = 30 flights
</think>
\boxed{30}'
- '<think>
Let’s think through this step by step
J = M + 10
J + 10 = 25
(M + 10) + 10 = 25
M + 20 = 25
M = 5
J = 5 + 10 = 15
Sum = 5 + 15 = 20
</think>
\boxed{20}'
- '<think>
Let’s think through this step by step
total_money = 7(1) + 4(5) + 2(10) + 1(20) = 7 + 20 + 20 + 20 = 67
cost = 67 - 4 = 63
pounds = 63 / 3 = 21
days = 7
average = 21 / 7 = 3
</think>
\boxed{3}'
- source_sentence: '<think>
Let’s think through this step by step
n = 4
sp = 75
t = 36
spt = 36 / 4 = 9
op = 75 + 9 = 84
</think>
\boxed{84}'
sentences:
- '<think>
Let’s think through this step by step
Let B be Benedict''s house size
K = 10000 sq ft
K = 4B + 600
10000 = 4B + 600
4B = 9400
B = 2350 sq ft
</think>
\boxed{2350}'
- '<think>
Let’s think through this step by step
Total throws = 80
No pass thrown = 30% of 80 = 0.3 × 80 = 24
Sacked for a loss = 0.5 × 24 = 12
</think>
\boxed{12}'
- '<think>
Let’s think through this step by step
n = 4
sp = 75
t = 36
spt = t / n = 36 / 4 = 9
op = sp + spt = 75 + 9 = 84
</think>
\boxed{84}'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.8793284892973376
name: Pearson Cosine
- type: spearman_cosine
value: 0.876484495899188
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.879334132854901
name: Pearson Cosine
- type: spearman_cosine
value: 0.8764936381058213
name: Spearman Cosine
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
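For reference, the same three-module stack can be assembled by hand. This is a minimal sketch using the `sentence_transformers.models` API, with the base checkpoint and settings taken from the Model Description above:
```python
from sentence_transformers import SentenceTransformer, models

# Transformer backbone (BertModel), truncating inputs at 256 tokens
word_embedding_model = models.Transformer(
    "sentence-transformers/all-MiniLM-L6-v2", max_seq_length=256
)
# Mean pooling over token embeddings -> one 384-dimensional vector per input
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(), pooling_mode="mean"
)
# L2-normalization so dot product equals cosine similarity
normalize_model = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize_model])
```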
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("aisuko/encoder-L6-V2")
# Run inference
sentences = [
'<think>\nLet’s think through this step by step\nn = 4\nsp = 75\nt = 36\nspt = 36 / 4 = 9\nop = 75 + 9 = 84\n</think>\n\\boxed{84}',
'<think>\nLet’s think through this step by step\nn = 4\nsp = 75\nt = 36\nspt = t / n = 36 / 4 = 9\nop = sp + spt = 75 + 9 = 84\n</think>\n\\boxed{84}',
"<think>\nLet’s think through this step by step\nLet B be Benedict's house size\nK = 10000 sq ft\nK = 4B + 600\n10000 = 4B + 600\n4B = 9400\nB = 2350 sq ft\n</think>\n\\boxed{2350}",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
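Because the training data pairs equivalent reasoning traces, the embeddings also work for semantic search over a corpus of traces. A minimal sketch, reusing the `model` and `sentences` from the snippet above:
```python
from sentence_transformers import util

# Use the first trace as the query and the remaining traces as the corpus
query_embedding = model.encode(sentences[0], convert_to_tensor=True)
corpus_embeddings = model.encode(sentences[1:], convert_to_tensor=True)

# Rank the corpus entries by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(f"corpus_id={hit['corpus_id']}, score={hit['score']:.4f}")
```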
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts-dev | sts-test |
|:--------------------|:-----------|:-----------|
| pearson_cosine | 0.8793 | 0.8793 |
| **spearman_cosine** | **0.8765** | **0.8765** |
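Both scores come from `EmbeddingSimilarityEvaluator`, which correlates the cosine similarity of embedding pairs with gold similarity scores. A minimal sketch of running the evaluator; the sentence pairs and gold scores below are placeholders, not the actual sts-dev/sts-test data:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("aisuko/encoder-L6-V2")

# Placeholder pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["A man is playing guitar.", "A cat sits on the mat."],
    sentences2=["Someone is playing a guitar.", "A dog runs in the park."],
    scores=[0.9, 0.1],
    name="sts-dev",
)
results = evaluator(model)
print(results)  # includes Pearson and Spearman cosine correlations
```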
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### csv
* Dataset: csv
* Size: 4,984 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 34 tokens</li><li>mean: 66.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 67.03 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code><think><br>Let’s think through this step by step<br>ht_hare = 14 inches<br>ht_camel = 24 × 14 = 336 inches<br>ht_camel_ft = 336 / 12 = 28 feet<br></think><br>\boxed{28}</code> | <code><think><br>Let’s think through this step by step<br>ht_hare = 14 inches<br>ht_camel = 24 * 14 = 336 inches<br>ht_camel_ft = 336 / 12 = 28 feet<br></think><br>\boxed{28}</code> |
| <code><think><br>Let’s think through this step by step<br>cpb = 5<br>sp = 90<br>pmpb = 2<br>tp = 5 × 5 = 25<br>tpmp = 5 × 2 = 10<br>profit = 90 - 25 - 10 = 55<br></think><br>\boxed{55}</code> | <code><think><br>Let’s think through this step by step<br>cp = 5 × 5 = 25<br>sp = 90<br>p = sp - cp - (5 × 2) = 90 - 25 - 10 = 55<br></think><br>\boxed{55}</code> |
| <code><think><br>Let’s think through this step by step<br>Time to iron a blouse = 15 minutes<br>Time to iron a dress = 20 minutes<br>Time spent on blouses = 2 hours = 120 minutes<br>Time spent on dresses = 3 hours = 180 minutes<br>Number of blouses ironed = 120 / 15 = 8<br>Number of dresses ironed = 180 / 20 = 9<br>Total pieces of clothes ironed = 8 + 9 = 17<br></think><br>\boxed{17}</code> | <code><think><br>Let’s think through this step by step<br>Time to iron 1 blouse = 15 minutes<br>Time to iron 1 dress = 20 minutes<br>Time spent on blouses = 2 hours = 120 minutes<br>Time spent on dresses = 3 hours = 180 minutes<br>Number of blouses ironed = 120 / 15 = 8<br>Number of dresses ironed = 180 / 20 = 9<br>Total pieces of clothes ironed = 8 + 9 = 17<br></think><br>\boxed{17}</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
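MultipleNegativesRankingLoss treats each (anchor, positive) row as a positive pair and uses the other positives in the batch as negatives, which is why the `no_duplicates` batch sampler matters. A minimal sketch of constructing the loss with the parameters above:
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Cosine similarities are multiplied by scale=20.0 before the cross-entropy
# over in-batch negatives; the other rows' positives serve as negatives.
loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)
```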
### Evaluation Dataset
#### csv
* Dataset: csv
* Size: 4,984 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 33 tokens</li><li>mean: 66.68 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 66.71 tokens</li><li>max: 161 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code><think><br>Let’s think through this step by step<br>sg = 36<br>sc = 1/4<br>fl = 1/3<br>sg_left = sg - (sg × sc) - (sg × (1 - sc) × fl)<br>sg_left = 36 - (36 × 1/4) - (36 × (1 - 1/4) × 1/3)<br>sg_left = 36 - 9 - 6<br>sg_left = 21<br></think><br>\boxed{21}</code> | <code><think><br>Let’s think through this step by step<br>sg = 36<br>sc = 1/4<br>fl = 1/3<br>sg_left = sg - (sg × sc) - (sg × (1 - sc) × fl)<br>sg_left = 36 - (36 × 1/4) - (36 × (1 - 1/4) × 1/3)<br>sg_left = 36 - 9 - 6<br>sg_left = 21<br></think><br>\boxed{21}</code> |
| <code><think><br>Let’s think through this step by step<br>d1 = 125 miles<br>d2 = 223 miles<br>d3 = 493 - (125 + 223)<br>d3 = 145 miles<br></think><br>\boxed{145}</code> | <code><think><br>Let’s think through this step by step<br>d1 = 125 miles<br>d2 = 223 miles<br>d3 = 493 - (125 + 223)<br>d3 = 145 miles<br></think><br>\boxed{145}</code> |
| <code><think><br>Let’s think through this step by step<br>Total workdays = 2 weeks × 5 days/week = 10 days<br>Paid vacation days = 6 days<br>Unpaid vacation days = 10 - 6 = 4 days<br>Total pay = 15 × 8 = $120/day<br>Missed pay = 4 × 120 = $480<br></think><br>\boxed{480}</code> | <code><think><br>Let’s think through this step by step<br>Total workdays = 2 weeks × 5 days/week = 10 days<br>Paid vacation days = 6 days<br>Unpaid vacation days = 10 - 6 = 4 days<br>Total pay = 15 × 8 = $120/day<br>Missed pay = 4 × 120 = $480<br></think><br>\boxed{480}</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 20
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
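A minimal sketch of a training run with the non-default hyperparameters listed above, using the sentence-transformers v3 trainer API; the CSV file names and output path are assumptions:
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Assumed CSV files with "anchor" and "positive" columns
dataset = load_dataset("csv", data_files={"train": "train.csv", "eval": "eval.csv"})

loss = losses.MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="training",
    num_train_epochs=20,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["eval"],
    loss=loss,
)
trainer.train()
```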
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:-------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1 | -1 | - | - | 0.8671 | - |
| 0.3953 | 100 | 0.0422 | 0.0031 | 0.8701 | - |
| 0.7905 | 200 | 0.0105 | 0.0017 | 0.8727 | - |
| 1.1858 | 300 | 0.0041 | 0.0016 | 0.8728 | - |
| 1.5810 | 400 | 0.0016 | 0.0011 | 0.8730 | - |
| 1.9763 | 500 | 0.0039 | 0.0021 | 0.8731 | - |
| 2.3715 | 600 | 0.0014 | 0.0020 | 0.8741 | - |
| 2.7668 | 700 | 0.0014 | 0.0017 | 0.8744 | - |
| 3.1621 | 800 | 0.0019 | 0.0009 | 0.8742 | - |
| 3.5573 | 900 | 0.0012 | 0.0011 | 0.8754 | - |
| 3.9526 | 1000 | 0.0016 | 0.0015 | 0.8760 | - |
| 4.3478 | 1100 | 0.0021 | 0.0011 | 0.8763 | - |
| 4.7431 | 1200 | 0.0006 | 0.0009 | 0.8753 | - |
| 5.1383 | 1300 | 0.0004 | 0.0009 | 0.8753 | - |
| 5.5336 | 1400 | 0.0008 | 0.0008 | 0.8751 | - |
| 5.9289 | 1500 | 0.0004 | 0.0004 | 0.8743 | - |
| 6.3241 | 1600 | 0.0009 | 0.0008 | 0.8758 | - |
| 6.7194 | 1700 | 0.0005 | 0.0009 | 0.8747 | - |
| 7.1146 | 1800 | 0.0004 | 0.0006 | 0.8742 | - |
| 7.5099 | 1900 | 0.0003 | 0.0010 | 0.8748 | - |
| 7.9051 | 2000 | 0.0006 | 0.0008 | 0.8742 | - |
| 8.3004 | 2100 | 0.0005 | 0.0007 | 0.8744 | - |
| 8.6957 | 2200 | 0.0003 | 0.0006 | 0.8748 | - |
| 9.0909 | 2300 | 0.0005 | 0.0012 | 0.8749 | - |
| 9.4862 | 2400 | 0.0007 | 0.0006 | 0.8762 | - |
| 9.8814 | 2500 | 0.0003 | 0.0009 | 0.8762 | - |
| 10.2767 | 2600 | 0.0004 | 0.0007 | 0.8759 | - |
| 10.6719 | 2700 | 0.0005 | 0.0005 | 0.8760 | - |
| 11.0672 | 2800 | 0.0005 | 0.0007 | 0.8754 | - |
| 11.4625 | 2900 | 0.0002 | 0.0008 | 0.8749 | - |
| 11.8577 | 3000 | 0.0002 | 0.0007 | 0.8749 | - |
| 12.2530 | 3100 | 0.0003 | 0.0007 | 0.8752 | - |
| 12.6482 | 3200 | 0.0004 | 0.0008 | 0.8760 | - |
| 13.0435 | 3300 | 0.0002 | 0.0008 | 0.8767 | - |
| 13.4387 | 3400 | 0.0002 | 0.0007 | 0.8763 | - |
| 13.8340 | 3500 | 0.0002 | 0.0007 | 0.8763 | - |
| 14.2292 | 3600 | 0.0001 | 0.0007 | 0.8764 | - |
| 14.6245 | 3700 | 0.0003 | 0.0006 | 0.8765 | - |
| 15.0198 | 3800 | 0.0002 | 0.0005 | 0.8757 | - |
| 15.4150 | 3900 | 0.0002 | 0.0004 | 0.8760 | - |
| 15.8103 | 4000 | 0.0002 | 0.0005 | 0.8765 | - |
| 16.2055 | 4100 | 0.0002 | 0.0005 | 0.8757 | - |
| 16.6008 | 4200 | 0.0002 | 0.0006 | 0.8758 | - |
| 16.9960 | 4300 | 0.0002 | 0.0006 | 0.8758 | - |
| 17.3913 | 4400 | 0.0001 | 0.0005 | 0.8761 | - |
| 17.7866 | 4500 | 0.0002 | 0.0005 | 0.8765 | - |
| 18.1818 | 4600 | 0.0001 | 0.0005 | 0.8767 | - |
| 18.5771 | 4700 | 0.0004 | 0.0004 | 0.8765 | - |
| 18.9723 | 4800 | 0.0002 | 0.0004 | 0.8765 | - |
| 19.3676 | 4900 | 0.0001 | 0.0004 | 0.8765 | - |
| 19.7628 | 5000 | 0.0001 | 0.0004 | 0.8765 | - |
| -1 | -1 | - | - | - | 0.8765 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.51.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.5.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->