---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4984
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: ' Let’s think through this step by step rp = 500/month fp = 10/day lp = 60/lesson lpw = 2 lessons/week yp = 12 months/year total = (500 × 12) + (10 × 365) + (60 × 2 × 52) total = 6000 + 3650 + 6240 total = 15890 \boxed{15890}'
  sentences:
  - ' Let’s think through this step by step rp = 500/month fp = 10/day lp = 60/lesson tp = (500 × 12) + (10 × 365) + (60 × 2 × 52) tp = 6000 + 3650 + 6240 tp = 15890 \boxed{15890}'
  - ' Let’s think through this step by step h1 = 500 ft h2 = 2 * h1 = 2 * 500 = 1000 ft Total height = h1 + h2 = 500 + 1000 = 1500 ft \boxed{1500}'
  - ' Let’s think through this step by step fc = 200 lc = 500 dc = 500 × 0.8 = 400 nc = 200 - 50 = 150 tc = 150 + 400 = 550 \boxed{550}'
- source_sentence: ' Let’s think through this step by step s = 5 MB/song m1 = 10 songs m2 = 15 songs m3 = 3 songs total = (10 + 15 + 3) × 5 total = 28 × 5 = 140 MB \boxed{140}'
  sentences:
  - ' Let’s think through this step by step s = 5 MB/song m1 = 10 songs m2 = 15 songs m3 = 3 songs Total memory = (10 + 15 + 3) × 5 Total memory = 28 × 5 = 140 MB \boxed{140}'
  - ' Let’s think through this step by step total_slices = 12 eaten_slices = 12 × (1/4) = 3 kept_slices = 12 - 3 = 9 \boxed{9}'
  - ' Let’s think through this step by step Bald Mountain = 1.5 meters = 150 cm Billy Mountain = 3.5 meters = 350 cm Mount Pilot = 126 cm Difference between Billy Mountain and Bald Mountain = 350 - 150 = 200 cm Difference between Mount Pilot and Bald Mountain = 126 - 150 = -24 cm \boxed{200}'
- source_sentence: ' Let’s think through this step by step Strength training: 1 hour/session × 3 sessions/week = 3 hours/week Boxing training: 1.5 hours/session × 4 sessions/week = 6 hours/week Total training hours per week = 3 + 6 = 9 hours/week \boxed{9}'
  sentences:
  - ' Let’s think through this step by step t = 5 p = 4 s = 6 r = 5 / 2 = 2.5 income = (2 × 5) + 4 + (4 × 6) + (6 × 2.5) income = 10 + 4 + 24 + 15 income = 53 \boxed{53}'
  - ' Let’s think through this step by step Strength training: 1 hour/session × 3 sessions/week = 3 hours/week Boxing training: 1.5 hours/session × 4 sessions/week = 6 hours/week Total training hours per week = 3 + 6 = 9 hours/week \boxed{9}'
  - ' Let’s think through this step by step Let d be the number of ducks sold Earnings from selling chickens = 5 × 8 = 40 Earnings from selling ducks = 10d Total earnings = 40 + 10d Amount spent on wheelbarrow = (1/2)(40 + 10d) Amount earned from selling wheelbarrow = 60 40 + 10d = 2(60) 10d = 80 d = 8 \boxed{8}'
- source_sentence: ' Let’s think through this step by step up = 3 flights down = 3 flights total = (3 + 3) × 5 = 30 flights \boxed{30}'
  sentences:
  - ' Let’s think through this step by step up = 3 flights down = 3 flights total = (3 + 3) × 5 = 30 flights \boxed{30}'
  - ' Let’s think through this step by step J = M + 10 J + 10 = 25 (M + 10) + 10 = 25 M + 20 = 25 M = 5 J = 5 + 10 = 15 Sum = 5 + 15 = 20 \boxed{20}'
  - ' Let’s think through this step by step total_money = 7(1) + 4(5) + 2(10) + 1(20) = 7 + 20 + 20 + 20 = 67 cost = 67 - 4 = 63 pounds = 63 / 3 = 21 days = 7 average = 21 / 7 = 3 \boxed{3}'
- source_sentence: ' Let’s think through this step by step n = 4 sp = 75 t = 36 spt = 36 / 4 = 9 op = 75 + 9 = 84 \boxed{84}'
  sentences:
  - ' Let’s think through this step by step Let B be Benedict''s house size K = 10000 sq ft K = 4B + 600 10000 = 4B + 600 4B = 9400 B = 2350 sq ft \boxed{2350}'
  - ' Let’s think through this step by step Total throws = 80 No pass thrown = 30% of 80 = 0.3 × 80 = 24 Sacked for a loss = 0.5 × 24 = 12 \boxed{12}'
  - ' Let’s think through this step by step n = 4 sp = 75 t = 36 spt = t / n = 36 / 4 = 9 op = sp + spt = 75 + 9 = 84 \boxed{84}'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts dev
      type: sts-dev
    metrics:
    - type: pearson_cosine
      value: 0.8793284892973376
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.876484495899188
      name: Spearman Cosine
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts test
      type: sts-test
    metrics:
    - type: pearson_cosine
      value: 0.879334132854901
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.8764936381058213
      name: Spearman Cosine
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - csv

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Load the model ("training" is the local output directory used during training;
# replace it with the repo id if the model has been pushed to the 🤗 Hub)
model = SentenceTransformer("training")
# Run inference
sentences = [
    '\nLet’s think through this step by step\nn = 4\nsp = 75\nt = 36\nspt = 36 / 4 = 9\nop = 75 + 9 = 84\n\n\\boxed{84}',
    '\nLet’s think through this step by step\nn = 4\nsp = 75\nt = 36\nspt = t / n = 36 / 4 = 9\nop = sp + spt = 75 + 9 = 84\n\n\\boxed{84}',
    "\nLet’s think through this step by step\nLet B be Benedict's house size\nK = 10000 sq ft\nK = 4B + 600\n10000 = 4B + 600\n4B = 9400\nB = 2350 sq ft\n\n\\boxed{2350}",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
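The model description above also lists semantic search among the supported uses. As a quick illustration, the same embeddings can rank a small corpus against a query with `util.semantic_search`; this is a minimal sketch, and the corpus and query strings are only illustrative (borrowed from the dataset samples further down this card):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("training")  # same local path as above

# A small illustrative corpus of reasoning traces; any list of strings works
corpus = [
    "Let’s think through this step by step\nup = 3 flights\ndown = 3 flights\ntotal = (3 + 3) × 5 = 30 flights\n\n\\boxed{30}",
    "Let’s think through this step by step\nd1 = 125 miles\nd2 = 223 miles\nd3 = 493 - (125 + 223)\nd3 = 145 miles\n\n\\boxed{145}",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "Let’s think through this step by step\ntotal = (3 + 3) × 5 = 30 flights\n\n\\boxed{30}"
query_embedding = model.encode(query, convert_to_tensor=True)

# Returns one ranked list of {'corpus_id', 'score'} dicts per query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(hit["corpus_id"], round(hit["score"], 4))
```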
## Evaluation

### Metrics

#### Semantic Similarity

* Datasets: `sts-dev` and `sts-test`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | sts-dev    | sts-test   |
|:--------------------|:-----------|:-----------|
| pearson_cosine      | 0.8793     | 0.8793     |
| **spearman_cosine** | **0.8765** | **0.8765** |

## Training Details

### Training Dataset

#### csv

* Dataset: csv
* Size: 4,984 training samples
* Columns: anchor and positive
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details |        |          |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | <code>Let’s think through this step by step<br>ht_hare = 14 inches<br>ht_camel = 24 × 14 = 336 inches<br>ht_camel_ft = 336 / 12 = 28 feet<br><br>\boxed{28}</code> | <code>Let’s think through this step by step<br>ht_hare = 14 inches<br>ht_camel = 24 * 14 = 336 inches<br>ht_camel_ft = 336 / 12 = 28 feet<br><br>\boxed{28}</code> |
  | <code>Let’s think through this step by step<br>cpb = 5<br>sp = 90<br>pmpb = 2<br>tp = 5 × 5 = 25<br>tpmp = 5 × 2 = 10<br>profit = 90 - 25 - 10 = 55<br><br>\boxed{55}</code> | <code>Let’s think through this step by step<br>cp = 5 × 5 = 25<br>sp = 90<br>p = sp - cp - (5 × 2) = 90 - 25 - 10 = 55<br><br>\boxed{55}</code> |
  | <code>Let’s think through this step by step<br>Time to iron a blouse = 15 minutes<br>Time to iron a dress = 20 minutes<br>Time spent on blouses = 2 hours = 120 minutes<br>Time spent on dresses = 3 hours = 180 minutes<br>Number of blouses ironed = 120 / 15 = 8<br>Number of dresses ironed = 180 / 20 = 9<br>Total pieces of clothes ironed = 8 + 9 = 17<br><br>\boxed{17}</code> | <code>Let’s think through this step by step<br>Time to iron 1 blouse = 15 minutes<br>Time to iron 1 dress = 20 minutes<br>Time spent on blouses = 2 hours = 120 minutes<br>Time spent on dresses = 3 hours = 180 minutes<br>Number of blouses ironed = 120 / 15 = 8<br>Number of dresses ironed = 180 / 20 = 9<br>Total pieces of clothes ironed = 8 + 9 = 17<br><br>\boxed{17}</code> |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
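For intuition, MultipleNegativesRankingLoss treats the other positives in a batch as negatives for each anchor. With cosine similarity `s`, scale `α = 20`, and batch size `B`, the per-anchor objective is the usual in-batch cross-entropy (a sketch of the loss's standard formulation, not code from this repository):

```latex
\mathcal{L}_i = -\log \frac{\exp\bigl(\alpha \, s(a_i, p_i)\bigr)}{\sum_{j=1}^{B} \exp\bigl(\alpha \, s(a_i, p_j)\bigr)}
```

Larger batches therefore supply more negatives per anchor, and the `no_duplicates` batch sampler listed under the training hyperparameters keeps duplicate positives out of a batch, where they would otherwise act as false negatives.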
### Evaluation Dataset

#### csv

* Dataset: csv
* Size: 4,984 evaluation samples
* Columns: anchor and positive
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details |        |          |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | <code>Let’s think through this step by step<br>sg = 36<br>sc = 1/4<br>fl = 1/3<br>sg_left = sg - (sg × sc) - (sg × (1 - sc) × fl)<br>sg_left = 36 - (36 × 1/4) - (36 × (1 - 1/4) × 1/3)<br>sg_left = 36 - 9 - 6<br>sg_left = 21<br><br>\boxed{21}</code> | <code>Let’s think through this step by step<br>sg = 36<br>sc = 1/4<br>fl = 1/3<br>sg_left = sg - (sg × sc) - (sg × (1 - sc) × fl)<br>sg_left = 36 - (36 × 1/4) - (36 × (1 - 1/4) × 1/3)<br>sg_left = 36 - 9 - 6<br>sg_left = 21<br><br>\boxed{21}</code> |
  | <code>Let’s think through this step by step<br>d1 = 125 miles<br>d2 = 223 miles<br>d3 = 493 - (125 + 223)<br>d3 = 145 miles<br><br>\boxed{145}</code> | <code>Let’s think through this step by step<br>d1 = 125 miles<br>d2 = 223 miles<br>d3 = 493 - (125 + 223)<br>d3 = 145 miles<br><br>\boxed{145}</code> |
  | <code>Let’s think through this step by step<br>Total workdays = 2 weeks × 5 days/week = 10 days<br>Paid vacation days = 6 days<br>Unpaid vacation days = 10 - 6 = 4 days<br>Total pay = 15 × 8 = $120/day<br>Missed pay = 4 × 120 = $480<br><br>\boxed{480}</code> | <code>Let’s think through this step by step<br>Total workdays = 2 weeks × 5 days/week = 10 days<br>Paid vacation days = 6 days<br>Unpaid vacation days = 10 - 6 = 4 days<br>Total pay = 15 × 8 = $120/day<br>Missed pay = 4 × 120 = $480<br><br>\boxed{480}</code> |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 20
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
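A comparable run could be set up roughly as follows. This is a sketch, not the original training script: the `train.csv` path and its `anchor`/`positive` columns are assumptions based on the dataset summary above, and `output_dir="training"` matches the load path shown in the Usage section.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Assumed: a CSV with "anchor" and "positive" columns, as in the dataset summary
dataset = load_dataset("csv", data_files="train.csv")["train"]

# scale=20.0 and cosine similarity match the loss parameters reported above
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="training",
    eval_strategy="steps",
    eval_steps=100,        # the training log below reports every 100 steps
    logging_steps=100,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=20,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # keeps duplicate positives out of a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    eval_dataset=dataset,  # the card lists the same 4,984 rows for train and eval
    loss=loss,
)
trainer.train()
```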
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
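The `sts-dev_spearman_cosine` values in the training log below come from the EmbeddingSimilarityEvaluator mentioned under Evaluation. Given the `sts-dev`/`sts-test` dataset names, the evaluation likely used the STS benchmark; a minimal sketch, where the `sentence-transformers/stsb` dataset id and its column names are assumptions rather than something this card records:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("training")

# Assumption: "sts dev" refers to the STS benchmark validation split,
# e.g. sentence-transformers/stsb (gold scores already scaled to [0, 1])
stsb = load_dataset("sentence-transformers/stsb", split="validation")

dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb["sentence1"],
    sentences2=stsb["sentence2"],
    scores=stsb["score"],
    name="sts-dev",
)
results = dev_evaluator(model)
print(results)  # e.g. {'sts-dev_pearson_cosine': ..., 'sts-dev_spearman_cosine': ...}
```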
### Training Logs

| Epoch   | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:-------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| -1      | -1   | -             | -               | 0.8671                  | -                        |
| 0.3953  | 100  | 0.0422        | 0.0031          | 0.8701                  | -                        |
| 0.7905  | 200  | 0.0105        | 0.0017          | 0.8727                  | -                        |
| 1.1858  | 300  | 0.0041        | 0.0016          | 0.8728                  | -                        |
| 1.5810  | 400  | 0.0016        | 0.0011          | 0.8730                  | -                        |
| 1.9763  | 500  | 0.0039        | 0.0021          | 0.8731                  | -                        |
| 2.3715  | 600  | 0.0014        | 0.0020          | 0.8741                  | -                        |
| 2.7668  | 700  | 0.0014        | 0.0017          | 0.8744                  | -                        |
| 3.1621  | 800  | 0.0019        | 0.0009          | 0.8742                  | -                        |
| 3.5573  | 900  | 0.0012        | 0.0011          | 0.8754                  | -                        |
| 3.9526  | 1000 | 0.0016        | 0.0015          | 0.8760                  | -                        |
| 4.3478  | 1100 | 0.0021        | 0.0011          | 0.8763                  | -                        |
| 4.7431  | 1200 | 0.0006        | 0.0009          | 0.8753                  | -                        |
| 5.1383  | 1300 | 0.0004        | 0.0009          | 0.8753                  | -                        |
| 5.5336  | 1400 | 0.0008        | 0.0008          | 0.8751                  | -                        |
| 5.9289  | 1500 | 0.0004        | 0.0004          | 0.8743                  | -                        |
| 6.3241  | 1600 | 0.0009        | 0.0008          | 0.8758                  | -                        |
| 6.7194  | 1700 | 0.0005        | 0.0009          | 0.8747                  | -                        |
| 7.1146  | 1800 | 0.0004        | 0.0006          | 0.8742                  | -                        |
| 7.5099  | 1900 | 0.0003        | 0.0010          | 0.8748                  | -                        |
| 7.9051  | 2000 | 0.0006        | 0.0008          | 0.8742                  | -                        |
| 8.3004  | 2100 | 0.0005        | 0.0007          | 0.8744                  | -                        |
| 8.6957  | 2200 | 0.0003        | 0.0006          | 0.8748                  | -                        |
| 9.0909  | 2300 | 0.0005        | 0.0012          | 0.8749                  | -                        |
| 9.4862  | 2400 | 0.0007        | 0.0006          | 0.8762                  | -                        |
| 9.8814  | 2500 | 0.0003        | 0.0009          | 0.8762                  | -                        |
| 10.2767 | 2600 | 0.0004        | 0.0007          | 0.8759                  | -                        |
| 10.6719 | 2700 | 0.0005        | 0.0005          | 0.8760                  | -                        |
| 11.0672 | 2800 | 0.0005        | 0.0007          | 0.8754                  | -                        |
| 11.4625 | 2900 | 0.0002        | 0.0008          | 0.8749                  | -                        |
| 11.8577 | 3000 | 0.0002        | 0.0007          | 0.8749                  | -                        |
| 12.2530 | 3100 | 0.0003        | 0.0007          | 0.8752                  | -                        |
| 12.6482 | 3200 | 0.0004        | 0.0008          | 0.8760                  | -                        |
| 13.0435 | 3300 | 0.0002        | 0.0008          | 0.8767                  | -                        |
| 13.4387 | 3400 | 0.0002        | 0.0007          | 0.8763                  | -                        |
| 13.8340 | 3500 | 0.0002        | 0.0007          | 0.8763                  | -                        |
| 14.2292 | 3600 | 0.0001        | 0.0007          | 0.8764                  | -                        |
| 14.6245 | 3700 | 0.0003        | 0.0006          | 0.8765                  | -                        |
| 15.0198 | 3800 | 0.0002        | 0.0005          | 0.8757                  | -                        |
| 15.4150 | 3900 | 0.0002        | 0.0004          | 0.8760                  | -                        |
| 15.8103 | 4000 | 0.0002        | 0.0005          | 0.8765                  | -                        |
| 16.2055 | 4100 | 0.0002        | 0.0005          | 0.8757                  | -                        |
| 16.6008 | 4200 | 0.0002        | 0.0006          | 0.8758                  | -                        |
| 16.9960 | 4300 | 0.0002        | 0.0006          | 0.8758                  | -                        |
| 17.3913 | 4400 | 0.0001        | 0.0005          | 0.8761                  | -                        |
| 17.7866 | 4500 | 0.0002        | 0.0005          | 0.8765                  | -                        |
| 18.1818 | 4600 | 0.0001        | 0.0005          | 0.8767                  | -                        |
| 18.5771 | 4700 | 0.0004        | 0.0004          | 0.8765                  | -                        |
| 18.9723 | 4800 | 0.0002        | 0.0004          | 0.8765                  | -                        |
| 19.3676 | 4900 | 0.0001        | 0.0004          | 0.8765                  | -                        |
| 19.7628 | 5000 | 0.0001        | 0.0004          | 0.8765                  | -                        |
| -1      | -1   | -             | -               | -                       | 0.8765                   |

### Framework Versions

- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.51.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.5.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```