---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:705905
- loss:MultipleNegativesSymmetricRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: gerber baby food fruits apples bananas & cereal
  sentences:
  - world of sweets puzzle
  - baby food
  - baby food
- source_sentence: granville original one bite original rice crispy squares
  sentences:
  - ' one bite rice crispy '
  - sweet
  - bounty wafer rolls
- source_sentence: rosa / porcelain us andalusia mug
  sentences:
  - mug
  - ' rosa mug'
  - melamine small plate - teal
- source_sentence: cetaphil sunscreen spf 50+ cream 89 ml
  sentences:
  - sunscreen
  - ' cetaphil sunscreen cream'
  - garnier intensity (6.60) intense ruby
- source_sentence: italian dolce provolone
  sentences:
  - trident - gum strawberry flavor - 5 per pack
  - experience the authentic taste of italy with our italian dolce provolone. indulge
    in its creamy texture, delicate flavors, and versatility in both simple and sophisticated
    culinary creations.
  - dairy
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy
      value: 0.9646650552749634
      name: Cosine Accuracy
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
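The three modules above correspond to a contextual encoder, attention-mask-aware mean pooling, and L2 normalization. For intuition, here is a minimal sketch of the same pipeline in plain `transformers` code; it assumes this repo's transformer weights load with `AutoModel`, and the two sentences are purely illustrative:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "LamaDiab/MiniLM-V18Data-256ConstantBATCH-SemanticEngine"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["gerber baby food fruits apples bananas & cereal", "baby food"]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: average only over real tokens, using the attention mask
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: unit-length vectors, so a dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 384])
```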
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("LamaDiab/MiniLM-V18Data-256ConstantBATCH-SemanticEngine")
# Run inference
sentences = [
    'italian dolce provolone',
    'experience the authentic taste of italy with our italian dolce provolone. indulge in its creamy texture, delicate flavors, and versatility in both simple and sophisticated culinary creations.',
    'trident - gum strawberry flavor - 5 per pack',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8581, 0.2671],
#         [0.8581, 1.0000, 0.2847],
#         [0.2671, 0.2847, 1.0000]])
```

## Evaluation

### Metrics

#### Triplet

* Evaluated with [TripletEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9647** |

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 705,905 training samples
* Columns: `anchor`, `positive`, and `itemCategory`
* Approximate statistics based on the first 1000 samples:

  |      | anchor | positive | itemCategory |
  |:-----|:-------|:---------|:-------------|
  | type | string | string   | string       |

* Samples:

  | anchor                              | positive                      | itemCategory        |
  |:------------------------------------|:------------------------------|:--------------------|
  | `mango nos nos small`               | `milk chocolate ganache cake` | `sweet`             |
  | `lux soap creamy perfection 165 gm` | `soap`                        | `hand soap`         |
  | `grey deo original`                 | `classic deodrant`            | `women's deodorant` |

* Loss: [MultipleNegativesSymmetricRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```

### Evaluation Dataset

#### Unnamed Dataset

* Size: 9,509 evaluation samples
* Columns: `anchor`, `positive`, `negative`, and `itemCategory`
* Approximate statistics based on the first 1000 samples:

  |      | anchor | positive | negative | itemCategory |
  |:-----|:-------|:---------|:---------|:-------------|
  | type | string | string   | string   | string       |

* Samples:

  | anchor | positive | negative | itemCategory |
  |:-------|:---------|:---------|:-------------|
  | `pilot mechanical pencil progrex h-127 - 0.7 mm` | `office supplies` | `scary halloween skull mask` | `pencil` |
  | `superior drawing marker -pen - set of 12 colors - 2 nib` | `superior` | `coloring and writing book 21 x 29.7 cm 100 gsm 18 pages number subtraction ma4014` | `marker` |
  | `first person singular author: haruki murakami` | `haruki murakami book` | `buried secrets` | `literature and fiction` |

* Loss: [MultipleNegativesSymmetricRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
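Concretely, this loss treats the other in-batch positives as negatives for each anchor, and the symmetric variant also scores the positive-to-anchor direction; triplet accuracy then measures how often sim(anchor, positive) > sim(anchor, negative). A hedged sketch of how these pieces could be constructed (the triplet is copied from the samples above; variable names are illustrative):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator
from sentence_transformers.losses import MultipleNegativesSymmetricRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Symmetric in-batch ranking loss: (anchor, positive) pairs suffice, with the
# remaining positives in each batch serving as negatives in both directions.
loss = MultipleNegativesSymmetricRankingLoss(model, scale=20.0)

# One evaluation triplet, taken from the samples table above
eval_dataset = Dataset.from_dict({
    "anchor": ["pilot mechanical pencil progrex h-127 - 0.7 mm"],
    "positive": ["office supplies"],
    "negative": ["scary halloween skull mask"],
})
evaluator = TripletEvaluator(
    anchors=eval_dataset["anchor"],
    positives=eval_dataset["positive"],
    negatives=eval_dataset["negative"],
)
print(evaluator(model))  # {'cosine_accuracy': ...}
```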
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 6
- `warmup_ratio`: 0.2
- `fp16`: True
- `dataloader_num_workers`: 1
- `dataloader_prefetch_factor`: 2
- `dataloader_persistent_workers`: True
- `push_to_hub`: True
- `hub_model_id`: LamaDiab/MiniLM-V18Data-256ConstantBATCH-SemanticEngine
- `hub_strategy`: all_checkpoints
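These values map one-to-one onto `SentenceTransformerTrainingArguments`; a minimal sketch, where only `output_dir` is an assumption chosen for illustration:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="MiniLM-V18Data-256ConstantBATCH-SemanticEngine",  # assumed
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=1e-5,
    weight_decay=0.01,
    num_train_epochs=6,
    warmup_ratio=0.2,                # first 20% of steps ramp the LR up linearly
    fp16=True,
    dataloader_num_workers=1,
    dataloader_prefetch_factor=2,
    dataloader_persistent_workers=True,
    push_to_hub=True,
    hub_model_id="LamaDiab/MiniLM-V18Data-256ConstantBATCH-SemanticEngine",
    hub_strategy="all_checkpoints",  # push every saved checkpoint to the Hub
)
```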
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 6
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 1
- `dataloader_prefetch_factor`: 2
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: True
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: LamaDiab/MiniLM-V18Data-256ConstantBATCH-SemanticEngine
- `hub_strategy`: all_checkpoints
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
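Putting it together: assuming the `model`, `loss`, `evaluator`, and `args` from the sketches above, plus `train_dataset`/`eval_dataset` as `datasets.Dataset` objects with the columns listed under Training Details, a run would look roughly like this (how the extra `itemCategory` column is consumed by the loss is not documented here, so treat this as a sketch rather than the exact recipe):

```python
from sentence_transformers import SentenceTransformerTrainer

trainer = SentenceTransformerTrainer(
    model=model,                  # base all-MiniLM-L6-v2 to be finetuned
    args=args,                    # hyperparameters sketched above
    train_dataset=train_dataset,  # columns: anchor, positive, itemCategory
    eval_dataset=eval_dataset,    # columns: anchor, positive, negative, itemCategory
    loss=loss,                    # MultipleNegativesSymmetricRankingLoss
    evaluator=evaluator,          # TripletEvaluator reporting cosine_accuracy
)
trainer.train()
trainer.push_to_hub()             # honors hub_model_id / hub_strategy from args
```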
### Training Logs

| Epoch  | Step  | Training Loss | Validation Loss | cosine_accuracy |
|:------:|:-----:|:-------------:|:---------------:|:---------------:|
| 0.0004 | 1     | 4.1707        | -               | -               |
| 0.3626 | 1000  | 3.7074        | 0.5848          | 0.9430          |
| 0.7252 | 2000  | 2.5733        | 0.5230          | 0.9468          |
| 1.0877 | 3000  | 2.1499        | 0.4858          | 0.9546          |
| 1.4503 | 4000  | 2.3929        | 0.4693          | 0.9578          |
| 1.8129 | 5000  | 1.6541        | 0.4415          | 0.9597          |
| 2.1755 | 6000  | 1.8335        | 0.4474          | 0.9615          |
| 2.5381 | 7000  | 1.8390        | 0.4331          | 0.9625          |
| 2.9007 | 8000  | 1.3238        | 0.4197          | 0.9624          |
| 3.2632 | 9000  | 1.8409        | 0.4281          | 0.9647          |
| 3.6258 | 10000 | 1.5110        | 0.4207          | 0.9653          |
| 3.9884 | 11000 | 1.1623        | 0.4108          | 0.9647          |

### Framework Versions

- Python: 3.11.13
- Sentence Transformers: 5.1.2
- Transformers: 4.53.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.4.1
- Tokenizers: 0.21.2

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```