---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:227518
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: UTU
  sentences:
  - < HOSIER, person who sells stockings, etc [n]
  - act of speaking foolishly [n]
  - reward [n]
- source_sentence: PROEMS
  sentences:
  - < PROEM, introduction or preface [n]
  - edge of a sea or lake [n] / prop or support [v]
  - wad (black earthy ore of manganese) [n]
- source_sentence: INSTITUTORS
  sentences:
  - < INSTITUTOR, one who institutes [n]
  - assembly of judges [n]
  - < FATE, power supposed to predetermine events [n]
- source_sentence: HAEMAGOGUES
  sentences:
  - < VIVISECTORIUM, a place for vivisection [n]
  - < GROTESQUE, strangely distorted [adj]
  - < HAEMAGOGUE, a drug that promotes the flow of blood [n]
- source_sentence: BOLDING
  sentences:
  - < NAUCH, nautch (intricate traditional Indian dance) [n]
  - < TABU, taboo (prohibition resulting from religious or social conventions) [n]
  - < BOLD, confident and fearless [adj]
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dictionary test
      type: dictionary-test
    metrics:
    - type: cosine_accuracy@1
      value: 0.5970332278481013
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7252768987341772
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7495648734177215
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.7743275316455697
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.5970332278481013
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2417589662447257
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.14991297468354428
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.07743275316455696
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.5970332278481013
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7252768987341772
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7495648734177215
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.7743275316455697
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6919377177591847
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6648749560478296
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6677242431561833
      name: Cosine Map@100
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the csv dataset (227,518 word/definition pairs). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - csv
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
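
Concretely, the three modules run the BERT encoder, mean-pool the token embeddings over non-padding positions, and L2-normalize the result. Below is a rough `transformers`-level sketch of the same computation; loading the encoder weights from this repo with `AutoModel` is an assumption about the repo layout, not something the card states.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "Mehularora/scrabble-embed-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

# (0) Transformer: contextual token embeddings, truncated to 256 tokens
batch = tokenizer(["BOLDING"], padding=True, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: mean over non-padding tokens only
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit-length vectors, so dot product equals cosine similarity
embedding = F.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 384])
```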

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mehularora/scrabble-embed-v1")
# Run inference
sentences = [
    'BOLDING',
    '< BOLD, confident and fearless [adj]',
    '< NAUCH, nautch (intricate traditional Indian dance) [n]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7391, 0.0112],
#         [0.7391, 1.0000, 0.0722],
#         [0.0112, 0.0722, 1.0000]])
```
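
Because the training pairs map dictionary words to their definitions, the model also suits small-scale word-to-definition retrieval: embed the definitions once, then search them per query word. A minimal sketch using `sentence_transformers.util.semantic_search` (the three-definition corpus is just the widget examples above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Mehularora/scrabble-embed-v1")

# Embed the definition corpus once; reuse it for every query.
corpus = [
    '< BOLD, confident and fearless [adj]',
    '< PROEM, introduction or preface [n]',
    '< HAEMAGOGUE, a drug that promotes the flow of blood [n]',
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Find the definitions closest to a query word.
query_embedding = model.encode('BOLDING', convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")
```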

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Dataset: `dictionary-test`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.597      |
| cosine_accuracy@3   | 0.7253     |
| cosine_accuracy@5   | 0.7496     |
| cosine_accuracy@10  | 0.7743     |
| cosine_precision@1  | 0.597      |
| cosine_precision@3  | 0.2418     |
| cosine_precision@5  | 0.1499     |
| cosine_precision@10 | 0.0774     |
| cosine_recall@1     | 0.597      |
| cosine_recall@3     | 0.7253     |
| cosine_recall@5     | 0.7496     |
| cosine_recall@10    | 0.7743     |
| **cosine_ndcg@10**  | **0.6919** |
| cosine_mrr@10       | 0.6649     |
| cosine_map@100      | 0.6677     |
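
To run the same evaluation on your own held-out split, you can build the evaluator directly. A minimal sketch, assuming a list of `(word, definition)` pairs in which each word has exactly one relevant definition (the two pairs below are placeholders, not the actual test set):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Mehularora/scrabble-embed-v1")

# Held-out (word, definition) pairs; each word has one relevant definition.
pairs = [
    ('BOLDING', '< BOLD, confident and fearless [adj]'),
    ('PROEMS', '< PROEM, introduction or preface [n]'),
]
queries = {str(i): word for i, (word, _) in enumerate(pairs)}
corpus = {str(i): definition for i, (_, definition) in enumerate(pairs)}
relevant_docs = {qid: {qid} for qid in queries}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dictionary-test",
)
results = evaluator(model)
print(results["dictionary-test_cosine_ndcg@10"])
```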

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### csv

* Dataset: csv
* Size: 227,518 training samples
* Columns: <code>word</code> and <code>definition</code>
* Approximate statistics based on the first 1000 samples:
  |         | word                                                                            | definition                                                                         |
  |:--------|:--------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string                                                                          | string                                                                             |
  | details | <ul><li>min: 3 tokens</li><li>mean: 4.9 tokens</li><li>max: 9 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.82 tokens</li><li>max: 44 tokens</li></ul> |
* Samples:
  | word                     | definition                                              |
  |:-------------------------|:--------------------------------------------------------|
  | <code>SLURPIEST</code>   | <code>< SLURPY, making a slurping noise [adj]</code>    |
  | <code>CRISPNESSES</code> | <code>< CRISPNESS, < CRISP, fresh and firm [adj]</code> |
  | <code>CECUTIENCY</code>  | <code>a tendency to blindness [n]</code>                |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          384,
          256
      ],
      "matryoshka_weights": [
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
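
Because `MatryoshkaLoss` trains the 384- and 256-dimensional prefixes of each embedding jointly, the first 256 dimensions are usable on their own. A minimal sketch of loading the model with truncated embeddings via the `truncate_dim` argument, trading a little retrieval quality for smaller vectors:

```python
from sentence_transformers import SentenceTransformer

# encode() now returns only the first 256 dimensions of each embedding.
model = SentenceTransformer("Mehularora/scrabble-embed-v1", truncate_dim=256)

embeddings = model.encode(['BOLDING', '< BOLD, confident and fearless [adj]'])
print(embeddings.shape)
# (2, 256)

# Cosine similarity still works on the truncated vectors.
print(model.similarity(embeddings, embeddings))
```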

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `fp16`: True
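
Put together, an equivalent training run with these non-default values might look as follows. This is a sketch, not the exact training script: the CSV path and output directory are placeholders, and `eval_strategy: steps` is left out because it additionally requires an evaluation dataset or evaluator.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# CSV with the two columns described above: word, definition.
train_dataset = load_dataset("csv", data_files="dictionary.csv", split="train")

# In-batch negatives, wrapped so that both the full 384 dimensions and the
# first 256 dimensions learn to rank word-definition pairs correctly.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[384, 256],
)

args = SentenceTransformerTrainingArguments(
    output_dir="scrabble-embed-v1",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    learning_rate=2e-5,
    fp16=True,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```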

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch  | Step | Training Loss | dictionary-test_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:------------------------------:|
| 0.0281 | 100  | 1.5353        | 0.6306                         |
| 0.0563 | 200  | 1.2836        | 0.6543                         |
| 0.0844 | 300  | 1.2305        | 0.6637                         |
| 0.1125 | 400  | 1.1669        | 0.6651                         |
| 0.1406 | 500  | 1.1904        | 0.6714                         |
| 0.1688 | 600  | 1.0998        | 0.6738                         |
| 0.1969 | 700  | 1.0655        | 0.6751                         |
| 0.2250 | 800  | 1.095         | 0.6781                         |
| 0.2532 | 900  | 1.1535        | 0.6813                         |
| 0.2813 | 1000 | 1.0047        | 0.6814                         |
| 0.3094 | 1100 | 1.0749        | 0.6809                         |
| 0.3376 | 1200 | 1.0642        | 0.6813                         |
| 0.3657 | 1300 | 1.0718        | 0.6851                         |
| 0.3938 | 1400 | 1.023         | 0.6854                         |
| 0.4219 | 1500 | 1.0429        | 0.6850                         |
| 0.4501 | 1600 | 1.0088        | 0.6849                         |
| 0.4782 | 1700 | 1.0129        | 0.6873                         |
| 0.5063 | 1800 | 0.988         | 0.6874                         |
| 0.5345 | 1900 | 1.0413        | 0.6882                         |
| 0.5626 | 2000 | 1.0043        | 0.6885                         |
| 0.5907 | 2100 | 0.9929        | 0.6886                         |
| 0.6188 | 2200 | 0.9403        | 0.6899                         |
| 0.6470 | 2300 | 0.9789        | 0.6907                         |
| 0.6751 | 2400 | 0.9595        | 0.6912                         |
| 0.7032 | 2500 | 0.9786        | 0.6914                         |
| 0.7314 | 2600 | 0.9647        | 0.6911                         |
| 0.7595 | 2700 | 0.9245        | 0.6897                         |
| 0.7876 | 2800 | 0.9685        | 0.6906                         |
| 0.8158 | 2900 | 0.9778        | 0.6896                         |
| 0.8439 | 3000 | 0.939         | 0.6906                         |
| 0.8720 | 3100 | 0.9822        | 0.6904                         |
| 0.9001 | 3200 | 1.0038        | 0.6913                         |
| 0.9283 | 3300 | 0.9297        | 0.6910                         |
| 0.9564 | 3400 | 0.9215        | 0.6915                         |
| 0.9845 | 3500 | 0.948         | 0.6919                         |

### Framework Versions
- Python: 3.11.4
- Sentence Transformers: 5.1.2
- Transformers: 4.57.3
- PyTorch: 2.9.1+cpu
- Accelerate: 1.12.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->