| | --- |
| | base_model: intfloat/multilingual-e5-small |
| | datasets: [] |
| | language: [] |
| | library_name: sentence-transformers |
| | metrics: |
| | - cosine_accuracy |
| | - cosine_accuracy_threshold |
| | - cosine_f1 |
| | - cosine_f1_threshold |
| | - cosine_precision |
| | - cosine_recall |
| | - cosine_ap |
| | - dot_accuracy |
| | - dot_accuracy_threshold |
| | - dot_f1 |
| | - dot_f1_threshold |
| | - dot_precision |
| | - dot_recall |
| | - dot_ap |
| | - manhattan_accuracy |
| | - manhattan_accuracy_threshold |
| | - manhattan_f1 |
| | - manhattan_f1_threshold |
| | - manhattan_precision |
| | - manhattan_recall |
| | - manhattan_ap |
| | - euclidean_accuracy |
| | - euclidean_accuracy_threshold |
| | - euclidean_f1 |
| | - euclidean_f1_threshold |
| | - euclidean_precision |
| | - euclidean_recall |
| | - euclidean_ap |
| | - max_accuracy |
| | - max_accuracy_threshold |
| | - max_f1 |
| | - max_f1_threshold |
| | - max_precision |
| | - max_recall |
| | - max_ap |
| | pipeline_tag: sentence-similarity |
| | tags: |
| | - sentence-transformers |
| | - sentence-similarity |
| | - feature-extraction |
| | - generated_from_trainer |
| | - dataset_size:916 |
| | - loss:OnlineContrastiveLoss |
| | widget: |
| | - source_sentence: Ways to enhance memory retention |
| | sentences: |
| | - How to improve memory? |
| | - What is the currency of China? |
| | - How do I replace a flat tire? |
| | - source_sentence: Why it's essential to maintain a balanced diet |
| | sentences: |
| | - What is the importance of a balanced diet? |
| | - What is the population of Canada? |
| | - How to create a website from scratch? |
| | - source_sentence: What is the chemical formula for methanol? |
| | sentences: |
| | - What is the highest mountain in North America? |
| | - What are the advantages of electric cars over gasoline vehicles? |
| | - What is the chemical formula for ethanol? |
| | - source_sentence: How is photosynthesis carried out? |
| | sentences: |
| | - What is the currency of the United States? |
| | - What is the capital of Norway? |
| | - How does photosynthesis work? |
| | - source_sentence: How is the weather today? |
| | sentences: |
| | - Who invented the airplane? |
| | - What is the weather like today? |
| | - Who was the first female Prime Minister of the UK? |
| | model-index: |
| | - name: SentenceTransformer based on intfloat/multilingual-e5-small |
| | results: |
| | - task: |
| | type: binary-classification |
| | name: Binary Classification |
| | dataset: |
| | name: pair class dev |
| | type: pair-class-dev |
| | metrics: |
| | - type: cosine_accuracy |
| | value: 0.9388646288209607 |
| | name: Cosine Accuracy |
| | - type: cosine_accuracy_threshold |
| | value: 0.7886800765991211 |
| | name: Cosine Accuracy Threshold |
| | - type: cosine_f1 |
| | value: 0.9411764705882353 |
| | name: Cosine F1 |
| | - type: cosine_f1_threshold |
| | value: 0.7886800765991211 |
| | name: Cosine F1 Threshold |
| | - type: cosine_precision |
| | value: 0.9572649572649573 |
| | name: Cosine Precision |
| | - type: cosine_recall |
| | value: 0.9256198347107438 |
| | name: Cosine Recall |
| | - type: cosine_ap |
| | value: 0.973954773457217 |
| | name: Cosine Ap |
| | - type: dot_accuracy |
| | value: 0.9388646288209607 |
| | name: Dot Accuracy |
| | - type: dot_accuracy_threshold |
| | value: 0.7886800765991211 |
| | name: Dot Accuracy Threshold |
| | - type: dot_f1 |
| | value: 0.9411764705882353 |
| | name: Dot F1 |
| | - type: dot_f1_threshold |
| | value: 0.7886800765991211 |
| | name: Dot F1 Threshold |
| | - type: dot_precision |
| | value: 0.9572649572649573 |
| | name: Dot Precision |
| | - type: dot_recall |
| | value: 0.9256198347107438 |
| | name: Dot Recall |
| | - type: dot_ap |
| | value: 0.973954773457217 |
| | name: Dot Ap |
| | - type: manhattan_accuracy |
| | value: 0.9388646288209607 |
| | name: Manhattan Accuracy |
| | - type: manhattan_accuracy_threshold |
| | value: 10.132380485534668 |
| | name: Manhattan Accuracy Threshold |
| | - type: manhattan_f1 |
| | value: 0.9411764705882353 |
| | name: Manhattan F1 |
| | - type: manhattan_f1_threshold |
| | value: 10.132380485534668 |
| | name: Manhattan F1 Threshold |
| | - type: manhattan_precision |
| | value: 0.9572649572649573 |
| | name: Manhattan Precision |
| | - type: manhattan_recall |
| | value: 0.9256198347107438 |
| | name: Manhattan Recall |
| | - type: manhattan_ap |
| | value: 0.9728889947842537 |
| | name: Manhattan Ap |
| | - type: euclidean_accuracy |
| | value: 0.9388646288209607 |
| | name: Euclidean Accuracy |
| | - type: euclidean_accuracy_threshold |
| | value: 0.6500871777534485 |
| | name: Euclidean Accuracy Threshold |
| | - type: euclidean_f1 |
| | value: 0.9411764705882353 |
| | name: Euclidean F1 |
| | - type: euclidean_f1_threshold |
| | value: 0.6500871777534485 |
| | name: Euclidean F1 Threshold |
| | - type: euclidean_precision |
| | value: 0.9572649572649573 |
| | name: Euclidean Precision |
| | - type: euclidean_recall |
| | value: 0.9256198347107438 |
| | name: Euclidean Recall |
| | - type: euclidean_ap |
| | value: 0.973954773457217 |
| | name: Euclidean Ap |
| | - type: max_accuracy |
| | value: 0.9388646288209607 |
| | name: Max Accuracy |
| | - type: max_accuracy_threshold |
| | value: 10.132380485534668 |
| | name: Max Accuracy Threshold |
| | - type: max_f1 |
| | value: 0.9411764705882353 |
| | name: Max F1 |
| | - type: max_f1_threshold |
| | value: 10.132380485534668 |
| | name: Max F1 Threshold |
| | - type: max_precision |
| | value: 0.9572649572649573 |
| | name: Max Precision |
| | - type: max_recall |
| | value: 0.9256198347107438 |
| | name: Max Recall |
| | - type: max_ap |
| | value: 0.973954773457217 |
| | name: Max Ap |
| | - task: |
| | type: binary-classification |
| | name: Binary Classification |
| | dataset: |
| | name: pair class test |
| | type: pair-class-test |
| | metrics: |
| | - type: cosine_accuracy |
| | value: 0.9388646288209607 |
| | name: Cosine Accuracy |
| | - type: cosine_accuracy_threshold |
| | value: 0.8207830190658569 |
| | name: Cosine Accuracy Threshold |
| | - type: cosine_f1 |
| | value: 0.9421487603305785 |
| | name: Cosine F1 |
| | - type: cosine_f1_threshold |
| | value: 0.8207830190658569 |
| | name: Cosine F1 Threshold |
| | - type: cosine_precision |
| | value: 0.9421487603305785 |
| | name: Cosine Precision |
| | - type: cosine_recall |
| | value: 0.9421487603305785 |
| | name: Cosine Recall |
| | - type: cosine_ap |
| | value: 0.9731728800864022 |
| | name: Cosine Ap |
| | - type: dot_accuracy |
| | value: 0.9388646288209607 |
| | name: Dot Accuracy |
| | - type: dot_accuracy_threshold |
| | value: 0.8207829594612122 |
| | name: Dot Accuracy Threshold |
| | - type: dot_f1 |
| | value: 0.9421487603305785 |
| | name: Dot F1 |
| | - type: dot_f1_threshold |
| | value: 0.8207829594612122 |
| | name: Dot F1 Threshold |
| | - type: dot_precision |
| | value: 0.9421487603305785 |
| | name: Dot Precision |
| | - type: dot_recall |
| | value: 0.9421487603305785 |
| | name: Dot Recall |
| | - type: dot_ap |
| | value: 0.9731728800864022 |
| | name: Dot Ap |
| | - type: manhattan_accuracy |
| | value: 0.9344978165938864 |
| | name: Manhattan Accuracy |
| | - type: manhattan_accuracy_threshold |
| | value: 9.387104988098145 |
| | name: Manhattan Accuracy Threshold |
| | - type: manhattan_f1 |
| | value: 0.9382716049382717 |
| | name: Manhattan F1 |
| | - type: manhattan_f1_threshold |
| | value: 9.516077041625977 |
| | name: Manhattan F1 Threshold |
| | - type: manhattan_precision |
| | value: 0.9344262295081968 |
| | name: Manhattan Precision |
| | - type: manhattan_recall |
| | value: 0.9421487603305785 |
| | name: Manhattan Recall |
| | - type: manhattan_ap |
| | value: 0.9720713665843098 |
| | name: Manhattan Ap |
| | - type: euclidean_accuracy |
| | value: 0.9388646288209607 |
| | name: Euclidean Accuracy |
| | - type: euclidean_accuracy_threshold |
| | value: 0.5986893177032471 |
| | name: Euclidean Accuracy Threshold |
| | - type: euclidean_f1 |
| | value: 0.9421487603305785 |
| | name: Euclidean F1 |
| | - type: euclidean_f1_threshold |
| | value: 0.5986893177032471 |
| | name: Euclidean F1 Threshold |
| | - type: euclidean_precision |
| | value: 0.9421487603305785 |
| | name: Euclidean Precision |
| | - type: euclidean_recall |
| | value: 0.9421487603305785 |
| | name: Euclidean Recall |
| | - type: euclidean_ap |
| | value: 0.9731728800864022 |
| | name: Euclidean Ap |
| | - type: max_accuracy |
| | value: 0.9388646288209607 |
| | name: Max Accuracy |
| | - type: max_accuracy_threshold |
| | value: 9.387104988098145 |
| | name: Max Accuracy Threshold |
| | - type: max_f1 |
| | value: 0.9421487603305785 |
| | name: Max F1 |
| | - type: max_f1_threshold |
| | value: 9.516077041625977 |
| | name: Max F1 Threshold |
| | - type: max_precision |
| | value: 0.9421487603305785 |
| | name: Max Precision |
| | - type: max_recall |
| | value: 0.9421487603305785 |
| | name: Max Recall |
| | - type: max_ap |
| | value: 0.9731728800864022 |
| | name: Max Ap |
| | --- |
| | |
| | # SentenceTransformer based on intfloat/multilingual-e5-small |
| |
|
| | This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. |
| |
|
| | ## Model Details |
| |
|
| | ### Model Description |
| | - **Model Type:** Sentence Transformer |
| | - **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 --> |
| | - **Maximum Sequence Length:** 512 tokens |
| | - **Output Dimensionality:** 384 dimensions |
| | - **Similarity Function:** Cosine Similarity |
| | <!-- - **Training Dataset:** Unknown --> |
| | <!-- - **Language:** Unknown --> |
| | <!-- - **License:** Unknown --> |
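The listed sequence length and embedding size can be checked programmatically after loading the model; a minimal sketch using the Hub repo id shown in the Usage section below:

```python
from sentence_transformers import SentenceTransformer

# Load the fine-tuned model from the Hugging Face Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_1")

# Confirm the specs listed above
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 384
```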
| |
|
| | ### Model Sources |
| |
|
| | - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
| | - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
| | - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) |
| |
|
| | ### Full Model Architecture |
| |
|
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
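The Pooling module applies mean pooling over token embeddings and the final Normalize module L2-normalizes the output (which is why the cosine and dot-product metrics reported below coincide). A rough, illustrative sketch of the same pipeline with plain `transformers` (the `SentenceTransformer` class handles all of this automatically):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("srikarvar/fine_tuned_model_1")
encoder = AutoModel.from_pretrained("srikarvar/fine_tuned_model_1")

def encode(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 384)
    # Mean pooling over non-padding tokens (matches the Pooling config above)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # L2 normalization (matches the Normalize module)
    return F.normalize(pooled, p=2, dim=1)

emb = encode(["How is the weather today?", "What is the weather like today?"])
print(emb @ emb.T)  # cosine similarities, since the embeddings are unit length
```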
| |
|
| | ## Usage |
| |
|
| | ### Direct Usage (Sentence Transformers) |
| |
|
| | First install the Sentence Transformers library: |
| |
|
```bash
pip install -U sentence-transformers
```
| |
|
| | Then you can load this model and run inference. |
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/fine_tuned_model_1")
# Run inference
sentences = [
    'How is the weather today?',
    'What is the weather like today?',
    'Who was the first female Prime Minister of the UK?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
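Beyond pairwise similarity, the same embeddings can drive a small semantic-search setup; a minimal sketch using `sentence_transformers.util.semantic_search` (the corpus and query are taken from the widget examples above, purely for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("srikarvar/fine_tuned_model_1")

corpus = [
    "How to improve memory?",
    "What is the currency of China?",
    "How does photosynthesis work?",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("Ways to enhance memory retention", convert_to_tensor=True)

# Retrieve the top-2 closest corpus entries for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])
```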
| |
|
| | <!-- |
| | ### Direct Usage (Transformers) |
| |
|
| | <details><summary>Click to see the direct usage in Transformers</summary> |
| |
|
| | </details> |
| | --> |
| |
|
| | <!-- |
| | ### Downstream Usage (Sentence Transformers) |
| |
|
| | You can finetune this model on your own dataset. |
| |
|
| | <details><summary>Click to expand</summary> |
| |
|
| | </details> |
| | --> |
| |
|
| | <!-- |
| | ### Out-of-Scope Use |
| |
|
| | *List how the model may foreseeably be misused and address what users ought not to do with the model.* |
| | --> |
| |
|
| | ## Evaluation |
| |
|
| | ### Metrics |
| |
|
| | #### Binary Classification |
| | * Dataset: `pair-class-dev` |
| | * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator) |
| |
|
| | | Metric | Value | |
| | |:-----------------------------|:----------| |
| | | cosine_accuracy | 0.9389 | |
| | | cosine_accuracy_threshold | 0.7887 | |
| | | cosine_f1 | 0.9412 | |
| | | cosine_f1_threshold | 0.7887 | |
| | | cosine_precision | 0.9573 | |
| | | cosine_recall | 0.9256 | |
| | | cosine_ap | 0.974 | |
| | | dot_accuracy | 0.9389 | |
| | | dot_accuracy_threshold | 0.7887 | |
| | | dot_f1 | 0.9412 | |
| | | dot_f1_threshold | 0.7887 | |
| | | dot_precision | 0.9573 | |
| | | dot_recall | 0.9256 | |
| | | dot_ap | 0.974 | |
| | | manhattan_accuracy | 0.9389 | |
| | | manhattan_accuracy_threshold | 10.1324 | |
| | | manhattan_f1 | 0.9412 | |
| | | manhattan_f1_threshold | 10.1324 | |
| | | manhattan_precision | 0.9573 | |
| | | manhattan_recall | 0.9256 | |
| | | manhattan_ap | 0.9729 | |
| | | euclidean_accuracy | 0.9389 | |
| | | euclidean_accuracy_threshold | 0.6501 | |
| | | euclidean_f1 | 0.9412 | |
| | | euclidean_f1_threshold | 0.6501 | |
| | | euclidean_precision | 0.9573 | |
| | | euclidean_recall | 0.9256 | |
| | | euclidean_ap | 0.974 | |
| | | max_accuracy | 0.9389 | |
| | | max_accuracy_threshold | 10.1324 | |
| | | max_f1 | 0.9412 | |
| | | max_f1_threshold | 10.1324 | |
| | | max_precision | 0.9573 | |
| | | max_recall | 0.9256 | |
| | | **max_ap** | **0.974** | |
| | |
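These metrics come from `BinaryClassificationEvaluator`, which sweeps a decision threshold over each similarity function and reports the best accuracy and F1 it finds. A rough sketch of re-running such an evaluation on your own labeled pairs (the pairs and labels below are made up for illustration):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("srikarvar/fine_tuned_model_1")

# Hypothetical evaluation pairs: 1 = paraphrase/duplicate, 0 = different question
sentences1 = ["How is photosynthesis carried out?", "What is the currency of Japan?"]
sentences2 = ["How does photosynthesis work?", "What is the currency of China?"]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="pair-class-dev")
results = evaluator(model)
print(results)  # accuracy, F1, precision, recall and AP per similarity function
```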
| | #### Binary Classification |
| | * Dataset: `pair-class-test` |
| | * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator) |
| | |
| | | Metric | Value | |
| | |:-----------------------------|:-----------| |
| | | cosine_accuracy | 0.9389 | |
| | | cosine_accuracy_threshold | 0.8208 | |
| | | cosine_f1 | 0.9421 | |
| | | cosine_f1_threshold | 0.8208 | |
| | | cosine_precision | 0.9421 | |
| | | cosine_recall | 0.9421 | |
| | | cosine_ap | 0.9732 | |
| | | dot_accuracy | 0.9389 | |
| | | dot_accuracy_threshold | 0.8208 | |
| | | dot_f1 | 0.9421 | |
| | | dot_f1_threshold | 0.8208 | |
| | | dot_precision | 0.9421 | |
| | | dot_recall | 0.9421 | |
| | | dot_ap | 0.9732 | |
| | | manhattan_accuracy | 0.9345 | |
| | | manhattan_accuracy_threshold | 9.3871 | |
| | | manhattan_f1 | 0.9383 | |
| | | manhattan_f1_threshold | 9.5161 | |
| | | manhattan_precision | 0.9344 | |
| | | manhattan_recall | 0.9421 | |
| | | manhattan_ap | 0.9721 | |
| | | euclidean_accuracy | 0.9389 | |
| | | euclidean_accuracy_threshold | 0.5987 | |
| | | euclidean_f1 | 0.9421 | |
| | | euclidean_f1_threshold | 0.5987 | |
| | | euclidean_precision | 0.9421 | |
| | | euclidean_recall | 0.9421 | |
| | | euclidean_ap | 0.9732 | |
| | | max_accuracy | 0.9389 | |
| | | max_accuracy_threshold | 9.3871 | |
| | | max_f1 | 0.9421 | |
| | | max_f1_threshold | 9.5161 | |
| | | max_precision | 0.9421 | |
| | | max_recall | 0.9421 | |
| | | **max_ap** | **0.9732** | |
| |
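In practice, the simplest way to use these numbers is to threshold the cosine similarity between two questions; for example, the reported test-set `cosine_accuracy_threshold` of roughly 0.82 can serve as a starting cut-off for deciding whether two questions are duplicates. A small illustrative sketch (treat the exact threshold as dataset-specific, not universal):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("srikarvar/fine_tuned_model_1")

THRESHOLD = 0.82  # approximate cosine_accuracy_threshold reported on pair-class-test

def is_duplicate(question_a: str, question_b: str) -> bool:
    embeddings = model.encode([question_a, question_b])
    score = model.similarity(embeddings[0:1], embeddings[1:2]).item()
    return score >= THRESHOLD

print(is_duplicate("How is the weather today?", "What is the weather like today?"))
print(is_duplicate("How is the weather today?", "Who invented the airplane?"))
```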
|
| | <!-- |
| | ## Bias, Risks and Limitations |
| |
|
| | *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
| | --> |
| |
|
| | <!-- |
| | ### Recommendations |
| |
|
| | *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
| | --> |
| |
|
| | ## Training Details |
| |
|
| | ### Training Dataset |
| |
|
| | #### Unnamed Dataset |
| |
|
| |
|
| | * Size: 916 training samples |
| | * Columns: <code>label</code>, <code>sentence2</code>, and <code>sentence1</code> |
| | * Approximate statistics based on the first 1000 samples: |
| | | | label | sentence2 | sentence1 | |
| | |:--------|:------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| |
| | | type | int | string | string | |
| | | details | <ul><li>0: ~49.56%</li><li>1: ~50.44%</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.32 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.92 tokens</li><li>max: 22 tokens</li></ul> | |
| | * Samples: |
| | | label | sentence2 | sentence1 | |
| | |:---------------|:---------------------------------------------------------------------|:-----------------------------------------------------| |
| | | <code>1</code> | <code>What are the potential side effects of this medication?</code> | <code>What are the side effects of this drug?</code> | |
| | | <code>0</code> | <code>How to fix a torn pocket?</code> | <code>How to fix a broken zipper?</code> | |
| | | <code>0</code> | <code>How to make a chocolate chip cookie dough?</code> | <code>How to bake a chocolate chip cookie?</code> | |
| | * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss) |
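The card does not name the training dataset, but the column layout above (`sentence1`, `sentence2`, integer `label`) is what `OnlineContrastiveLoss` expects. A hypothetical sketch of how such a dataset and loss could be set up (rows copied from the samples above, purely for illustration):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Pairs labeled 1 (paraphrase) or 0 (different question), mirroring the columns above
train_dataset = Dataset.from_dict({
    "sentence1": [
        "What are the side effects of this drug?",
        "How to fix a broken zipper?",
    ],
    "sentence2": [
        "What are the potential side effects of this medication?",
        "How to fix a torn pocket?",
    ],
    "label": [1, 0],
})

# OnlineContrastiveLoss only backpropagates through hard positive and hard negative pairs
train_loss = losses.OnlineContrastiveLoss(model)
```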
| |
|
| | ### Evaluation Dataset |
| |
|
| | #### Unnamed Dataset |
| |
|
| |
|
| | * Size: 229 evaluation samples |
| | * Columns: <code>label</code>, <code>sentence2</code>, and <code>sentence1</code> |
| | * Approximate statistics based on the first 1000 samples: |
| | | | label | sentence2 | sentence1 | |
| | |:--------|:------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| |
| | | type | int | string | string | |
| | | details | <ul><li>0: ~47.16%</li><li>1: ~52.84%</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.95 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.81 tokens</li><li>max: 20 tokens</li></ul> | |
| | * Samples: |
| | | label | sentence2 | sentence1 | |
| | |:---------------|:--------------------------------------------------------------|:---------------------------------------------------| |
| | | <code>0</code> | <code>What methods are used to measure a nation's GDP?</code> | <code>How is the GDP of a country measured?</code> | |
| | | <code>0</code> | <code>What is the currency of Japan?</code> | <code>What is the currency of China?</code> | |
| | | <code>1</code> | <code>Steps to cultivate tomatoes at home</code> | <code>How to grow tomatoes in a garden?</code> | |
| | * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss) |
| |
|
| | ### Training Hyperparameters |
| | #### Non-Default Hyperparameters |
| |
|
| | - `eval_strategy`: epoch |
| | - `per_device_train_batch_size`: 32 |
| | - `per_device_eval_batch_size`: 32 |
| | - `gradient_accumulation_steps`: 2 |
| | - `weight_decay`: 0.01 |
| | - `num_train_epochs`: 8 |
| | - `warmup_ratio`: 0.1 |
| | - `load_best_model_at_end`: True |
| | - `optim`: adamw_torch_fused |
| | - `batch_sampler`: no_duplicates |
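
A hedged sketch of how these non-default values map onto `SentenceTransformerTrainingArguments` and `SentenceTransformerTrainer`, continuing the hypothetical dataset/loss sketch from the "Training Dataset" section (output directory, eval split, and the added `save_strategy` are assumptions, not values taken from this card):

```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Assumes `model`, `train_dataset`, and `train_loss` from the previous sketch
eval_dataset = train_dataset  # placeholder: in practice a held-out split with the same columns

args = SentenceTransformerTrainingArguments(
    output_dir="outputs/fine_tuned_model_1",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: needed so load_best_model_at_end can restore the best epoch
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    num_train_epochs=8,
    warmup_ratio=0.1,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=train_loss,
)
trainer.train()
```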
| | |
| | #### All Hyperparameters |
| | <details><summary>Click to expand</summary> |
| | |
| | - `overwrite_output_dir`: False |
| | - `do_predict`: False |
| | - `eval_strategy`: epoch |
| | - `prediction_loss_only`: True |
| | - `per_device_train_batch_size`: 32 |
| | - `per_device_eval_batch_size`: 32 |
| | - `per_gpu_train_batch_size`: None |
| | - `per_gpu_eval_batch_size`: None |
| | - `gradient_accumulation_steps`: 2 |
| | - `eval_accumulation_steps`: None |
| | - `learning_rate`: 5e-05 |
| | - `weight_decay`: 0.01 |
| | - `adam_beta1`: 0.9 |
| | - `adam_beta2`: 0.999 |
| | - `adam_epsilon`: 1e-08 |
| | - `max_grad_norm`: 1.0 |
| | - `num_train_epochs`: 8 |
| | - `max_steps`: -1 |
| | - `lr_scheduler_type`: linear |
| | - `lr_scheduler_kwargs`: {} |
| | - `warmup_ratio`: 0.1 |
| | - `warmup_steps`: 0 |
| | - `log_level`: passive |
| | - `log_level_replica`: warning |
| | - `log_on_each_node`: True |
| | - `logging_nan_inf_filter`: True |
| | - `save_safetensors`: True |
| | - `save_on_each_node`: False |
| | - `save_only_model`: False |
| | - `restore_callback_states_from_checkpoint`: False |
| | - `no_cuda`: False |
| | - `use_cpu`: False |
| | - `use_mps_device`: False |
| | - `seed`: 42 |
| | - `data_seed`: None |
| | - `jit_mode_eval`: False |
| | - `use_ipex`: False |
| | - `bf16`: False |
| | - `fp16`: False |
| | - `fp16_opt_level`: O1 |
| | - `half_precision_backend`: auto |
| | - `bf16_full_eval`: False |
| | - `fp16_full_eval`: False |
| | - `tf32`: None |
| | - `local_rank`: 0 |
| | - `ddp_backend`: None |
| | - `tpu_num_cores`: None |
| | - `tpu_metrics_debug`: False |
| | - `debug`: [] |
| | - `dataloader_drop_last`: False |
| | - `dataloader_num_workers`: 0 |
| | - `dataloader_prefetch_factor`: None |
| | - `past_index`: -1 |
| | - `disable_tqdm`: False |
| | - `remove_unused_columns`: True |
| | - `label_names`: None |
| | - `load_best_model_at_end`: True |
| | - `ignore_data_skip`: False |
| | - `fsdp`: [] |
| | - `fsdp_min_num_params`: 0 |
| | - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
| | - `fsdp_transformer_layer_cls_to_wrap`: None |
| | - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
| | - `deepspeed`: None |
| | - `label_smoothing_factor`: 0.0 |
| | - `optim`: adamw_torch_fused |
| | - `optim_args`: None |
| | - `adafactor`: False |
| | - `group_by_length`: False |
| | - `length_column_name`: length |
| | - `ddp_find_unused_parameters`: None |
| | - `ddp_bucket_cap_mb`: None |
| | - `ddp_broadcast_buffers`: False |
| | - `dataloader_pin_memory`: True |
| | - `dataloader_persistent_workers`: False |
| | - `skip_memory_metrics`: True |
| | - `use_legacy_prediction_loop`: False |
| | - `push_to_hub`: False |
| | - `resume_from_checkpoint`: None |
| | - `hub_model_id`: None |
| | - `hub_strategy`: every_save |
| | - `hub_private_repo`: False |
| | - `hub_always_push`: False |
| | - `gradient_checkpointing`: False |
| | - `gradient_checkpointing_kwargs`: None |
| | - `include_inputs_for_metrics`: False |
| | - `eval_do_concat_batches`: True |
| | - `fp16_backend`: auto |
| | - `push_to_hub_model_id`: None |
| | - `push_to_hub_organization`: None |
| | - `mp_parameters`: |
| | - `auto_find_batch_size`: False |
| | - `full_determinism`: False |
| | - `torchdynamo`: None |
| | - `ray_scope`: last |
| | - `ddp_timeout`: 1800 |
| | - `torch_compile`: False |
| | - `torch_compile_backend`: None |
| | - `torch_compile_mode`: None |
| | - `dispatch_batches`: None |
| | - `split_batches`: None |
| | - `include_tokens_per_second`: False |
| | - `include_num_input_tokens_seen`: False |
| | - `neftune_noise_alpha`: None |
| | - `optim_target_modules`: None |
| | - `batch_eval_metrics`: False |
| | - `batch_sampler`: no_duplicates |
| | - `multi_dataset_batch_sampler`: proportional |
| |
|
| | </details> |
| |
|
| | ### Training Logs |
| | | Epoch | Step | Training Loss | loss | pair-class-dev_max_ap | pair-class-test_max_ap | |
| | |:----------:|:------:|:-------------:|:----------:|:---------------------:|:----------------------:| |
| | | 0 | 0 | - | - | 0.7130 | - | |
| | | 0.6897 | 10 | 3.0972 | - | - | - | |
| | | 1.0345 | 15 | - | 0.8033 | 0.9272 | - | |
| | | 1.3448 | 20 | 1.0451 | - | - | - | |
| | | 2.0345 | 30 | 0.5786 | 0.4910 | 0.9680 | - | |
| | | 2.6897 | 40 | 0.2996 | - | - | - | |
| | | 3.0345 | 45 | - | 0.4487 | 0.9731 | - | |
| | | 3.3448 | 50 | 0.0901 | - | - | - | |
| | | **4.0345** | **60** | **0.067** | **0.4115** | **0.9732** | **-** | |
| | | 4.6897 | 70 | 0.0729 | - | - | - | |
| | | 5.0345 | 75 | - | 0.4543 | 0.9727 | - | |
| | | 5.3448 | 80 | 0.0453 | - | - | - | |
| | | 6.0345 | 90 | 0.0637 | 0.4249 | 0.9736 | - | |
| | | 6.6897 | 100 | 0.0388 | - | - | - | |
| | | 7.0345 | 105 | - | 0.4223 | 0.9740 | - | |
| | | 7.3448 | 110 | 0.0382 | - | - | - | |
| | | 7.4828 | 112 | - | 0.4226 | 0.9740 | 0.9732 | |
| |
|
| | * The bold row denotes the saved checkpoint. |
| |
|
| | ### Framework Versions |
| | - Python: 3.10.12 |
| | - Sentence Transformers: 3.0.1 |
| | - Transformers: 4.41.2 |
| | - PyTorch: 2.1.2+cu121 |
| | - Accelerate: 0.32.1 |
| | - Datasets: 2.19.1 |
| | - Tokenizers: 0.19.1 |
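To approximate this environment, the versions above can be pinned directly; an illustrative pip command (not an official requirements file, and PyTorch with the matching CUDA build should be installed separately):

```bash
pip install "sentence-transformers==3.0.1" "transformers==4.41.2" \
            "accelerate==0.32.1" "datasets==2.19.1" "tokenizers==0.19.1"
```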
| |
|
| | ## Citation |
| |
|
| | ### BibTeX |
| |
|
| | #### Sentence Transformers |
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
| |
|
| | <!-- |
| | ## Glossary |
| |
|
| | *Clearly define terms in order to be accessible across audiences.* |
| | --> |
| |
|
| | <!-- |
| | ## Model Card Authors |
| |
|
| | *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
| | --> |
| |
|
| | <!-- |
| | ## Model Card Contact |
| |
|
| | *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
| | --> |