---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:20000
- loss:CosineSimilarityLoss
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
widget:
- source_sentence: 'Question: Is this describing a (1) directly correlative relationship,
    (2) conditionally causative relationship, (3) causative relationship, or (0) no
    relationship.'
  sentences:
  - 'C: Iron deficiency anemia in the mother; normal Hb levels in the fetus'
  - This is a conditionally causative relationship
  - 'C: Decreasing carbohydrate intake, increasing fat intake'
- source_sentence: Please summerize the given abstract to a title
  sentences:
  - 'BatteryLab: A Collaborative Platform for Power Monitoring'
  - hi ! good evening. i am chatbot answering your query. from the history, it seems
    that you might have sustained some kind of trivial trauma while cutting woods
    resulting in oozing of blood in the tissue forming a collection of blood (hematoma).
    usually, small collections of blood get absorbed of their own. however, this may
    not happen in cases where the blood clotting is hampered by the intake of blood
    thinners as is in your case and the same might also get infected causing more
    pain due to an abscess. if i were your doctor, i would consult your physician
    who started your blood thinning agent for consideration of discontinuing these
    medicines for some time till it heals up. if it does not even then, i would refer
    you to a general surgeon for a clinical examination and further management. i
    hope this information would help you in discussing with your family physician/treating
    doctor in further management of your problem. please do not hesitate to ask in
    case of any further doubts. thanks for choosing chatbot to clear doubts on your
    health problems. wishing you an early recovery. chatbot. if i were your doctor,
  - Effects of the psychoactive compounds in green tea on risky decision-making.
- source_sentence: Answer this question truthfully
  sentences:
  - Laparoscopic stomach-partitioning gastrojejunostomy with reduced-port techniques
    for unresectable distal gastric cancer.
  - hi, thanks for posting the query, i would suggest you to get an x-ray of the tooth
    piece left in the socket, according to your clinical symptoms i suppose that you
    might have developed an infection in the region which is radiating in the nearby
    tooth region giving you such feeling, also take course of antibiotics and analgesics,
    maintain a good oral hygiene, take lukewarm saline and antiseptic mouthwash rinses,
    take an appointment with oral surgeon and get the piece removed. hope you find
    this as helpful, take care!
  - If you feel you are developing symptoms suggestive of Pneumocystis pneumonia contact
    your health professional.
- source_sentence: If you are a doctor, please answer the medical questions based
    on the patient's description.
  sentences:
  - Hazard control for communicable disease transport at Ornge
  - hello and thank you for asking chatbot, i understand your concern. you are probably
    experiencing low blood pressure when you stand up, called orthostatic hypotension.
    as a result, not enough blood reaches your brain, and you feel lightheaded or
    dizzy. here are some advices
  - hi, thank you for posting your query. i have noted your symptoms. these are suggestive
    of sciatica, or nerve compression in the lower back region due to slipped disc
    in that location. disc prolapse leads to compression of the nerves, resulting
    in low back pain, leg pain and tingling. symptoms may increase on walking. the
    diagnosis can be confirmed by doing mri scan of the lumbosacral spine. good medical
    treatments are available for this condition. i hope my answer helps. please get
    back if you have any follow-up queries or if you require any additional information.
    wishing you good health, chatbot. ly/
- source_sentence: Please summerize the given abstract to a title
  sentences:
  - Gastric mucormycosis with splenic invasion a rare abdominal complication of COVID-19
    pneumonia
  - 'Russian-Language Mobile Apps for Reducing Alcohol Use: Systematic Search and
    Evaluation'
  - Peacekeeping after Covid-19
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 86741b4e3f5cb7765a600d3a3d55a0f6a6cb443d -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
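
In other words, token embeddings are mean-pooled into a single 384-dimensional sentence vector, with inputs truncated at 128 tokens. The sketch below confirms these settings on a loaded model; `sentence_transformers_model_id` is the same placeholder used in the Usage section, and `max_seq_length` / `get_sentence_embedding_dimension()` are standard SentenceTransformer attributes.

```python
from sentence_transformers import SentenceTransformer

# "sentence_transformers_model_id" is a placeholder for the model id on the Hub.
model = SentenceTransformer("sentence_transformers_model_id")

print(model)                                     # prints the module list shown above
print(model.max_seq_length)                      # 128
print(model.get_sentence_embedding_dimension())  # 384
```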

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'Please summerize the given abstract to a title',
    'Peacekeeping after Covid-19',
    'Russian-Language Mobile Apps for Reducing Alcohol Use: Systematic Search and Evaluation',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
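
Beyond pairwise similarity, the same embeddings support semantic search over a corpus. A minimal sketch, assuming the same `sentence_transformers_model_id` placeholder and reusing titles from the widget examples as a toy corpus (`util.semantic_search` is the standard Sentence Transformers retrieval helper):

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder model id, as in the example above.
model = SentenceTransformer("sentence_transformers_model_id")

# Toy corpus built from the widget examples; replace with your own documents.
corpus = [
    "Peacekeeping after Covid-19",
    "Russian-Language Mobile Apps for Reducing Alcohol Use: Systematic Search and Evaluation",
    "BatteryLab: A Collaborative Platform for Power Monitoring",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("Please summerize the given abstract to a title", convert_to_tensor=True)

# Retrieve the top-2 most similar corpus entries by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])
```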

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 20,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0                                                                         | sentence_1                                                                          | label                                                          |
  |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------|
  | type    | string                                                                             | string                                                                              | float                                                          |
  | details | <ul><li>min: 7 tokens</li><li>mean: 15.87 tokens</li><li>max: 81 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 77.94 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence_0                                                   | sentence_1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | label            |
  |:-------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
  | <code>Please summerize the given abstract to a title</code> | <code>Impact of National Containment Measures on Decelerating the Increase in Daily New Cases of COVID-19 in 54 Countries and 4 Epicenters of the Pandemic: Comparative Observational Study</code>                                                                                                                                                                                                                                                                                                                                                                                                                           | <code>1.0</code> |
  | <code>Answer this question truthfully</code>                 | <code>Intracranial hypertension is defined as ICP greater than 20 mmHg. This condition occurs when there is increased pressure inside the skull, which can cause a range of symptoms and potentially lead to serious complications such as brain damage or herniation. Intracranial hypertension can be caused by a variety of factors, including head injury, brain tumors, infections, and certain medications. Treatment options may include medications to reduce pressure, surgery to relieve pressure or address underlying causes, or other supportive measures to manage symptoms and prevent complications.</code> | <code>1.0</code> |
  | <code>Answer this question truthfully</code>                 | <code>The bone marrow is a rapidly proliferating population of cells that produces blood cells, including white blood cells, red blood cells, and platelets. 6-mercaptopurine and azathioprine are medications that are commonly used to treat autoimmune diseases and some types of cancer. However, because these drugs interfere with the production of new cells, they can also cause myelosuppression, which is a condition in which the bone marrow produces fewer blood cells than normal. This can lead to a variety of symptoms, including fatigue, weakness, and an increased risk of infection.</code>            | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
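
Concretely, CosineSimilarityLoss computes the cosine similarity between the two sentence embeddings and compares it to the float label with MSELoss, so a label of 1.0 pushes a pair's embeddings together. A rough sketch of that computation outside the trainer, assuming the placeholder model id from the Usage section and an illustrative pair:

```python
import torch
from sentence_transformers import SentenceTransformer

# Placeholder model id; the pair below is illustrative only.
model = SentenceTransformer("sentence_transformers_model_id")

sentence_0 = "Please summerize the given abstract to a title"
sentence_1 = "Peacekeeping after Covid-19"
label = torch.tensor(1.0)

emb = model.encode([sentence_0, sentence_1], convert_to_tensor=True)
cos_sim = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)

# CosineSimilarityLoss with MSELoss penalizes the gap between the cosine score and the label
loss = torch.nn.functional.mse_loss(cos_sim, label)
print(cos_sim.item(), loss.item())
```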

### Training Hyperparameters
#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
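
These settings map onto the Sentence Transformers trainer API roughly as sketched below. This is a hedged reconstruction, not the exact training script: the toy dataset and `output_dir` are placeholders for the real data pipeline, and `multi_dataset_batch_sampler` is omitted since it only matters when training on several datasets at once.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Placeholder dataset with the column layout described above
# (sentence_0, sentence_1, float label); the real run used 20,000 such pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Please summerize the given abstract to a title",
                   "Answer this question truthfully"],
    "sentence_1": ["Peacekeeping after Covid-19",
                   "Intracranial hypertension is defined as ICP greater than 20 mmHg."],
    "label": [1.0, 1.0],
})

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
loss = losses.CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```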

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:----:|:-------------:|
| 0.4   | 500  | 0.4093        |
| 0.8   | 1000 | 0.0074        |

### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->