---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:944
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: A hash function $h$ is collision-resistant if\dots
sentences:
- 6471::[9216:9728]
- 13251::[1536:2048]
- 5817::[2688:3200]
- source_sentence: "Which statement about \\textit{black-box} adversarial attacks is\
\ true:"
sentences:
- 10047::[7680:8192]
- 13287::[384:896]
- 7076::[5376:5888]
- source_sentence: What is the content of the inode?
sentences:
- 8467::[0:512]
- 12744::[3840:4352]
- 12512::[14592:15104]
- source_sentence: (Backpropagation) Training via the backpropagation algorithm always
learns a globally optimal neural network if there is only one hidden layer and
we run an infinite number of iterations and decrease the step size appropriately
over time.
sentences:
- 12744::[3456:3968]
- 12583::[38784:39296]
- 3455::[4608:5120]
- source_sentence: Which of the following statements about testing is/are correct?
sentences:
- 12555::[6912:7424]
- 13136::[1536:2048]
- 12842::[0:512]
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
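For intuition, the same embedding can be reproduced with raw 🤗 Transformers modules: CLS-token pooling over the XLM-RoBERTa outputs followed by L2 normalization, mirroring the module list above. A minimal sketch, assuming the repository exposes the underlying transformer weights in the standard sentence-transformers layout (otherwise substitute the base model `BAAI/bge-m3`):
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "azizdh00/MNLP_M3_document_encoder"  # assumption: standard repo layout
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
model.eval()

inputs = tokenizer(
    ["What is the content of the inode?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state
cls = token_embeddings[:, 0]                            # (1): CLS-token pooling
embedding = torch.nn.functional.normalize(cls, dim=1)   # (2): Normalize()
print(embedding.shape)  # torch.Size([1, 1024])
```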
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("azizdh00/MNLP_M3_document_encoder")
# Run inference
sentences = [
'Which of the following statements about testing is/are correct?',
'12555::[6912:7424]',
'12842::[0:512]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
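For retrieval-style use, the same similarity scores can rank candidate document chunks against a query. A short sketch reusing `model` from above; the query and candidate identifiers are taken from the widget examples in this card, and the variable names are illustrative:
```python
# Rank candidate document chunks for a single query; the candidate strings
# follow the dataset's "doc_id::[start:end]" chunk-identifier format.
query = "What is the content of the inode?"
candidates = ["8467::[0:512]", "12744::[3840:4352]", "12512::[14592:15104]"]

query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)
scores = model.similarity(query_embedding, candidate_embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(candidates[best], scores[0, best].item())
```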
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 944 training samples
* Columns: sentence_0 and sentence_1
* Approximate statistics based on the first 944 samples:
  |      | sentence_0 | sentence_1 |
  |:-----|:-----------|:-----------|
  | type | string     | string     |
* Samples:
  | sentence_0                                                                           | sentence_1                     |
  |:--------------------------------------------------------------------------------------|:-------------------------------|
  | <code>Which of the following statements is correct?</code>                            | <code>7105::[768:1280]</code>  |
  | <code>What is WRONG regarding the Transformer model?</code>                           | <code>7490::[1152:1664]</code> |
  | <code>Which of the following attack vectors apply to mobile Android systems?</code>   | <code>1284::[9216:9728]</code> |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
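For reference, a minimal sketch of constructing this loss with the sentence-transformers API, using the parameters listed above (the base model here stands in for the actual training setup; in-batch negatives come from the other pairs in each batch):
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-m3")
loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)
```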
### Training Hyperparameters
#### Non-Default Hyperparameters
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters