---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:3000
- loss:BatchAllTripletLoss
base_model: microsoft/mpnet-base
widget:
- source_sentence: what am i supposed to do if i lost my luggage
sentences:
- do i need a visa if i go there
- why did you freeze my bank account
- tell my bank that i'm travelling to france in 2 days
- source_sentence: can you suggest some of the most popular travel destination
sentences:
- what is the total of my repair bill
- could you tell me my bill's minimum payment
- can you get me a car rental for march 1st to 3rd in seattle, and i'd like a sedan
if possible
- source_sentence: is there a minimum amount accepted
sentences:
- am i going to need a visa for traveling to canada
- submit payment to duke energy for my electric bill
- let me know chase's routing number
- source_sentence: my account appears to be blocked and i don't know why
sentences:
- how do you say hello in japanese
- how much is due on the gas bill
- how much was my last transaction for
- source_sentence: are there any travel alerts for juarez
sentences:
- i am now out of checks, how do i order new ones
- lowest amount for cable bill
- how much interest do i get on my citizen's savings account
datasets:
- contemmcm/clinc150
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on microsoft/mpnet-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [clinc150](https://huggingface.co/datasets/contemmcm/clinc150) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [clinc150](https://huggingface.co/datasets/contemmcm/clinc150)
- **Language:** en
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
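The Pooling module above uses mean pooling: token embeddings from the Transformer are averaged per sentence, with padding positions excluded via the attention mask. As a rough illustration of what mean pooling computes (a minimal sketch in plain PyTorch, not the library's internal code):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings per sentence, ignoring padding positions.

    token_embeddings: (batch, seq_len, dim); attention_mask: (batch, seq_len).
    """
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens
    return summed / counts

# Toy example: batch of 2 sentences, 3 tokens each, embedding dim 4
emb = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)
mask = torch.tensor([[1, 1, 0], [1, 1, 1]])  # first sentence has one padding token
pooled = mean_pool(emb, mask)
print(pooled.shape)  # torch.Size([2, 4])
```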
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("bnoland/mpnet-base-clinc-subset")
# Run inference
sentences = [
    'are there any travel alerts for juarez',
    "how much interest do i get on my citizen's savings account",
    'lowest amount for cable bill',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7056, 0.6717],
# [0.7056, 1.0000, 0.7377],
# [0.6717, 0.7377, 1.0000]])
```
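The scores returned by `model.similarity` above are cosine similarities (the model's configured similarity function). A minimal sketch of the equivalent computation on toy stand-in embeddings (plain PyTorch; the 3-dimensional vectors here are illustrative, not real model outputs):

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the 768-dimensional sentence embeddings
embeddings = torch.tensor([
    [1.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
])

# Cosine similarity: L2-normalize each row, then take pairwise dot products
normalized = F.normalize(embeddings, p=2, dim=1)
similarities = normalized @ normalized.T
print(similarities)
# The diagonal is 1.0: each embedding compared with itself
```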
## Training Details
### Training Dataset
#### clinc150
* Dataset: [clinc150](https://huggingface.co/datasets/contemmcm/clinc150) at [2bbb9af](https://huggingface.co/datasets/contemmcm/clinc150/tree/2bbb9afebdafb9b9f6719250310bfcf3b1e8f666)
* Size: 3,000 training samples
* Columns: <code>text</code> and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | text   | label |
  |:--------|:-------|:------|
  | type    | string | int   |
* Samples:
  | text                                                                  | label           |
  |:----------------------------------------------------------------------|:----------------|
  | <code>is there enough money in my bank of hawaii for vacation</code>  | <code>12</code> |
  | <code>i need to let my bank know i am visiting asia soon</code>       | <code>77</code> |
  | <code>what's bank of america's routing number</code>                  | <code>2</code>  |
* Loss: [BatchAllTripletLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchalltripletloss)
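BatchAllTripletLoss forms, within each batch, every (anchor, positive, negative) triplet implied by the labels and penalizes triplets where the anchor–positive distance is not at least a margin smaller than the anchor–negative distance. A rough sketch of the idea on toy embeddings (plain PyTorch; the margin value and distance function here are illustrative, not the library's exact implementation):

```python
import torch

def batch_all_triplet_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                           margin: float = 0.5) -> torch.Tensor:
    """Average hinge loss over all valid (anchor, positive, negative) triplets."""
    dist = torch.cdist(embeddings, embeddings, p=2)        # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # (i, j) share a label?
    losses = []
    n = embeddings.size(0)
    for a in range(n):
        for p in range(n):
            if a == p or not same[a, p]:
                continue  # positive must be a different item with the anchor's label
            for neg in range(n):
                if same[a, neg]:
                    continue  # negative must carry a different label
                losses.append(torch.relu(dist[a, p] - dist[a, neg] + margin))
    if not losses:
        return torch.tensor(0.0)
    return torch.stack(losses).mean()

# Two nearby same-label points and one far-away point with a different label
emb = torch.tensor([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
labels = torch.tensor([0, 0, 1])
loss = batch_all_triplet_loss(emb, labels)
print(loss)  # easy, well-separated triplets give zero loss here
```

Because the loss only sees triplets inside a batch, training uses the `group_by_label` batch sampler (listed under the hyperparameters below) so that each batch contains multiple examples per label.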
### Evaluation Dataset
#### clinc150
* Dataset: [clinc150](https://huggingface.co/datasets/contemmcm/clinc150) at [2bbb9af](https://huggingface.co/datasets/contemmcm/clinc150/tree/2bbb9afebdafb9b9f6719250310bfcf3b1e8f666)
* Size: 600 evaluation samples
* Columns: <code>text</code> and <code>label</code>
* Approximate statistics based on the first 600 samples:
  |         | text   | label |
  |:--------|:-------|:------|
  | type    | string | int   |
* Samples:
  | text                                                              | label           |
  |:------------------------------------------------------------------|:----------------|
  | <code>was my last transaction at walmart</code>                   | <code>14</code> |
  | <code>what interest rate is us bank giving me on my acount</code> | <code>7</code>  |
  | <code>look up carry-on rules for american airlines</code>         | <code>89</code> |
* Loss: [BatchAllTripletLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchalltripletloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_steps`: 10
- `fp16`: True
- `batch_sampler`: group_by_label
#### All Hyperparameters