---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:102836
- loss:CrossEntropyLoss
base_model: cross-encoder/nli-deberta-v3-base
datasets:
- software-si/horeca-nli
pipeline_tag: text-classification
library_name: sentence-transformers
license: apache-2.0
language:
- en
---
# CrossEncoder based on cross-encoder/nli-deberta-v3-base
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/nli-deberta-v3-base](https://huggingface.co/cross-encoder/nli-deberta-v3-base) on the [horeca-nli](https://huggingface.co/datasets/software-si/horeca-nli) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text pair classification.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/nli-deberta-v3-base](https://huggingface.co/cross-encoder/nli-deberta-v3-base)
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 3 labels
- **Training Dataset:**
- [horeca-nli](https://huggingface.co/datasets/software-si/horeca-nli)
## 🧾 Input / Output
This is a model for Natural Language Inference (NLI). It takes a premise and a hypothesis as input and returns a classification of the relationship between the two sentences.
Possible outputs are: `contradiction`, `entailment`, `neutral`
**Example:**
- premises:
`kitchen eighty centimeters wide, deep 70 cm placed on closed compartment`
- hypothesis:
`the kitchen is placed on open shelf`
- Output:
`contradiction`
---
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("software-si/kitchen-nli")
# Get scores for pairs of texts
pairs = [
['cooking unit with square plates on compartment with doors', 'the depth of the kitchen is 70 centimeters'],
['cooking unit with 2 electric plates, on compartment with doors', 'the kitchen is placed on top'],
['kitchen module in top version deep 70 cm eighty centimeters wide,', 'the kitchen is placed on cabinet'],
['cooking unit wide 80 cm, with a depth of 90 centimeters, placed on closed compartment', 'the kitchen has a width of 40 cm'],
['kitchen with gas cooking, with gas oven, one hundred twenty centimeters wide,', 'the layout of the kitchen is top'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5, 3)

# Map the highest-scoring class of each pair to its label name
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[i] for i in scores.argmax(axis=-1)]
print(labels)
```
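For multi-class cross encoders, `predict` returns one raw score per class. A softmax turns a row of scores into class probabilities. The sketch below uses illustrative logit values (not actual model output) to show the mapping:

```python
import numpy as np

label_mapping = ['contradiction', 'entailment', 'neutral']

# Illustrative logits for one premise/hypothesis pair (not real model output)
logits = np.array([3.1, -1.2, 0.4])

# Softmax converts logits to probabilities that sum to 1
probs = np.exp(logits - logits.max())
probs /= probs.sum()

predicted = label_mapping[int(np.argmax(probs))]
print(predicted)  # highest logit wins: 'contradiction'
print(probs.round(3))
```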
## Training Details
### Training Dataset
#### horeca-nli
* Dataset: [horeca-nli](https://huggingface.co/datasets/software-si/horeca-nli) at [a6bd6a4](https://huggingface.co/datasets/software-si/horeca-nli/tree/a6bd6a4e3cfa88c4081a4a0ff814f92d00dcf463)
* Size: 102,836 training samples
* Columns: premises, hypothesis, and labels
* Approximate statistics based on the first 1000 samples:
  | | premises | hypothesis | labels |
  |:--------|:---------|:-----------|:-------|
  | type    | string   | string     | int    |
* Samples:
  | premises | hypothesis | labels |
  |:---------|:-----------|:-------|
  | `kitchen eighty centimeters wide, deep 70 cm placed on closed compartment` | `the kitchen is forty centimeters wide` | `0` |
  | `cooking unit placed on cabinet deep 90 cm, gas supply,` | `the kitchen is placed on open shelf` | `2` |
  | `cooking unit wide 40 cm, powered by electricity with the square plates` | `the kitchen measures one hundred twenty centimeters in width` | `0` |
* Loss: [CrossEntropyLoss](https://sbert.net/docs/package_reference/cross_encoder/losses.html#crossentropyloss)
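The integer labels in the samples above correspond, by position, to the label names used in the usage example (`0` = contradiction, `1` = entailment, `2` = neutral). A small helper mapping, shown for illustration, converts between ids and names:

```python
# Label ids as used by this model and dataset (same order as label_mapping)
ID2LABEL = {0: 'contradiction', 1: 'entailment', 2: 'neutral'}
LABEL2ID = {name: i for i, name in ID2LABEL.items()}

print(ID2LABEL[0])          # 'contradiction'
print(LABEL2ID['neutral'])  # 2
```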
### Evaluation Dataset
#### horeca-nli
* Dataset: [horeca-nli](https://huggingface.co/datasets/software-si/horeca-nli) at [a6bd6a4](https://huggingface.co/datasets/software-si/horeca-nli/tree/a6bd6a4e3cfa88c4081a4a0ff814f92d00dcf463)
* Size: 30,851 evaluation samples
* Columns: premises, hypothesis, and labels
* Approximate statistics based on the first 1000 samples:
  | | premises | hypothesis | labels |
  |:--------|:---------|:-----------|:-------|
  | type    | string   | string     | int    |
* Samples:
  | premises | hypothesis | labels |
  |:---------|:-----------|:-------|
  | `cooking unit with square plates on compartment with doors` | `the depth of the kitchen is 70 centimeters` | `2` |
  | `cooking unit with 2 electric plates, on compartment with doors` | `the kitchen is placed on top` | `2` |
  | `kitchen module in top version deep 70 cm eighty centimeters wide,` | `the kitchen is placed on cabinet` | `0` |
* Loss: [CrossEntropyLoss](https://sbert.net/docs/package_reference/cross_encoder/losses.html#crossentropyloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 1e-05
- `num_train_epochs`: 1
- `warmup_steps`: 10283
- `bf16`: True
- `load_best_model_at_end`: True
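As a sketch, the non-default values above could be passed to the sentence-transformers v4 trainer API like this (the `output_dir` is illustrative, and dataset/loss wiring is omitted):

```python
from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments

# Mirror of the non-default hyperparameters listed above
args = CrossEncoderTrainingArguments(
    output_dir="kitchen-nli-checkpoints",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=1e-5,
    num_train_epochs=1,
    warmup_steps=10283,
    bf16=True,
    load_best_model_at_end=True,
)
```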
#### All Hyperparameters