The approach is simple:

2. Finetune several SOTA transformers of different sizes (20M to 300M parameters) on the combined data.
3. Evaluate on challenging NLI datasets.

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base).
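Under the SentenceTransformers API, querying a cross-encoder of this kind comes down to a `CrossEncoder.predict` call on (premise, hypothesis) pairs, followed by a threshold on the returned scores. A minimal sketch, assuming a single-logit binary head; the checkpoint id in the comment is a placeholder, not necessarily the name this model is published under:

```python
# Hedged sketch of inference with a binary NLI cross-encoder trained via
# the SentenceTransformers CrossEncoder class. The checkpoint name below
# is a placeholder; substitute the id this card is published under.
#
#   from sentence_transformers import CrossEncoder
#   model = CrossEncoder("your-org/nli-binary-cross-encoder")  # placeholder id
#   scores = model.predict([
#       ("A man is eating pizza", "A man eats something"),
#       ("A man is eating pizza", "The man is sleeping"),
#   ])
#
# predict() returns one score per pair; mapping the scores to the two
# classes used in this card is then a simple threshold:

def scores_to_labels(scores, threshold=0.5):
    """Map raw classifier scores to the card's binary label set."""
    return ["entailment" if s >= threshold else "non-entailment" for s in scores]

print(scores_to_labels([0.98, 0.03]))  # → ['entailment', 'non-entailment']
```

The 0.5 threshold is an assumption for a sigmoid-activated single-logit head; tune it on a validation split if precision/recall trade-offs matter for your use case.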

### Data

20+ NLI datasets were combined to train a binary classification model. The `contradiction` and `neutral` labels were merged into a single `non-entailment` class.
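The label merge described above can be sketched as a simple mapping applied to each example; the field names here are illustrative, not taken from any particular dataset in the mix:

```python
# Hedged sketch of collapsing the standard three-way NLI label set into
# the binary scheme used for training. Field names ("premise",
# "hypothesis", "label") are illustrative assumptions.

LABEL_MAP = {
    "entailment": "entailment",
    "contradiction": "non-entailment",
    "neutral": "non-entailment",
}

def to_binary(example):
    """Rewrite a three-class NLI example into the binary scheme."""
    return {**example, "label": LABEL_MAP[example["label"]]}

sample = {
    "premise": "A man is eating pizza",
    "hypothesis": "The man is sleeping",
    "label": "contradiction",
}
print(to_binary(sample)["label"])  # → non-entailment
```

With Hugging Face `datasets`, the same mapping would typically be applied via `dataset.map(to_binary)` before training.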