Updated model description
README.md CHANGED
@@ -27,11 +27,11 @@ A fine-tuned RoBERTa model designed for a Natural Language Inference (NLI) task
 
 This model builds upon the roberta-base architecture, adding a multi-layer classification head for NLI. It computes average pooled representations of premise and hypothesis tokens (identified via `token_type_ids`) and concatenates them before passing through additional linear and non-linear layers. The final output is used to classify the pair of sentences into one of three classes.
 
-- **Developed by:**
-- **Language(s):**
-- **Model type:**
-- **Model architecture:**
-- **Finetuned from model:**
+- **Developed by:** Dev Soneji and Patrick Mermelstein Lyons
+- **Language(s):** English
+- **Model type:** Supervised
+- **Model architecture:** RoBERTa encoder with a multi-layer classification head
+- **Finetuned from model:** roberta-base
 
 ### Model Resources
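The head described in the diff above (average-pool premise and hypothesis tokens by `token_type_ids`, concatenate, then pass through linear and non-linear layers to three classes) could be sketched roughly as follows. This is an illustrative PyTorch sketch, not the authors' code: the class name, hidden size, and choice of `Tanh` non-linearity are assumptions.

```python
import torch
import torch.nn as nn


class NLIClassificationHead(nn.Module):
    """Hypothetical sketch of the multi-layer NLI head described in the
    model card. Sizes and activation are assumptions, not the real config."""

    def __init__(self, hidden_size: int = 768, num_classes: int = 3):
        super().__init__()
        # Premise and hypothesis pooled vectors are concatenated -> 2 * hidden_size
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.Tanh(),  # assumed non-linearity between the linear layers
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, token_embeddings, token_type_ids, attention_mask):
        # Masked average pool over a boolean token mask.
        def avg_pool(mask):
            mask = mask.unsqueeze(-1).float()
            return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

        valid = attention_mask.bool()
        premise = avg_pool((token_type_ids == 0) & valid)      # segment 0 tokens
        hypothesis = avg_pool((token_type_ids == 1) & valid)   # segment 1 tokens
        return self.classifier(torch.cat([premise, hypothesis], dim=-1))
```

Given encoder outputs of shape `(batch, seq_len, hidden_size)`, the head returns `(batch, 3)` logits for the three NLI classes (typically entailment, neutral, contradiction).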