# Model Card for Model ID

deberta-v3-base with a context length of 1,280, fine-tuned on tasksource for 250k steps. I oversampled long NLI tasks (ConTRoL, doc-nli).

Training data includes helpsteer v1/v2, logical reasoning tasks (FOLIO, FOL-nli, LogicNLI...), OASST, hh/rlhf, linguistics-oriented NLI tasks, tasksource-dpo, and fact verification tasks.

This model is suitable for long-context NLI, or as a backbone for fine-tuning reward models or classifiers.
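For plain NLI inference, the checkpoint can be loaded as a sequence classifier. A minimal sketch — the Hub ID below is a placeholder assumption, and the label names come from whatever `id2label` mapping the checkpoint's config actually ships:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder Hub ID: substitute this checkpoint's actual model ID.
name = "tasksource/deberta-base-long-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "All the reports were reviewed before the meeting."
hypothesis = "Some reports were reviewed."
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The predicted class name is read from the checkpoint's own config.
pred = model.config.id2label[logits.argmax(dim=-1).item()]
print(pred)
```

Long inputs benefit from the extended 1,280-token context; `truncation=True` only matters for texts beyond that limit.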

This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for:
- Zero-shot entailment-based classification for arbitrary labels [ZS].
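Entailment-based zero-shot classification can be run with the transformers `zero-shot-classification` pipeline, which poses each candidate label as a hypothesis and ranks labels by entailment probability. A sketch, again assuming a placeholder Hub ID:

```python
from transformers import pipeline

# Placeholder Hub ID: substitute this checkpoint's actual model ID.
classifier = pipeline(
    "zero-shot-classification",
    model="tasksource/deberta-base-long-nli",
)

result = classifier(
    "The new GPU doubles training throughput on long sequences.",
    candidate_labels=["hardware", "cooking", "politics"],
)
print(result["labels"][0])  # highest-scoring label
```

The labels are arbitrary strings chosen at inference time; no fine-tuning on them is required.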