## Model variations

There are three versions of the model released. The details are:

| Model | Backbone | #params | Lang | Acc | Speed | #Train |
|-------|----------|---------|------|-----|-------|--------|
| [zero-shot-classify-SSTuning-base](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M | En | Low | High | 20.48M |
| [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | En | Medium | Medium | 5.12M |