lukecq committed on
Commit cbb8b05 · 1 Parent(s): a3e920f

Update README.md

Files changed (1)
  1. README.md +7 -6
README.md CHANGED
@@ -109,7 +109,7 @@ It was introduced in the paper [Zero-Shot Text Classification via Self-Supervise
  Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing
  and first released in [this repository](https://github.com/DAMO-NLP-SG/SSTuning).
 
- The model backbone is RoBERTa-base.
+ The model backbone is xlm-roberta-base.
 
  ## Model description
 
@@ -127,14 +127,15 @@ A cross-entropy loss is used for tuning the model.
  ## Model variations
  There are three versions of models released. The details are:
 
- | Model | Backbone | #params | accuracy | Speed | #Training data
+ | Model | Backbone | #params | language | accuracy | Speed | #Training data
  |------------|-----------|----------|-------|-------|----|
- | [zero-shot-classify-SSTuning-base](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M | Low | High | 20.48M |
- | [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | Medium | Medium | 5.12M |
- | [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | High | Low | 5.12M |
- | [zero-shot-classify-SSTuning-XLM-R](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R) | [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) | 278M | - | - | 20.48M |
+ | [zero-shot-classify-SSTuning-base](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-base) | [roberta-base](https://huggingface.co/roberta-base) | 125M | En | Low | High | 20.48M |
+ | [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | En | Medium | Medium | 5.12M |
+ | [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | En | High | Low | 5.12M |
+ | [zero-shot-classify-SSTuning-XLM-R](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R) | [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) | 278M | En | - | - | 20.48M |
 
  Please note that zero-shot-classify-SSTuning-XLM-R is trained with 20.48M English samples only. However, it can also be used in other languages as long as XLM-R supports.
+ Please check [this repository](https://github.com/DAMO-NLP-SG/SSTuning) for the performance of each model.
 
  ## Intended uses & limitations
  The model can be used for zero-shot text classification such as sentiment analysis and topic classification. No further finetuning is needed.
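
The "Intended uses" section above states that these checkpoints do zero-shot classification with no further finetuning. As a rough illustration, the sketch below loads the XLM-R variant with the standard `transformers` API; the lettered-options prompt format and the slicing of the classification logits to the provided labels are assumptions based on the SSTuning description, not the repository's exact inference code — see https://github.com/DAMO-NLP-SG/SSTuning for the reference implementation.

```python
# Minimal sketch of zero-shot inference with an SSTuning checkpoint.
# Assumption: labels are presented as lettered options before the text,
# and the first len(labels) logits of the option head are the scores.
import string
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

def classify(text, labels):
    # Present candidate labels as "(A) label (B) label ...", then the text.
    options = " ".join(f"({string.ascii_uppercase[i]}) {lab}" for i, lab in enumerate(labels))
    inputs = tokenizer(f"{options} {tokenizer.sep_token} {text}",
                       return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Keep only the option positions that correspond to the provided labels.
    probs = logits[0, : len(labels)].softmax(dim=-1)
    return labels[int(probs.argmax())], probs.tolist()

print(classify("I love this place! The food is always so fresh and delicious.",
               ["negative", "positive"]))
```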