| index | modelId | label | readme |
|---|---|---|---|
984 | ethanyt/guwen-sent | [
"Neg",
"ImpNeg",
"Nerual",
"ImpPos",
"Pos"
] | ---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
- "sentiment classificatio"
license: "apache-2.0"
pipeline_tag: "text-classification"
widget:
- text: "滚滚长江东逝水,浪花淘尽英雄"
- text: "寻寻觅觅,冷冷清清,凄凄惨惨戚戚"
- text: "执手相看泪眼,竟无语凝噎,念去去,千里烟波,暮霭沉沉楚天阔。"
- text: "忽如一夜春风来,干树万树梨花开"
---
# Guwen Sent
A Classical Chinese Poem Sentiment Classifier.
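## Usage
A minimal usage sketch with the `transformers` text-classification pipeline (the input sentence is taken from the widget examples above; outputs use the `Neg`/`ImpNeg`/`Nerual`/`ImpPos`/`Pos` labels listed for this model):
```python
from transformers import pipeline

# Load the classifier directly from the Hub
classifier = pipeline("text-classification", model="ethanyt/guwen-sent")

# Classify a Classical Chinese verse (one of the widget examples)
print(classifier("滚滚长江东逝水,浪花淘尽英雄"))
```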
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a> |
985 | evandrodiniz/autonlp-api-boamente-417310788 | [
"negative",
"positive"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- evandrodiniz/autonlp-data-api-boamente
co2_eq_emissions: 6.826886567147602
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 417310788
- CO2 Emissions (in grams): 6.826886567147602
## Validation Metrics
- Loss: 0.20949310064315796
- Accuracy: 0.9578392621870883
- Precision: 0.9476190476190476
- Recall: 0.9045454545454545
- AUC: 0.9714032720526227
- F1: 0.9255813953488372
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/evandrodiniz/autonlp-api-boamente-417310788
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("evandrodiniz/autonlp-api-boamente-417310788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("evandrodiniz/autonlp-api-boamente-417310788", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
986 | evandrodiniz/autonlp-api-boamente-417310793 | [
"negative",
"positive"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- evandrodiniz/autonlp-data-api-boamente
co2_eq_emissions: 9.446754273734577
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 417310793
- CO2 Emissions (in grams): 9.446754273734577
## Validation Metrics
- Loss: 0.25755178928375244
- Accuracy: 0.9407114624505929
- Precision: 0.8600823045267489
- Recall: 0.95
- AUC: 0.9732501264968797
- F1: 0.9028077753779697
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/evandrodiniz/autonlp-api-boamente-417310793
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("evandrodiniz/autonlp-api-boamente-417310793", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("evandrodiniz/autonlp-api-boamente-417310793", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
988 | fabriceyhc/bert-base-uncased-ag_news | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- ag_news
metrics:
- accuracy
model-index:
- name: bert-base-uncased-ag_news
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ag_news
type: ag_news
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ag_news
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3284
- Accuracy: 0.9375
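## How to use
A minimal inference sketch with the `transformers` text-classification pipeline (the example headline is illustrative; the raw outputs use the generic `LABEL_0`..`LABEL_3` ids listed for this model, assumed to follow the ag_news class order World/Sports/Business/Sci-Tech):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-ag_news")

# Illustrative headline; the returned label is one of LABEL_0..LABEL_3
print(classifier("Wall St. closes higher as tech shares rebound."))
```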
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 7425
- training_steps: 74250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5773 | 0.13 | 2000 | 0.3627 | 0.8875 |
| 0.3101 | 0.27 | 4000 | 0.2938 | 0.9208 |
| 0.3076 | 0.4 | 6000 | 0.3114 | 0.9092 |
| 0.3114 | 0.54 | 8000 | 0.4545 | 0.9008 |
| 0.3154 | 0.67 | 10000 | 0.3875 | 0.9083 |
| 0.3095 | 0.81 | 12000 | 0.3390 | 0.9142 |
| 0.2948 | 0.94 | 14000 | 0.3341 | 0.9133 |
| 0.2557 | 1.08 | 16000 | 0.4573 | 0.9092 |
| 0.258 | 1.21 | 18000 | 0.3356 | 0.9217 |
| 0.2455 | 1.35 | 20000 | 0.3348 | 0.9283 |
| 0.2361 | 1.48 | 22000 | 0.3218 | 0.93 |
| 0.254 | 1.62 | 24000 | 0.3814 | 0.9033 |
| 0.2528 | 1.75 | 26000 | 0.3628 | 0.9158 |
| 0.2282 | 1.89 | 28000 | 0.3302 | 0.9308 |
| 0.224 | 2.02 | 30000 | 0.3967 | 0.9225 |
| 0.174 | 2.15 | 32000 | 0.3669 | 0.9333 |
| 0.1848 | 2.29 | 34000 | 0.3435 | 0.9283 |
| 0.19 | 2.42 | 36000 | 0.3552 | 0.93 |
| 0.1865 | 2.56 | 38000 | 0.3996 | 0.9258 |
| 0.1877 | 2.69 | 40000 | 0.3749 | 0.9258 |
| 0.1951 | 2.83 | 42000 | 0.3963 | 0.9258 |
| 0.1702 | 2.96 | 44000 | 0.3655 | 0.9317 |
| 0.1488 | 3.1 | 46000 | 0.3942 | 0.9292 |
| 0.1231 | 3.23 | 48000 | 0.3998 | 0.9267 |
| 0.1319 | 3.37 | 50000 | 0.4292 | 0.9242 |
| 0.1334 | 3.5 | 52000 | 0.4904 | 0.9192 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
|
989 | fabriceyhc/bert-base-uncased-amazon_polarity | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: bert-base-uncased-amazon_polarity
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.94647
- task:
type: text-classification
name: Text Classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9464875
verified: true
- name: Precision
type: precision
value: 0.9528844934702675
verified: true
- name: Recall
type: recall
value: 0.939425
verified: true
- name: AUC
type: auc
value: 0.9863499156250001
verified: true
- name: F1
type: f1
value: 0.9461068798388619
verified: true
- name: loss
type: loss
value: 0.2944573760032654
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-amazon_polarity
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2945
- Accuracy: 0.9465
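## How to use
A minimal inference sketch with the `transformers` text-classification pipeline (the example review is illustrative; outputs use the `negative`/`positive` labels listed for this model):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-amazon_polarity")

# Illustrative product review
print(classifier("This kettle stopped working after two weeks, very disappointing."))
```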
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1782000
- training_steps: 17820000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7155 | 0.0 | 2000 | 0.7060 | 0.4622 |
| 0.7054 | 0.0 | 4000 | 0.6925 | 0.5165 |
| 0.6842 | 0.0 | 6000 | 0.6653 | 0.6116 |
| 0.6375 | 0.0 | 8000 | 0.5721 | 0.7909 |
| 0.4671 | 0.0 | 10000 | 0.3238 | 0.8770 |
| 0.3403 | 0.0 | 12000 | 0.3692 | 0.8861 |
| 0.4162 | 0.0 | 14000 | 0.4560 | 0.8908 |
| 0.4728 | 0.0 | 16000 | 0.5071 | 0.8980 |
| 0.5111 | 0.01 | 18000 | 0.5204 | 0.9015 |
| 0.4792 | 0.01 | 20000 | 0.5193 | 0.9076 |
| 0.544 | 0.01 | 22000 | 0.4835 | 0.9133 |
| 0.4745 | 0.01 | 24000 | 0.4689 | 0.9170 |
| 0.4403 | 0.01 | 26000 | 0.4778 | 0.9177 |
| 0.4405 | 0.01 | 28000 | 0.4754 | 0.9163 |
| 0.4375 | 0.01 | 30000 | 0.4808 | 0.9175 |
| 0.4628 | 0.01 | 32000 | 0.4340 | 0.9244 |
| 0.4488 | 0.01 | 34000 | 0.4162 | 0.9265 |
| 0.4608 | 0.01 | 36000 | 0.4031 | 0.9271 |
| 0.4478 | 0.01 | 38000 | 0.4502 | 0.9253 |
| 0.4237 | 0.01 | 40000 | 0.4087 | 0.9279 |
| 0.4601 | 0.01 | 42000 | 0.4133 | 0.9269 |
| 0.4153 | 0.01 | 44000 | 0.4230 | 0.9306 |
| 0.4096 | 0.01 | 46000 | 0.4108 | 0.9301 |
| 0.4348 | 0.01 | 48000 | 0.4138 | 0.9309 |
| 0.3787 | 0.01 | 50000 | 0.4066 | 0.9324 |
| 0.4172 | 0.01 | 52000 | 0.4812 | 0.9206 |
| 0.3897 | 0.02 | 54000 | 0.4013 | 0.9325 |
| 0.3787 | 0.02 | 56000 | 0.3837 | 0.9344 |
| 0.4253 | 0.02 | 58000 | 0.3925 | 0.9347 |
| 0.3959 | 0.02 | 60000 | 0.3907 | 0.9353 |
| 0.4402 | 0.02 | 62000 | 0.3708 | 0.9341 |
| 0.4115 | 0.02 | 64000 | 0.3477 | 0.9361 |
| 0.3876 | 0.02 | 66000 | 0.3634 | 0.9373 |
| 0.4286 | 0.02 | 68000 | 0.3778 | 0.9378 |
| 0.422 | 0.02 | 70000 | 0.3540 | 0.9361 |
| 0.3732 | 0.02 | 72000 | 0.3853 | 0.9378 |
| 0.3641 | 0.02 | 74000 | 0.3951 | 0.9386 |
| 0.3701 | 0.02 | 76000 | 0.3582 | 0.9388 |
| 0.4498 | 0.02 | 78000 | 0.3268 | 0.9375 |
| 0.3587 | 0.02 | 80000 | 0.3825 | 0.9401 |
| 0.4474 | 0.02 | 82000 | 0.3155 | 0.9391 |
| 0.3598 | 0.02 | 84000 | 0.3666 | 0.9388 |
| 0.389 | 0.02 | 86000 | 0.3745 | 0.9377 |
| 0.3625 | 0.02 | 88000 | 0.3776 | 0.9387 |
| 0.3511 | 0.03 | 90000 | 0.4275 | 0.9336 |
| 0.3428 | 0.03 | 92000 | 0.4301 | 0.9336 |
| 0.4042 | 0.03 | 94000 | 0.3547 | 0.9359 |
| 0.3583 | 0.03 | 96000 | 0.3763 | 0.9396 |
| 0.3887 | 0.03 | 98000 | 0.3213 | 0.9412 |
| 0.3915 | 0.03 | 100000 | 0.3557 | 0.9409 |
| 0.3378 | 0.03 | 102000 | 0.3627 | 0.9418 |
| 0.349 | 0.03 | 104000 | 0.3614 | 0.9402 |
| 0.3596 | 0.03 | 106000 | 0.3834 | 0.9381 |
| 0.3519 | 0.03 | 108000 | 0.3560 | 0.9421 |
| 0.3598 | 0.03 | 110000 | 0.3485 | 0.9419 |
| 0.3642 | 0.03 | 112000 | 0.3754 | 0.9395 |
| 0.3477 | 0.03 | 114000 | 0.3634 | 0.9426 |
| 0.4202 | 0.03 | 116000 | 0.3071 | 0.9427 |
| 0.3656 | 0.03 | 118000 | 0.3155 | 0.9441 |
| 0.3709 | 0.03 | 120000 | 0.2923 | 0.9433 |
| 0.374 | 0.03 | 122000 | 0.3272 | 0.9441 |
| 0.3142 | 0.03 | 124000 | 0.3348 | 0.9444 |
| 0.3452 | 0.04 | 126000 | 0.3603 | 0.9436 |
| 0.3365 | 0.04 | 128000 | 0.3339 | 0.9434 |
| 0.3353 | 0.04 | 130000 | 0.3471 | 0.9450 |
| 0.343 | 0.04 | 132000 | 0.3508 | 0.9418 |
| 0.3174 | 0.04 | 134000 | 0.3753 | 0.9436 |
| 0.3009 | 0.04 | 136000 | 0.3687 | 0.9422 |
| 0.3785 | 0.04 | 138000 | 0.3818 | 0.9396 |
| 0.3199 | 0.04 | 140000 | 0.3291 | 0.9438 |
| 0.4049 | 0.04 | 142000 | 0.3372 | 0.9454 |
| 0.3435 | 0.04 | 144000 | 0.3315 | 0.9459 |
| 0.3814 | 0.04 | 146000 | 0.3462 | 0.9401 |
| 0.359 | 0.04 | 148000 | 0.3981 | 0.9361 |
| 0.3552 | 0.04 | 150000 | 0.3226 | 0.9469 |
| 0.345 | 0.04 | 152000 | 0.3731 | 0.9384 |
| 0.3228 | 0.04 | 154000 | 0.2956 | 0.9471 |
| 0.3637 | 0.04 | 156000 | 0.2869 | 0.9477 |
| 0.349 | 0.04 | 158000 | 0.3331 | 0.9430 |
| 0.3374 | 0.04 | 160000 | 0.4159 | 0.9340 |
| 0.3718 | 0.05 | 162000 | 0.3241 | 0.9459 |
| 0.315 | 0.05 | 164000 | 0.3544 | 0.9391 |
| 0.3215 | 0.05 | 166000 | 0.3311 | 0.9451 |
| 0.3464 | 0.05 | 168000 | 0.3682 | 0.9453 |
| 0.3495 | 0.05 | 170000 | 0.3193 | 0.9469 |
| 0.305 | 0.05 | 172000 | 0.4132 | 0.9389 |
| 0.3479 | 0.05 | 174000 | 0.3465 | 0.947 |
| 0.3537 | 0.05 | 176000 | 0.3277 | 0.9449 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
990 | fabriceyhc/bert-base-uncased-dbpedia_14 | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- dbpedia_14
metrics:
- accuracy
model-index:
- name: bert-base-uncased-dbpedia_14
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dbpedia_14
type: dbpedia_14
args: dbpedia_14
metrics:
- name: Accuracy
type: accuracy
value: 0.9902857142857143
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-dbpedia_14
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the dbpedia_14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0547
- Accuracy: 0.9903
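## How to use
A minimal inference sketch with the `transformers` text-classification pipeline (the example sentence is illustrative; the raw outputs use the generic `LABEL_0`..`LABEL_13` ids listed for this model, assumed to follow the 14 dbpedia_14 ontology classes):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-dbpedia_14")

# Illustrative encyclopedic snippet; the returned label is one of LABEL_0..LABEL_13
print(classifier("The River Thames flows through southern England, including London."))
```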
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 34650
- training_steps: 346500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7757 | 0.03 | 2000 | 0.2732 | 0.9880 |
| 0.1002 | 0.06 | 4000 | 0.0620 | 0.9891 |
| 0.0547 | 0.09 | 6000 | 0.0723 | 0.9879 |
| 0.0558 | 0.12 | 8000 | 0.0678 | 0.9875 |
| 0.0534 | 0.14 | 10000 | 0.0554 | 0.9896 |
| 0.0632 | 0.17 | 12000 | 0.0670 | 0.9888 |
| 0.0612 | 0.2 | 14000 | 0.0733 | 0.9873 |
| 0.0667 | 0.23 | 16000 | 0.0623 | 0.9896 |
| 0.0636 | 0.26 | 18000 | 0.0836 | 0.9868 |
| 0.0705 | 0.29 | 20000 | 0.0776 | 0.9855 |
| 0.0726 | 0.32 | 22000 | 0.0805 | 0.9861 |
| 0.0778 | 0.35 | 24000 | 0.0713 | 0.9870 |
| 0.0713 | 0.38 | 26000 | 0.1277 | 0.9805 |
| 0.0965 | 0.4 | 28000 | 0.0810 | 0.9855 |
| 0.0881 | 0.43 | 30000 | 0.0910 | 0.985 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
|
991 | fabriceyhc/bert-base-uncased-imdb | [
"neg",
"pos"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: bert-base-uncased-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.91264
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4942
- Accuracy: 0.9126
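## How to use
A minimal inference sketch with the `transformers` text-classification pipeline (the example review is illustrative; outputs use the `neg`/`pos` labels listed for this model):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-imdb")

# Illustrative movie review
print(classifier("A beautifully shot film with outstanding performances."))
```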
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1546
- training_steps: 15468
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3952 | 0.65 | 2000 | 0.4012 | 0.86 |
| 0.2954 | 1.29 | 4000 | 0.4535 | 0.892 |
| 0.2595 | 1.94 | 6000 | 0.4320 | 0.892 |
| 0.1516 | 2.59 | 8000 | 0.5309 | 0.896 |
| 0.1167 | 3.23 | 10000 | 0.4070 | 0.928 |
| 0.0624 | 3.88 | 12000 | 0.5055 | 0.908 |
| 0.0329 | 4.52 | 14000 | 0.4342 | 0.92 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
|
992 | fabriceyhc/bert-base-uncased-yahoo_answers_topics | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- yahoo_answers_topics
metrics:
- accuracy
model-index:
- name: bert-base-uncased-yahoo_answers_topics
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yahoo_answers_topics
type: yahoo_answers_topics
args: yahoo_answers_topics
metrics:
- name: Accuracy
type: accuracy
value: 0.7499166666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-yahoo_answers_topics
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the yahoo_answers_topics dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8092
- Accuracy: 0.7499
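## How to use
A minimal inference sketch with the `transformers` text-classification pipeline (the example question is illustrative; the raw outputs use the generic `LABEL_0`..`LABEL_9` ids listed for this model, assumed to follow the ten yahoo_answers_topics classes):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-yahoo_answers_topics")

# Illustrative question; the returned label is one of LABEL_0..LABEL_9
print(classifier("What is the best way to learn a new programming language?"))
```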
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 86625
- training_steps: 866250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.162 | 0.01 | 2000 | 1.7444 | 0.5681 |
| 1.3126 | 0.02 | 4000 | 1.0081 | 0.7054 |
| 0.9592 | 0.03 | 6000 | 0.9021 | 0.7234 |
| 0.8903 | 0.05 | 8000 | 0.8827 | 0.7276 |
| 0.8685 | 0.06 | 10000 | 0.8540 | 0.7341 |
| 0.8422 | 0.07 | 12000 | 0.8547 | 0.7365 |
| 0.8535 | 0.08 | 14000 | 0.8264 | 0.7372 |
| 0.8178 | 0.09 | 16000 | 0.8331 | 0.7389 |
| 0.8325 | 0.1 | 18000 | 0.8242 | 0.7411 |
| 0.8181 | 0.12 | 20000 | 0.8356 | 0.7437 |
| 0.8171 | 0.13 | 22000 | 0.8090 | 0.7451 |
| 0.8092 | 0.14 | 24000 | 0.8469 | 0.7392 |
| 0.8057 | 0.15 | 26000 | 0.8185 | 0.7478 |
| 0.8085 | 0.16 | 28000 | 0.8090 | 0.7467 |
| 0.8229 | 0.17 | 30000 | 0.8225 | 0.7417 |
| 0.8151 | 0.18 | 32000 | 0.8262 | 0.7419 |
| 0.81 | 0.2 | 34000 | 0.8149 | 0.7383 |
| 0.8073 | 0.21 | 36000 | 0.8225 | 0.7441 |
| 0.816 | 0.22 | 38000 | 0.8037 | 0.744 |
| 0.8217 | 0.23 | 40000 | 0.8409 | 0.743 |
| 0.82 | 0.24 | 42000 | 0.8286 | 0.7385 |
| 0.8101 | 0.25 | 44000 | 0.8282 | 0.7413 |
| 0.8254 | 0.27 | 46000 | 0.8170 | 0.7414 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
|
994 | facebook/bart-large-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
pipeline_tag: zero-shot-classification
datasets:
- multi_nli
---
# bart-large-mnli
This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset.
Additional information about this model:
- The [bart-large](https://huggingface.co/facebook/bart-large) model page
- [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
](https://arxiv.org/abs/1910.13461)
- [BART fairseq implementation](https://github.com/pytorch/fairseq/tree/master/fairseq/models/bart)
## NLI-based Zero Shot Text Classification
[Yin et al.](https://arxiv.org/abs/1909.00161) proposed a method for using pre-trained NLI models as ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and constructing a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities.
This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="facebook/bart-large-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
#{'labels': ['travel', 'dancing', 'cooking'],
# 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289],
# 'sequence': 'one day I will see the world'}
```
If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently:
```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_class=True)
#{'labels': ['travel', 'exploration', 'dancing', 'cooking'],
# 'scores': [0.9945111274719238,
# 0.9383890628814697,
# 0.0057061901316046715,
# 0.0018193122232332826],
# 'sequence': 'one day I will see the world'}
```
#### With manual PyTorch
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')

# pose sequence as a NLI premise and label as a hypothesis
sequence = "one day I will see the world"  # sequence to classify
label = "travel"                           # candidate label
premise = sequence
hypothesis = f'This example is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation_strategy='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```
|
995 | fadhilarkan/distilbert-base-uncased-finetuned-cola-3 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Matthews Correlation: 1.0
Label mapping:
- Label 0: "AIMX"
- Label 1: "OWNX"
- Label 2: "CONT"
- Label 3: "BASE"
- Label 4: "MISC"
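A minimal inference sketch with the `transformers` text-classification pipeline (the raw outputs use the generic `LABEL_0`..`LABEL_4` ids, mapped here to the names above; the example sentence is illustrative):
```python
from transformers import pipeline

# Mapping from the generic ids to the label names listed above
label_names = {"LABEL_0": "AIMX", "LABEL_1": "OWNX", "LABEL_2": "CONT", "LABEL_3": "BASE", "LABEL_4": "MISC"}

classifier = pipeline("text-classification", model="fadhilarkan/distilbert-base-uncased-finetuned-cola-3")
pred = classifier("The aim of this paper is to describe a new method for sentence classification.")[0]
print(label_names[pred["label"]], pred["score"])
```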
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 192 | 0.0060 | 1.0 |
| No log | 2.0 | 384 | 0.0019 | 1.0 |
| 0.0826 | 3.0 | 576 | 0.0010 | 1.0 |
| 0.0826 | 4.0 | 768 | 0.0006 | 1.0 |
| 0.0826 | 5.0 | 960 | 0.0005 | 1.0 |
| 0.001 | 6.0 | 1152 | 0.0004 | 1.0 |
| 0.001 | 7.0 | 1344 | 0.0003 | 1.0 |
| 0.0005 | 8.0 | 1536 | 0.0003 | 1.0 |
| 0.0005 | 9.0 | 1728 | 0.0002 | 1.0 |
| 0.0005 | 10.0 | 1920 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
996 | fadhilarkan/distilbert-base-uncased-finetuned-cola-4 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0011
- Matthews Correlation: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 104 | 0.0243 | 1.0 |
| No log | 2.0 | 208 | 0.0074 | 1.0 |
| No log | 3.0 | 312 | 0.0041 | 1.0 |
| No log | 4.0 | 416 | 0.0028 | 1.0 |
| 0.0929 | 5.0 | 520 | 0.0021 | 1.0 |
| 0.0929 | 6.0 | 624 | 0.0016 | 1.0 |
| 0.0929 | 7.0 | 728 | 0.0014 | 1.0 |
| 0.0929 | 8.0 | 832 | 0.0012 | 1.0 |
| 0.0929 | 9.0 | 936 | 0.0012 | 1.0 |
| 0.0021 | 10.0 | 1040 | 0.0011 | 1.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
997 | fadhilarkan/distilbert-base-uncased-finetuned-cola | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0008
- Matthews Correlation: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 130 | 0.0166 | 1.0 |
| No log | 2.0 | 260 | 0.0054 | 1.0 |
| No log | 3.0 | 390 | 0.0029 | 1.0 |
| 0.0968 | 4.0 | 520 | 0.0019 | 1.0 |
| 0.0968 | 5.0 | 650 | 0.0014 | 1.0 |
| 0.0968 | 6.0 | 780 | 0.0011 | 1.0 |
| 0.0968 | 7.0 | 910 | 0.0010 | 1.0 |
| 0.0018 | 8.0 | 1040 | 0.0008 | 1.0 |
| 0.0018 | 9.0 | 1170 | 0.0008 | 1.0 |
| 0.0018 | 10.0 | 1300 | 0.0008 | 1.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
1,004 | fergusq/finbert-finnsentiment | [
"NEGATIVE",
"NEUTRAL",
"POSITIVE"
] | ---
language: fi
license: cc-by-4.0
---
# FinBERT fine-tuned with the FinnSentiment dataset
This is a FinBERT model fine-tuned with the [FinnSentiment dataset](https://arxiv.org/pdf/2012.02613.pdf). 90% of sentences were used for training and 10% for evaluation.
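## Usage
A minimal inference sketch with the `transformers` text-classification pipeline (the Finnish example sentence is illustrative; outputs use the `NEGATIVE`/`NEUTRAL`/`POSITIVE` labels listed for this model):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fergusq/finbert-finnsentiment")

# Illustrative Finnish sentence ("This movie was really good.")
print(classifier("Tämä elokuva oli todella hyvä."))
```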
## Evaluation results
|Metric|Score|
|--|--|
|Accuracy|0.8639028475711893|
|F1-score|0.8643024701696561|
|Precision|0.8653866541244811|
|Recall|0.8639028475711893|
|Matthews|0.6764924917164834|

## License
FinBERT-FinnSentiment is licensed under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/deed.en) (same as FinBERT and the FinnSentiment dataset). |
1,005 | ffalcao/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264826040883781
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2108
- Accuracy: 0.9265
- F1: 0.9265
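## How to use
A minimal inference sketch with the `transformers` text-classification pipeline (the example sentence is illustrative; outputs use the six emotion labels listed for this model):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ffalcao/distilbert-base-uncased-finetuned-emotion")

# Illustrative sentence; expected to lean towards "joy"
print(classifier("I can't wait to see my friends this weekend!"))
```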
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8108 | 1.0 | 250 | 0.3101 | 0.903 | 0.8995 |
| 0.2423 | 2.0 | 500 | 0.2108 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
|
1,006 | fgaim/tielectra-small-sentiment | [
"NEGATIVE",
"POSITIVE"
] | ---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: tielectra-small-sentiment
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: F1
type: f1
value: 0.8228962818003914
- name: Precision
type: precision
value: 0.8055555555555556
- name: Recall
type: recall
value: 0.841
- name: Accuracy
type: accuracy
value: 0.819
---
# Sentiment Analysis for Tigrinya with TiELECTRA small
This model is a fine-tuned version of [TiELECTRA small](https://huggingface.co/fgaim/tielectra-small) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020).
## Basic usage
```python
from transformers import pipeline
ti_sent = pipeline("sentiment-analysis", model="fgaim/tielectra-small-sentiment")
ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Results
The model achieves the following results on the evaluation set:
- F1: 0.8229
- Precision: 0.8056
- Recall: 0.841
- Accuracy: 0.819
- Loss: 0.4299
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.1
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher= {WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tela, A., Woubie, A. and Hautamäki, V. 2020.
Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya.
ArXiv, abs/2006.07698.
```
|
1,007 | fgaim/tiroberta-sentiment | [
"NEGATIVE",
"POSITIVE"
] | ---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
datasets:
- TLMD
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: tiroberta-sentiment
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.828
- name: F1
type: f1
value: 0.8476527900797165
- name: Precision
type: precision
value: 0.760731319554849
- name: Recall
type: recall
value: 0.957
---
# Sentiment Analysis for Tigrinya with TiRoBERTa
This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/roberta-base-tigrinya) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020).
## Basic usage
```python
from transformers import pipeline
ti_sent = pipeline("sentiment-analysis", model="fgaim/tiroberta-sentiment")
ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Results
It achieves the following results on the evaluation set:
- F1: 0.8477
- Precision: 0.7607
- Recall: 0.957
- Accuracy: 0.828
- Loss: 0.6796
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.1
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tela, A., Woubie, A. and Hautamäki, V. 2020.
Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya.
ArXiv, abs/2006.07698.
```
|
1,009 | finiteautomata/bertweet-base-emotion-analysis | [
"anger",
"disgust",
"fear",
"joy",
"others",
"sadness",
"surprise"
] | ---
language:
- en
tags:
- emotion-analysis
---
# Emotion Analysis in English
## bertweet-base-emotion-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with EmoEvent corpus for Emotion detection in English. Base model is [BERTweet](https://huggingface.co/vinai/bertweet-base).
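A minimal inference sketch with the generic `transformers` pipeline (the `pysentimiento` toolkit linked above is the intended interface; the example input is illustrative):
```python
from transformers import pipeline

analyzer = pipeline("text-classification", model="finiteautomata/bertweet-base-emotion-analysis")

# Illustrative tweet-style input; expected to lean towards "sadness" or "anger"
print(analyzer("I can't believe they cancelled the show, this is so upsetting"))
```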
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462)
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
and also the dataset related paper
```
@inproceedings{del2020emoevent,
title={EmoEvent: A multilingual emotion corpus based on different events},
author={del Arco, Flor Miriam Plaza and Strapparava, Carlo and Lopez, L Alfonso Urena and Mart{\'\i}n-Valdivia, M Teresa},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={1492--1498},
year={2020}
}
```
Enjoy! 🤗
|
1,010 | finiteautomata/bertweet-base-sentiment-analysis | [
"NEG",
"NEU",
"POS"
] | ---
language:
- en
tags:
- sentiment-analysis
---
# Sentiment Analysis in English
## bertweet-sentiment-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with SemEval 2017 corpus (around 40k tweets). Base model is [BERTweet](https://github.com/VinAIResearch/BERTweet), a RoBERTa model trained on English tweets.
Uses `POS`, `NEG`, `NEU` labels.
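A minimal inference sketch with the generic `transformers` pipeline (the `pysentimiento` toolkit linked above is the intended interface; the example input is illustrative):
```python
from transformers import pipeline

analyzer = pipeline("text-classification", model="finiteautomata/bertweet-base-sentiment-analysis")

# Illustrative tweet-style input; expected to lean towards POS
print(analyzer("What a beautiful day to be outside!"))
```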
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462)
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Enjoy! 🤗
|
1,011 | finiteautomata/beto-emotion-analysis | [
"anger",
"disgust",
"fear",
"joy",
"others",
"sadness",
"surprise"
] | ---
language:
- es
tags:
- emotion-analysis
---
# Emotion Analysis in Spanish
## beto-emotion-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with TASS 2020 Task 2 corpus for Emotion detection in Spanish. Base model is [BETO](https://github.com/dccuchile/beto), a BERT model trained in Spanish.
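A minimal inference sketch with the generic `transformers` pipeline (the `pysentimiento` toolkit linked above is the intended interface; the Spanish example is illustrative):
```python
from transformers import pipeline

analyzer = pipeline("text-classification", model="finiteautomata/beto-emotion-analysis")

# Illustrative Spanish sentence ("I am very happy about the news"); expected to lean towards "joy"
print(analyzer("Estoy muy feliz por la noticia"))
```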
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462)
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
and also the dataset related paper
```
@inproceedings{del2020emoevent,
title={EmoEvent: A multilingual emotion corpus based on different events},
author={del Arco, Flor Miriam Plaza and Strapparava, Carlo and Lopez, L Alfonso Urena and Mart{\'\i}n-Valdivia, M Teresa},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={1492--1498},
year={2020}
}
```
Enjoy! 🤗
|
1,012 | finiteautomata/beto-headlines-sentiment-analysis | [
"NEG",
"NEU",
"POS"
] | # Targeted Sentiment Analysis in News Headlines
BERT classifier fine-tuned on a news headlines dataset annotated for target polarity.
(details to be published)
## Examples
Input is as follows
`Headline [SEP] Target`
where headline is the news title and target is an entity present in the headline.
Try
`Alberto Fernández: "El gobierno de Macri fue un desastre" [SEP] Macri` (should be NEG)
and
`Alberto Fernández: "El gobierno de Macri fue un desastre" [SEP] Alberto Fernández` (POS or NEU)
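A minimal sketch of querying the model with the `Headline [SEP] Target` format described above, using the `transformers` text-classification pipeline (assuming the tokenizer recognizes the literal `[SEP]` marker as its separator token):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="finiteautomata/beto-headlines-sentiment-analysis")

# Target entity appended after the headline, separated by [SEP]
print(classifier('Alberto Fernández: "El gobierno de Macri fue un desastre" [SEP] Macri'))  # expected NEG
```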
|
1,013 | finiteautomata/beto-sentiment-analysis | [
"NEG",
"NEU",
"POS"
] | ---
language:
- es
tags:
- sentiment-analysis
---
# Sentiment Analysis in Spanish
## beto-sentiment-analysis
**NOTE: this model will be removed soon -- use [pysentimiento/robertuito-sentiment-analysis](https://huggingface.co/pysentimiento/robertuito-sentiment-analysis) instead**
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/pysentimiento/pysentimiento/)
Model trained with TASS 2020 corpus (around 5k tweets) of several dialects of Spanish. Base model is [BETO](https://github.com/dccuchile/beto), a BERT model trained in Spanish.
Uses `POS`, `NEG`, `NEU` labels.
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use this model in your work, please cite the following papers:
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{canete2020spanish,
title={Spanish pre-trained bert model and evaluation data},
author={Ca{\~n}ete, Jos{\'e} and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and P{\'e}rez, Jorge},
journal={Pml4dc at iclr},
volume={2020},
number={2020},
pages={1--10},
year={2020}
}
```
Enjoy! 🤗
|
1,016 | flax-community/bert-swahili-news-classification | [
"afya",
"burudani",
"kimataifa",
"kitaifa",
"michezo",
"uchumi"
] | ---
language: sw
widget:
- text: "Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake kumpigia Zari magoti kumuomba msamaha kama alivyowahi kueleza awali.Idris ameandika;"
datasets:
- flax-community/swahili-safi
---
## Swahili News Classification with BERT
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
This [model](https://huggingface.co/flax-community/bert-base-uncased-swahili) was used as the base and fine-tuned for this task.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("flax-community/bert-swahili-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("flax-community/bert-swahili-news-classification")
```
```
Eval metrics (10% valid set): {'accuracy': 0.9114740008594757}
```
|
1,017 | flax-community/clip-vision-bert-vqa-ft-6k | [
"<unk>",
"0",
"000",
"1",
"1 4",
"1 foot",
"1 hour",
"1 in back",
"1 in front",
"1 in middle",
"1 inch",
"1 on left",
"1 on right",
"1 way",
"1 world",
"1 year",
"1.00",
"10",
"10 feet",
"10 inches",
"10 years",
"100",
"100 feet",
"100 year party ct",
"1000",
"101",... | # CLIP-Vision-BERT Multilingual VQA Model
Fine-tuned CLIP-Vision-BERT on translated [VQAv2](https://visualqa.org/challenge.html) image-text pairs using a sequence classification objective. We translate the dataset into three languages other than English: French, German, and Spanish, using the [MarianMT Models](https://huggingface.co/transformers/model_doc/marian.html). This model is based on VisualBERT, which was introduced in
[this paper](https://arxiv.org/abs/1908.03557) and first released in
[this repository](https://github.com/uclanlp/visualbert). The output is 3129 class logits, the same classes as used by VisualBERT authors.
The initial weights are loaded from the Conceptual-12M 60k [checkpoints](https://huggingface.co/flax-community/clip-vision-bert-cc12m-60k).
We trained the CLIP-Vision-BERT VQA model during community week hosted by Huggingface 🤗 using JAX/Flax.
## Model description
CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from the CLIP-Vision transformer and concatenates them with BERT textual embeddings before passing them to the self-attention layers of BERT. This is done for deep cross-modal interaction between the two modes.
## Intended uses & limitations❗️
This model is fine-tuned on a multi-translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since VQAv2 is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model.
### How to use❓
You can use this model directly on visual question answering. You will need to clone the model from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below:
```python
>>> from torchvision.io import read_image
>>> import numpy as np
>>> import os
>>> from transformers import CLIPProcessor, BertTokenizerFast
>>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForSequenceClassification
>>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0])
>>> img = read_image(image_path)
>>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')
ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.
>>> clip_outputs = clip_processor(images=img)
>>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Need to transpose images as model expected channel last images.
>>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased')
>>> model = FlaxCLIPVisionBertForSequenceClassification.from_pretrained('flax-community/clip-vision-bert-vqa-ft-6k')
>>> text = "Are there teddy bears in the image?"
>>> tokens = tokenizer([text], return_tensors="np")
>>> pixel_values = np.concatenate([clip_outputs['pixel_values']])
>>> outputs = model(pixel_values=pixel_values, **tokens)
>>> preds = outputs.logits[0]
>>> sorted_indices = np.argsort(preds)[::-1] # Get reverse sorted scores
>>> top_5_indices = sorted_indices[:5]
>>> top_5_tokens = list(map(model.config.id2label.get,top_5_indices))
>>> top_5_scores = preds[top_5_indices]
>>> print(dict(zip(top_5_tokens, top_5_scores)))
{'yes': 15.809224, 'no': 7.8785815, '<unk>': 4.622649, 'very': 4.511462, 'neither': 3.600822}
```
## Training data 🏋🏻♂️
The CLIP-Vision-BERT model was fine-tuned on the translated version of the VQAv2 dataset in four languages using Marian: English, French, German and Spanish. Hence, the dataset is four times the original English questions.
The dataset questions and image URLs/paths can be downloaded from [flax-community/multilingual-vqa](https://huggingface.co/datasets/flax-community/multilingual-vqa).
## Data Cleaning 🧹
The original dataset contains 443,757 train and 214,354 validation image-question pairs; we only use the `multiple_choice_answer` field. Answers that are not present in the 3129 classes are mapped to the `<unk>` label.
**Splits**
We use the original train-val splits from the VQAv2 dataset. After translation, we get 1,775,028 train image-text pairs, and 857,416 validation image-text pairs.
## Training procedure 👨🏻💻
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of approximately 110,000. The beginning of a new document is marked with `[CLS]` and the end of one by `[SEP]`.
### Fine-tuning
The model was fine-tuned on a Google Cloud Engine TPUv3-8 machine (335 GB of RAM, 1000 GB of hard drive, 96 CPU cores, **8 v3 TPU cores**) for 6k steps with a per-device batch size of 128 and a max sequence length of 128. The optimizer used is AdamW with a learning rate of 5e-5, learning rate warmup for 1600 steps, and linear decay of the learning rate after.
We tracked experiments using TensorBoard. Here is a link to the main dashboard: [CLIP Vision BERT VQAv2 Fine-tuning Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-60k-ft/tensorboard)
#### **Fine-tuning Results 📊**
The model at this checkpoint reached **eval accuracy of 0.49** on our multilingual VQAv2 dataset.
## Team Members
- Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani)
- Bhavitvya Malik[@bhavitvyamalik](https://hf.co/bhavitvyamalik)
## Acknowledgements
We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him.
This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us frequently and helped review our approach and guided us throughout the project.
Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week and for answering our queries on the Slack channel, and for providing us with the TPU-VMs.
<img src=https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large> |
1,018 | flax-community/roberta-swahili-news-classification | [
"afya",
"burudani",
"kimataifa",
"kitaifa",
"michezo",
"uchumi"
] | ---
language: sw
widget:
- text: "Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake kumpigia Zari magoti kumuomba msamaha kama alivyowahi kueleza awali.Idris ameandika;"
datasets:
- flax-community/swahili-safi
---
## Swahili News Classification with RoBERTa
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
This [model](https://huggingface.co/flax-community/roberta-swahili) was used as the base and fine-tuned for this task.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-swahili-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("flax-community/roberta-swahili-news-classification")
```
```
Eval metrics: {'accuracy': 0.9153416415986249}
```
|
1,019 | fnlp/cpt-large | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- fill-mask
- text2text-generation
- fill-mask
- text-classification
- Summarization
- Chinese
- CPT
- BART
- BERT
- seq2seq
language: zh
---
# Chinese CPT-Large
### News
**12/30/2022**
An updated version of CPT & Chinese BART has been released. In the new version, we changed the following parts:
- **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add missing 6800+ Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV.
- **Position Embeddings** We extend the max_position_embeddings from 512 to 1024.
We initialize the new version of models with the old version of checkpoints with vocabulary alignment. Token embeddings found in the old checkpoints are copied. And other newly added parameters are randomly initialized. We further train the new CPT & Chinese BART 50K steps with batch size 2048, max-seq-length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
The results compared to the previous checkpoints are as follows:
| | AFQMC | IFLYTEK | CSL-sum | LCSTS | AVG |
| :--------- | :---: | :-----: | :-----: | :---: | :---: |
| Previous | | | | | |
| bart-base | 73.0 | 60 | 62.1 | 37.8 | 58.23 |
| cpt-base | 75.1 | 60.5 | 63.0 | 38.2 | 59.20 |
| bart-large | 75.7 | 62.1 | 64.2 | 40.6 | 60.65 |
| cpt-large | 75.9 | 61.8 | 63.7 | 42.0 | 60.85 |
| Updated | | | | | |
| bart-base | 73.03 | 61.25 | 61.51 | 38.78 | 58.64 |
| cpt-base | 74.40 | 61.23 | 62.09 | 38.81 | 59.13 |
| bart-large | 75.81 | 61.52 | 64.62 | 40.90 | 60.71 |
| cpt-large | 75.97 | 61.63 | 63.83 | 42.08 | 60.88 |
The results show that the updated models maintain comparable performance to the previous checkpoints. There are still some cases where the updated model is slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
- Note that to use updated models, please update the `modeling_cpt.py` (new version download [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache).
## Model description
This is an implementation of CPT-Large. To use CPT, please import the file `modeling_cpt.py` (**Download** [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) that defines the architecture of CPT into your project.
[**CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation**](https://arxiv.org/pdf/2109.05729.pdf)
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xipeng Qiu
**Github Link:** https://github.com/fastnlp/CPT
## Usage
```python
>>> from modeling_cpt import CPTForConditionalGeneration
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-large")
>>> model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-large")
>>> input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors='pt')
>>> pred_ids = model.generate(input_ids, num_beams=4, max_length=20)
>>> print(tokenizer.convert_ids_to_tokens(pred_ids[0]))
['[SEP]', '[CLS]', '北', '京', '是', '中', '国', '的', '首', '都', '[SEP]']
```
**Note: Please use BertTokenizer for the model vocabulary. DO NOT use original BartTokenizer.**
## Citation
```bibtex
@article{shao2021cpt,
title={CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation},
author={Yunfan Shao and Zhichao Geng and Yitao Liu and Junqi Dai and Fei Yang and Li Zhe and Hujun Bao and Xipeng Qiu},
journal={arXiv preprint arXiv:2109.05729},
year={2021}
}
```
|
1,020 | frahman/distilbert-base-uncased-distilled-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9406451612903226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1002
- Accuracy: 0.9406
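## How to use
A minimal inference sketch with the `transformers` text-classification pipeline (the example utterance is illustrative; outputs use the clinc_oos intent labels listed for this model):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="frahman/distilbert-base-uncased-distilled-clinc")

# Illustrative utterance; expected to map to the "book_flight" intent
print(classifier("Can you book me a flight from Boston to Seattle next Monday?"))
```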
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9039 | 1.0 | 318 | 0.5777 | 0.7335 |
| 0.4486 | 2.0 | 636 | 0.2860 | 0.8768 |
| 0.2528 | 3.0 | 954 | 0.1792 | 0.9210 |
| 0.176 | 4.0 | 1272 | 0.1398 | 0.9274 |
| 0.1417 | 5.0 | 1590 | 0.1209 | 0.9329 |
| 0.1245 | 6.0 | 1908 | 0.1110 | 0.94 |
| 0.1135 | 7.0 | 2226 | 0.1061 | 0.9390 |
| 0.1074 | 8.0 | 2544 | 0.1026 | 0.94 |
| 0.1032 | 9.0 | 2862 | 0.1006 | 0.9410 |
| 0.1017 | 10.0 | 3180 | 0.1002 | 0.9406 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,021 | frahman/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9187096774193548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7703
- Accuracy: 0.9187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 |
| 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 |
| 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 |
| 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1,022 | frahman/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.9206660865871332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.9205
- F1: 0.9207
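A minimal usage sketch (the input sentence is hypothetical; the checkpoint id is the one this card describes):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="frahman/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,  # score all six emotion labels
)
print(classifier("I am so happy this finally works!"))
```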
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
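The accuracy and F1 values reported below are typically computed with a `compute_metrics` callback along these lines (a sketch, not the exact training code; the weighted F1 average is an assumption):
```python
import numpy as np
from datasets import load_metric

accuracy = load_metric("accuracy")
f1 = load_metric("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```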
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8234 | 1.0 | 250 | 0.3185 | 0.9025 | 0.8992 |
| 0.2466 | 2.0 | 500 | 0.2202 | 0.9205 | 0.9207 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
1,024 | gagandeepkundi/latam-question-quality | [
"High Quality",
"Low Quality"
] | ---
tags: autonlp
language: es
widget:
- text: "I love AutoNLP 🤗"
datasets:
- gagandeepkundi/autonlp-data-text-classification
co2_eq_emissions: 20.790169878009916
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 19984005
- CO2 Emissions (in grams): 20.790169878009916
## Validation Metrics
- Loss: 0.06693269312381744
- Accuracy: 0.9789
- Precision: 0.9843244336569579
- Recall: 0.9733
- AUC: 0.99695552
- F1: 0.9787811745776348
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/gagandeepkundi/autonlp-text-classification-19984005
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("gagandeepkundi/autonlp-text-classification-19984005", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("gagandeepkundi/autonlp-text-classification-19984005", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,025 | ganeshkharad/gk-hinglish-sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- hi-en
tags:
- sentiment
- multilingual
- hindi codemix
- hinglish
license: apache-2.0
datasets:
- sail
---
# Sentiment Classification for hinglish text: `gk-hinglish-sentiment`
## Model description
Trained on a small reviews dataset.
## Intended uses & limitations
I wanted something that works well with Hinglish data, as it is widely used in India.
The available training data was smaller than expected.
#### How to use
```python
# Sample code
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("ganeshkharad/gk-hinglish-sentiment")
model = BertForSequenceClassification.from_pretrained("ganeshkharad/gk-hinglish-sentiment")

text = "kuch bhi type karo hinglish mai"  # type anything in Hinglish
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
print(output)
# Output contains 3 labels: LABEL_0 = Negative, LABEL_1 = Neutral, LABEL_2 = Positive
```
#### Limitations and bias
The data contains only Hinglish code-mixed text and was quite limited; I may update this model if I can get a good amount of data.
## Training data
The training data contains labeled examples for the 3 sentiment labels.
The pre-trained model card linked below describes the pre-training data.
I have fine-tuned the following model:
https://huggingface.co/rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment
### BibTeX entry and citation info
```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
|
1,027 | gbade786/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9233262687967644
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8217 | 1.0 | 250 | 0.3137 | 0.903 | 0.8999 |
| 0.2484 | 2.0 | 500 | 0.2180 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1,028 | gchhablani/bert-base-cased-finetuned-cola | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-cased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5956649094312695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6747
- Matthews Correlation: 0.5957
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
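A minimal inference sketch (the example sentence is hypothetical):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gchhablani/bert-base-cased-finetuned-cola",
)
# Labels are "acceptable" / "unacceptable" (linguistic acceptability)
print(classifier("The book was written by the author quickly."))
```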
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name cola \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-cola \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4921 | 1.0 | 535 | 0.5283 | 0.5068 |
| 0.2837 | 2.0 | 1070 | 0.5133 | 0.5521 |
| 0.1775 | 3.0 | 1605 | 0.6747 | 0.5957 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,029 | gchhablani/bert-base-cased-finetuned-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8410292921074044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-mnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5721
- Accuracy: 0.8410
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
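MNLI takes a premise/hypothesis pair, so inference passes two sentences to the tokenizer (a sketch with hypothetical inputs):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "gchhablani/bert-base-cased-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "A man is playing a guitar on stage.",  # premise
    "The man is performing music.",         # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # contradiction / entailment / neutral
```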
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-mnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5323 | 1.0 | 24544 | 0.4431 | 0.8302 |
| 0.3447 | 2.0 | 49088 | 0.4725 | 0.8353 |
| 0.2267 | 3.0 | 73632 | 0.5887 | 0.8368 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,030 | gchhablani/bert-base-cased-finetuned-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-cased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8602941176470589
- name: F1
type: f1
value: 0.9025641025641027
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7132
- Accuracy: 0.8603
- F1: 0.9026
- Combined Score: 0.8814
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir bert-base-cased-finetuned-mrpc \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5981 | 1.0 | 230 | 0.4580 | 0.7892 | 0.8562 | 0.8227 |
| 0.3739 | 2.0 | 460 | 0.3806 | 0.8480 | 0.8942 | 0.8711 |
| 0.1991 | 3.0 | 690 | 0.4879 | 0.8529 | 0.8958 | 0.8744 |
| 0.1286 | 4.0 | 920 | 0.6342 | 0.8529 | 0.8986 | 0.8758 |
| 0.0812 | 5.0 | 1150 | 0.7132 | 0.8603 | 0.9026 | 0.8814 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,031 | gchhablani/bert-base-cased-finetuned-qnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9099395936298736
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-qnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Accuracy: 0.9099
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-qnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.337 | 1.0 | 6547 | 0.9013 | 0.2448 |
| 0.1971 | 2.0 | 13094 | 0.9143 | 0.2839 |
| 0.1175 | 3.0 | 19641 | 0.9099 | 0.3986 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,032 | gchhablani/bert-base-cased-finetuned-qqp | [
"duplicate",
"not_duplicate"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-cased-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.9083848627256987
- name: F1
type: f1
value: 0.8767633750332712
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-qqp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3752
- Accuracy: 0.9084
- F1: 0.8768
- Combined Score: 0.8926
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qqp \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-qqp \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.308 | 1.0 | 22741 | 0.2548 | 0.8925 | 0.8556 | 0.8740 |
| 0.201 | 2.0 | 45482 | 0.2881 | 0.9032 | 0.8698 | 0.8865 |
| 0.1416 | 3.0 | 68223 | 0.3752 | 0.9084 | 0.8768 | 0.8926 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,033 | gchhablani/bert-base-cased-finetuned-rte | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6714801444043321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-rte
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7260
- Accuracy: 0.6715
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name rte \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-rte \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6915 | 1.0 | 156 | 0.6491 | 0.6606 |
| 0.55 | 2.0 | 312 | 0.6737 | 0.6570 |
| 0.3955 | 3.0 | 468 | 0.7260 | 0.6715 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,034 | gchhablani/bert-base-cased-finetuned-sst2 | [
"negative",
"positive"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9231651376146789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-sst2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3649
- Accuracy: 0.9232
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name sst2 \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-sst2 \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.233 | 1.0 | 4210 | 0.9174 | 0.2841 |
| 0.1261 | 2.0 | 8420 | 0.9278 | 0.3310 |
| 0.0768 | 3.0 | 12630 | 0.9232 | 0.3649 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,035 | gchhablani/bert-base-cased-finetuned-stsb | [
"LABEL_0"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: bert-base-cased-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8897907271421561
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-stsb
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4861
- Pearson: 0.8926
- Spearmanr: 0.8898
- Combined Score: 0.8912
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
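STS-B is a regression task (a single similarity score, roughly on a 0–5 scale), so the model outputs one logit per sentence pair; a sketch with hypothetical sentences:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "gchhablani/bert-base-cased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)  # num_labels == 1

inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)  # predicted similarity score
```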
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name stsb \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-stsb \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Combined Score | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:--------------:|:---------------:|:-------:|:---------:|
| 1.1174 | 1.0 | 360 | 0.8816 | 0.5000 | 0.8832 | 0.8800 |
| 0.3835 | 2.0 | 720 | 0.8901 | 0.4672 | 0.8915 | 0.8888 |
| 0.2388 | 3.0 | 1080 | 0.8912 | 0.4861 | 0.8926 | 0.8898 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,036 | gchhablani/bert-base-cased-finetuned-wnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.4647887323943662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Accuracy: 0.4648
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name wnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir bert-base-cased-finetuned-wnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7299 | 1.0 | 40 | 0.6923 | 0.5634 |
| 0.6982 | 2.0 | 80 | 0.7027 | 0.3803 |
| 0.6972 | 3.0 | 120 | 0.7005 | 0.4507 |
| 0.6992 | 4.0 | 160 | 0.6977 | 0.5352 |
| 0.699 | 5.0 | 200 | 0.6996 | 0.4648 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,037 | gchhablani/bert-large-cased-finetuned-cola | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-large-cased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5957317644481708
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-cola
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8385
- Matthews Correlation: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5533 | 1.0 | 2138 | 0.7943 | 0.4439 |
| 0.5004 | 2.0 | 4276 | 0.7272 | 0.5678 |
| 0.2865 | 3.0 | 6414 | 0.8385 | 0.5957 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,038 | gchhablani/bert-large-cased-finetuned-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-large-cased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-mrpc
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6441 | 1.0 | 917 | 0.6370 | 0.6838 | 0.8122 | 0.7480 |
| 0.6451 | 2.0 | 1834 | 0.6553 | 0.6838 | 0.8122 | 0.7480 |
| 0.6428 | 3.0 | 2751 | 0.6332 | 0.6838 | 0.8122 | 0.7480 |
| 0.6476 | 4.0 | 3668 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.6499 | 5.0 | 4585 | 0.6274 | 0.6838 | 0.8122 | 0.7480 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,039 | gchhablani/bert-large-cased-finetuned-rte | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-large-cased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6642599277978339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-rte
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5187
- Accuracy: 0.6643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6969 | 1.0 | 623 | 0.7039 | 0.5343 |
| 0.5903 | 2.0 | 1246 | 0.6461 | 0.7184 |
| 0.4557 | 3.0 | 1869 | 1.5187 | 0.6643 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,040 | gchhablani/bert-large-cased-finetuned-wnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-large-cased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.352112676056338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-wnli
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7087
- Accuracy: 0.3521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.7114 | 1.0 | 159 | 0.5634 | 0.6923 |
| 0.7141 | 2.0 | 318 | 0.5634 | 0.6895 |
| 0.7063 | 3.0 | 477 | 0.5634 | 0.6930 |
| 0.712 | 4.0 | 636 | 0.4507 | 0.7077 |
| 0.7037 | 5.0 | 795 | 0.3521 | 0.7087 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,041 | gchhablani/fnet-base-finetuned-cola | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.35940659235571387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-cola
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5929
- Matthews Correlation: 0.3594
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name cola \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-cola \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5895 | 1.0 | 535 | 0.6146 | 0.1699 |
| 0.4656 | 2.0 | 1070 | 0.5667 | 0.3047 |
| 0.3329 | 3.0 | 1605 | 0.5929 | 0.3594 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,042 | gchhablani/fnet-base-finetuned-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.7674938974776241
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-mnli
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6443
- Accuracy: 0.7675
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-mnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7143 | 1.0 | 24544 | 0.6169 | 0.7504 |
| 0.5407 | 2.0 | 49088 | 0.6218 | 0.7627 |
| 0.4178 | 3.0 | 73632 | 0.6564 | 0.7658 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,043 | gchhablani/fnet-base-finetuned-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-base-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7720588235294118
- name: F1
type: f1
value: 0.8502415458937198
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-mrpc
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9653
- Accuracy: 0.7721
- F1: 0.8502
- Combined Score: 0.8112
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir fnet-base-finetuned-mrpc \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.544 | 1.0 | 230 | 0.5272 | 0.7328 | 0.8300 | 0.7814 |
| 0.4034 | 2.0 | 460 | 0.6211 | 0.7255 | 0.8298 | 0.7776 |
| 0.2602 | 3.0 | 690 | 0.9110 | 0.7230 | 0.8306 | 0.7768 |
| 0.1688 | 4.0 | 920 | 0.8640 | 0.7696 | 0.8489 | 0.8092 |
| 0.0913 | 5.0 | 1150 | 0.9653 | 0.7721 | 0.8502 | 0.8112 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,044 | gchhablani/fnet-base-finetuned-qnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8438586857038257
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-qnli
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4746
- Accuracy: 0.8439
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name qnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-qnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4597 | 1.0 | 6547 | 0.3713 | 0.8411 |
| 0.3252 | 2.0 | 13094 | 0.3781 | 0.8420 |
| 0.2243 | 3.0 | 19641 | 0.4746 | 0.8439 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,045 | gchhablani/fnet-base-finetuned-qqp | [
"duplicate",
"not_duplicate"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-base-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8847390551570616
- name: F1
type: f1
value: 0.8466197090382463
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-qqp
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3686
- Accuracy: 0.8847
- F1: 0.8466
- Combined Score: 0.8657
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name qqp \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-qqp \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3484 | 1.0 | 22741 | 0.3014 | 0.8676 | 0.8297 | 0.8487 |
| 0.2387 | 2.0 | 45482 | 0.3011 | 0.8801 | 0.8429 | 0.8615 |
| 0.1739 | 3.0 | 68223 | 0.3686 | 0.8847 | 0.8466 | 0.8657 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,046 | gchhablani/fnet-base-finetuned-rte | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.628158844765343
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-rte
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6978
- Accuracy: 0.6282
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name rte \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-rte \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6829 | 1.0 | 156 | 0.6657 | 0.5704 |
| 0.6174 | 2.0 | 312 | 0.6784 | 0.6101 |
| 0.5141 | 3.0 | 468 | 0.6978 | 0.6282 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,047 | gchhablani/fnet-base-finetuned-sst2 | [
"negative",
"positive"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8944954128440367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-sst2
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4674
- Accuracy: 0.8945
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
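The card does not yet document usage; as a minimal sketch (the example sentence and the printed label string are illustrative only), the checkpoint can be queried through the text-classification pipeline:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="gchhablani/fnet-base-finetuned-sst2")
print(classifier("This movie was an absolute delight to watch."))
# e.g. [{'label': 'positive', 'score': ...}] -- label names depend on the exported config
```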
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name sst2 \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-sst2 \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.2956 | 1.0 | 4210 | 0.8819 | 0.3128 |
| 0.1746 | 2.0 | 8420 | 0.8979 | 0.3850 |
| 0.1204 | 3.0 | 12630 | 0.8945 | 0.4674 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,048 | gchhablani/fnet-base-finetuned-stsb | [
"LABEL_0"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: fnet-base-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8219397497728022
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-stsb
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7894
- Pearson: 0.8256
- Spearmanr: 0.8219
- Combined Score: 0.8238
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
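The card does not yet document usage; since STS-B is a regression task, a minimal sketch (example sentences are illustrative) reads the single logit as a similarity score:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_id = "gchhablani/fnet-base-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)  # regression head with a single output
inputs = tokenizer("A plane is taking off.", "An air plane is taking off.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # roughly on the 0-5 STS-B similarity scale
print(score)
```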
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name stsb \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-stsb \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Combined Score | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:--------------:|:---------------:|:-------:|:---------:|
| 1.5473 | 1.0 | 360 | 0.8120 | 0.7751 | 0.8115 | 0.8125 |
| 0.6954 | 2.0 | 720 | 0.8145 | 0.8717 | 0.8160 | 0.8130 |
| 0.4828 | 3.0 | 1080 | 0.8238 | 0.7894 | 0.8256 | 0.8219 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,049 | gchhablani/fnet-base-finetuned-wnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5492957746478874
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-wnli
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6887
- Accuracy: 0.5493
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name wnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir fnet-base-finetuned-wnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7052 | 1.0 | 40 | 0.6902 | 0.5634 |
| 0.6957 | 2.0 | 80 | 0.7013 | 0.4366 |
| 0.6898 | 3.0 | 120 | 0.6898 | 0.5352 |
| 0.6958 | 4.0 | 160 | 0.6874 | 0.5634 |
| 0.6982 | 5.0 | 200 | 0.6887 | 0.5493 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,050 | gchhablani/fnet-large-finetuned-cola-copy | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6243
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 |
| 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 |
| 0.616 | 3.0 | 6414 | 0.6243 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,051 | gchhablani/fnet-large-finetuned-cola-copy2 | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy2
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6173
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6192 | 1.0 | 2138 | 0.6443 | 0.0 |
| 0.6177 | 2.0 | 4276 | 0.6296 | 0.0 |
| 0.6128 | 3.0 | 6414 | 0.6173 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,052 | gchhablani/fnet-large-finetuned-cola-copy3 | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy3
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6554
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6408 | 1.0 | 2138 | 0.7329 | 0.0 |
| 0.6589 | 2.0 | 4276 | 0.6311 | 0.0 |
| 0.6467 | 3.0 | 6414 | 0.6554 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,053 | gchhablani/fnet-large-finetuned-cola-copy4 | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy4
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy4
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6500
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6345 | 1.0 | 2138 | 0.6611 | 0.0 |
| 0.6359 | 2.0 | 4276 | 0.6840 | 0.0 |
| 0.6331 | 3.0 | 6414 | 0.6500 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,054 | gchhablani/fnet-large-finetuned-cola | [
"acceptable",
"unacceptable"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6243
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 |
| 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 |
| 0.616 | 3.0 | 6414 | 0.6243 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,055 | gchhablani/fnet-large-finetuned-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-large-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8259803921568627
- name: F1
type: f1
value: 0.8798646362098139
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-mrpc
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0872
- Accuracy: 0.8260
- F1: 0.8799
- Combined Score: 0.8529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5656 | 1.0 | 917 | 0.6999 | 0.7843 | 0.8581 | 0.8212 |
| 0.3874 | 2.0 | 1834 | 0.7280 | 0.8088 | 0.8691 | 0.8390 |
| 0.1627 | 3.0 | 2751 | 1.1274 | 0.8162 | 0.8780 | 0.8471 |
| 0.0751 | 4.0 | 3668 | 1.0289 | 0.8333 | 0.8870 | 0.8602 |
| 0.0339 | 5.0 | 4585 | 1.0872 | 0.8260 | 0.8799 | 0.8529 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,056 | gchhablani/fnet-large-finetuned-qqp | [
"duplicate",
"not_duplicate"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-large-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8943111550828593
- name: F1
type: f1
value: 0.8556565212985171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-qqp
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5515
- Accuracy: 0.8943
- F1: 0.8557
- Combined Score: 0.8750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.4574 | 1.0 | 90962 | 0.4946 | 0.8694 | 0.8297 | 0.8496 |
| 0.3387 | 2.0 | 181924 | 0.4745 | 0.8874 | 0.8437 | 0.8655 |
| 0.2029 | 3.0 | 272886 | 0.5515 | 0.8943 | 0.8557 | 0.8750 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,057 | gchhablani/fnet-large-finetuned-rte | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6425992779783394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-rte
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.6426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7105 | 1.0 | 623 | 0.6887 | 0.5740 |
| 0.6714 | 2.0 | 1246 | 0.6742 | 0.6209 |
| 0.509 | 3.0 | 1869 | 0.7528 | 0.6426 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,058 | gchhablani/fnet-large-finetuned-sst2 | [
"negative",
"positive"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9048165137614679
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-sst2
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5240
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.394 | 1.0 | 16838 | 0.3896 | 0.8968 |
| 0.2076 | 2.0 | 33676 | 0.5100 | 0.8956 |
| 0.1148 | 3.0 | 50514 | 0.5240 | 0.9048 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,059 | gchhablani/fnet-large-finetuned-stsb | [
"LABEL_0"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: fnet-large-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8532669137129205
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-stsb
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6250
- Pearson: 0.8554
- Spearmanr: 0.8533
- Combined Score: 0.8543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.0727 | 1.0 | 1438 | 0.7718 | 0.8187 | 0.8240 | 0.8214 |
| 0.4619 | 2.0 | 2876 | 0.7704 | 0.8472 | 0.8500 | 0.8486 |
| 0.2401 | 3.0 | 4314 | 0.6250 | 0.8554 | 0.8533 | 0.8543 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,060 | gchhablani/fnet-large-finetuned-wnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.38028169014084506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-wnli
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6953
- Accuracy: 0.3803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7217 | 1.0 | 159 | 0.6864 | 0.5634 |
| 0.7056 | 2.0 | 318 | 0.6869 | 0.5634 |
| 0.706 | 3.0 | 477 | 0.6875 | 0.5634 |
| 0.7032 | 4.0 | 636 | 0.6931 | 0.5634 |
| 0.7025 | 5.0 | 795 | 0.6953 | 0.3803 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1,070 | gurkan08/bert-turkish-text-classification | [
"ekonomi",
"spor",
"saglik",
"kultur_sanat",
"bilim_teknoloji",
"egitim"
] | ---
language: tr
---
# Turkish News Text Classification
Turkish text classification model obtained by fine-tuning the Turkish BERT model (dbmdz/bert-base-turkish-cased).
# Dataset
The dataset consists of 11 classes obtained from https://www.trthaber.com/. The model was created using the 6 most distinctive classes.
Dataset can be accessed at https://github.com/gurkan08/datasets/tree/master/trt_11_category.
label_dict = {
'LABEL_0': 'ekonomi',
'LABEL_1': 'spor',
'LABEL_2': 'saglik',
'LABEL_3': 'kultur_sanat',
'LABEL_4': 'bilim_teknoloji',
'LABEL_5': 'egitim'
}
70% of the data were used for training and 30% for testing.
train F1-weighted score = 97%
test F1-weighted score = 94%
# Usage
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("gurkan08/bert-turkish-text-classification")
model = AutoModelForSequenceClassification.from_pretrained("gurkan08/bert-turkish-text-classification")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
text = ["Süper Lig'in 6. haftasında Sivasspor ile Çaykur Rizespor karşı karşıya geldi...",
"Son 24 saatte 69 kişi Kovid-19 nedeniyle yaşamını yitirdi, 1573 kişi iyileşti"]
out = nlp(text)
label_dict = {
'LABEL_0': 'ekonomi',
'LABEL_1': 'spor',
'LABEL_2': 'saglik',
'LABEL_3': 'kultur_sanat',
'LABEL_4': 'bilim_teknoloji',
'LABEL_5': 'egitim'
}
results = []
for result in out:
result['label'] = label_dict[result['label']]
results.append(result)
print(results)
# > [{'label': 'spor', 'score': 0.9992026090621948}, {'label': 'saglik', 'score': 0.9972177147865295}]
|
1,075 | hd10/semeval2020_task11_tc | [
"Appeal_to_Authority",
"Appeal_to_fear-prejudice",
"Bandwagon,Reductio_ad_hitlerum",
"Black-and-White_Fallacy",
"Causal_Oversimplification",
"Doubt",
"Exaggeration,Minimisation",
"Flag-Waving",
"Loaded_Language",
"Name_Calling,Labeling",
"Repetition",
"Slogans",
"Thought-terminating_Cliches"... | Technique Classification for https://propaganda.qcri.org/ptc/index.html |
1,076 | hectorcotelo/autonlp-spanish_songs-202661 | [
"average",
"bad",
"good",
"hit",
"worst"
] | ---
tags: autonlp
language: es
widget:
- text: "Y si me tomo una cerveza
Vuelves a mi cabeza
Y empiezo a recordarte
Es que me gusta cómo besas
Con tu delicadeza
Puede ser que
Tú y yo, somos el uno para el otro
Que no dejo de pensarte
Quise olvidarte y tomé un poco
Y resultó extrañarte, yeah"
datasets:
- hectorcotelo/autonlp-data-spanish_songs
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 202661
## Validation Metrics
- Loss: 1.5369086265563965
- Accuracy: 0.30762817840766987
- Macro F1: 0.28034259092597485
- Micro F1: 0.30762817840766987
- Weighted F1: 0.28072818168048186
- Macro Precision: 0.3113843896292072
- Micro Precision: 0.30762817840766987
- Weighted Precision: 0.3128459166476807
- Macro Recall: 0.3071652685939504
- Micro Recall: 0.30762817840766987
- Weighted Recall: 0.30762817840766987
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/hectorcotelo/autonlp-spanish_songs-202661
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("hectorcotelo/autonlp-spanish_songs-202661", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hectorcotelo/autonlp-spanish_songs-202661", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,077 | hemekci/off_detection_turkish | [
"not offensive",
"offensive"
] | ---
language: tr
widget:
- text: "sevelim sevilelim bu dunya kimseye kalmaz"
---
## Offensive Language Detection Model in Turkish
- uses BERT and PyTorch
- fine-tuned with Twitter data
- UTF-8 configuration is applied (a minimal usage sketch is shown below)
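A minimal usage sketch, assuming this repository's checkpoint loads as a standard sequence-classification model; the widget sentence above is reused as the example input:
```python
from transformers import pipeline
detector = pipeline("text-classification", model="hemekci/off_detection_turkish")
print(detector("sevelim sevilelim bu dunya kimseye kalmaz"))
# e.g. [{'label': 'not offensive', 'score': ...}] -- label strings depend on the exported config
```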
### Training Data
Number of training sentences: 31,277
**Example Tweets**
- 19823 Daliaan yifng cok erken attin be... 1.38 ...| NOT|
- 30525 @USER Bak biri kollarımda uyuyup gitmem diyor..|NOT|
- 26468 Helal olsun be :) Norveçten sabaha karşı geldi aq... | OFF|
- 14105 @USER Sunu cekecek ve güzel oldugunu söylecek aptal... |OFF|
- 4958 Ya seni yerim ben şapşal şey 🤗 | NOT|
- 12966 Herkesin akıllı geçindiği bir sosyal medyamız var ... |NOT|
- 5788 Maçın özetlerini izleyenler futbolcular gidiyo... |NOT|
|Label |Count |
|--|--|
|NOT | 25231|
|OFF | 6046|
### Validation
|epoch |Training Loss | Valid. Loss | Valid.Accuracy | Training Time | Validation Time |
|--|--|--|--|--|--|
|1 | 0.31| 0.28| 0.89| 0:07:14 | 0:00:13
|2 | 0.18| 0.29| 0.90| 0:07:18 | 0:00:13
|3 | 0.08| 0.40| 0.89| 0:07:16 | 0:00:13
|4 | 0.04| 0.59| 0.89| 0:07:13 | 0:00:13
**Matthews Corr. Coef. (-1 : +1):**
Total MCC Score: 0.633
|
1,081 | huggingface/CodeBERTa-language-id | [
"go",
"java",
"javascript",
"php",
"python",
"ruby"
] | ---
language: code
thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png
datasets:
- code_search_net
---
# CodeBERTa-language-id: The World’s fanciest programming language identification algo 🤯
To demonstrate the usefulness of our CodeBERTa pretrained model on downstream tasks beyond language modeling, we fine-tune the [`CodeBERTa-small-v1`](https://huggingface.co/huggingface/CodeBERTa-small-v1) checkpoint on the task of classifying a sample of code into the programming language it's written in (*programming language identification*).
We add a sequence classification head on top of the model.
On the evaluation dataset, we attain an eval accuracy and F1 > 0.999 which is not surprising given that the task of language identification is relatively easy (see an intuition why, below).
## Quick start: using the raw model
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
CODEBERTA_LANGUAGE_ID = "huggingface/CodeBERTa-language-id"
tokenizer = RobertaTokenizer.from_pretrained(CODEBERTA_LANGUAGE_ID)
model = RobertaForSequenceClassification.from_pretrained(CODEBERTA_LANGUAGE_ID)
CODE_TO_IDENTIFY = "def f(x):\n    return x**2"  # any snippet of source code to classify
input_ids = tokenizer.encode(CODE_TO_IDENTIFY, return_tensors="pt")
logits = model(input_ids)[0]
language_idx = logits.argmax()  # index for the resulting label
```
## Quick start: using Pipelines 💪
```python
from transformers import TextClassificationPipeline
pipeline = TextClassificationPipeline(
model=RobertaForSequenceClassification.from_pretrained(CODEBERTA_LANGUAGE_ID),
tokenizer=RobertaTokenizer.from_pretrained(CODEBERTA_LANGUAGE_ID)
)
pipeline(CODE_TO_IDENTIFY)
```
Let's start with something very easy:
```python
pipeline("""
def f(x):
return x**2
""")
# [{'label': 'python', 'score': 0.9999965}]
```
Now let's probe shorter code samples:
```python
pipeline("const foo = 'bar'")
# [{'label': 'javascript', 'score': 0.9977546}]
```
What if I remove the `const` token from the assignment?
```python
pipeline("foo = 'bar'")
# [{'label': 'javascript', 'score': 0.7176245}]
```
For some reason, this is still statistically detected as JS code, even though it's also valid Python code. However, if we slightly tweak it:
```python
pipeline("foo = u'bar'")
# [{'label': 'python', 'score': 0.7638422}]
```
This is now detected as Python (Notice the `u` string modifier).
Okay, enough with the JS and Python domination already! Let's try fancier languages:
```python
pipeline("echo $FOO")
# [{'label': 'php', 'score': 0.9995257}]
```
(Yes, I used the word "fancy" to describe PHP 😅)
```python
pipeline("outcome := rand.Intn(6) + 1")
# [{'label': 'go', 'score': 0.9936151}]
```
Why is the problem of language identification so easy (with the correct toolkit)? Because code's syntax is rigid, and simple tokens such as `:=` (the assignment operator in Go) are perfect predictors of the underlying language:
```python
pipeline(":=")
# [{'label': 'go', 'score': 0.9998052}]
```
By the way, because we trained our own custom tokenizer on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset, and it handles streams of bytes in a very generic way, syntactic constructs such as `:=` are represented by a single token:
```python
self.tokenizer.encode(" :=", add_special_tokens=False)
# [521]
```
<br>
## Fine-tuning code
<details>
```python
import gzip
import json
import logging
import os
from pathlib import Path
from typing import Dict, List, Tuple
import numpy as np
import torch
from sklearn.metrics import f1_score
from tokenizers.implementations.byte_level_bpe import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.dataset import Dataset
from torch.utils.tensorboard.writer import SummaryWriter
from tqdm import tqdm, trange
from transformers import RobertaForSequenceClassification
from transformers.data.metrics import acc_and_f1, simple_accuracy
logging.basicConfig(level=logging.INFO)
CODEBERTA_PRETRAINED = "huggingface/CodeBERTa-small-v1"
LANGUAGES = [
"go",
"java",
"javascript",
"php",
"python",
"ruby",
]
FILES_PER_LANGUAGE = 1
EVALUATE = True
# Set up tokenizer
tokenizer = ByteLevelBPETokenizer("./pretrained/vocab.json", "./pretrained/merges.txt",)
tokenizer._tokenizer.post_processor = BertProcessing(
("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=512)
# Set up Tensorboard
tb_writer = SummaryWriter()
class CodeSearchNetDataset(Dataset):
examples: List[Tuple[List[int], int]]
def __init__(self, split: str = "train"):
"""
train | valid | test
"""
self.examples = []
src_files = []
for language in LANGUAGES:
src_files += list(
Path("../CodeSearchNet/resources/data/").glob(f"{language}/final/jsonl/{split}/*.jsonl.gz")
)[:FILES_PER_LANGUAGE]
for src_file in src_files:
label = src_file.parents[3].name
label_idx = LANGUAGES.index(label)
print("🔥", src_file, label)
lines = []
fh = gzip.open(src_file, mode="rt", encoding="utf-8")
for line in fh:
o = json.loads(line)
lines.append(o["code"])
examples = [(x.ids, label_idx) for x in tokenizer.encode_batch(lines)]
self.examples += examples
print("🔥🔥")
def __len__(self):
return len(self.examples)
def __getitem__(self, i):
# We’ll pad at the batch level.
return self.examples[i]
model = RobertaForSequenceClassification.from_pretrained(CODEBERTA_PRETRAINED, num_labels=len(LANGUAGES))
train_dataset = CodeSearchNetDataset(split="train")
eval_dataset = CodeSearchNetDataset(split="test")
def collate(examples):
input_ids = pad_sequence([torch.tensor(x[0]) for x in examples], batch_first=True, padding_value=1)
labels = torch.tensor([x[1] for x in examples])
# ^^ unnecessary .unsqueeze(-1)
return input_ids, labels
train_dataloader = DataLoader(train_dataset, batch_size=256, shuffle=True, collate_fn=collate)
batch = next(iter(train_dataloader))
model.to("cuda")
model.train()
for param in model.roberta.parameters():
param.requires_grad = False
## ^^ Only train final layer.
print(f"num params:", model.num_parameters())
print(f"num trainable params:", model.num_parameters(only_trainable=True))
def evaluate():
eval_loss = 0.0
nb_eval_steps = 0
preds = np.empty((0), dtype=np.int64)
out_label_ids = np.empty((0), dtype=np.int64)
model.eval()
eval_dataloader = DataLoader(eval_dataset, batch_size=512, collate_fn=collate)
for step, (input_ids, labels) in enumerate(tqdm(eval_dataloader, desc="Eval")):
with torch.no_grad():
outputs = model(input_ids=input_ids.to("cuda"), labels=labels.to("cuda"))
loss = outputs[0]
logits = outputs[1]
eval_loss += loss.mean().item()
nb_eval_steps += 1
preds = np.append(preds, logits.argmax(dim=1).detach().cpu().numpy(), axis=0)
out_label_ids = np.append(out_label_ids, labels.detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
acc = simple_accuracy(preds, out_label_ids)
f1 = f1_score(y_true=out_label_ids, y_pred=preds, average="macro")
print("=== Eval: loss ===", eval_loss)
print("=== Eval: acc. ===", acc)
print("=== Eval: f1 ===", f1)
# print(acc_and_f1(preds, out_label_ids))
tb_writer.add_scalars("eval", {"loss": eval_loss, "acc": acc, "f1": f1}, global_step)
### Training loop
global_step = 0
train_iterator = trange(0, 4, desc="Epoch")
optimizer = torch.optim.AdamW(model.parameters())
for _ in train_iterator:
epoch_iterator = tqdm(train_dataloader, desc="Iteration")
for step, (input_ids, labels) in enumerate(epoch_iterator):
optimizer.zero_grad()
outputs = model(input_ids=input_ids.to("cuda"), labels=labels.to("cuda"))
loss = outputs[0]
loss.backward()
tb_writer.add_scalar("training_loss", loss.item(), global_step)
optimizer.step()
global_step += 1
if EVALUATE and global_step % 50 == 0:
evaluate()
model.train()
evaluate()
os.makedirs("./models/CodeBERT-language-id", exist_ok=True)
model.save_pretrained("./models/CodeBERT-language-id")
```
</details>
<br>
## CodeSearchNet citation
<details>
```bibtex
@article{husain_codesearchnet_2019,
title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}},
shorttitle = {{CodeSearchNet} {Challenge}},
url = {http://arxiv.org/abs/1909.09436},
urldate = {2020-03-12},
journal = {arXiv:1909.09436 [cs, stat]},
author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
month = sep,
year = {2019},
note = {arXiv: 1909.09436},
}
```
</details>
|
1,083 | ibraheemmoosa/xlmindic-base-multiscript-soham | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
language:
- as
- bn
- gu
- hi
- mr
- ne
- or
- pa
- si
- sa
- bpy
- bh
- gom
- mai
license: apache-2.0
datasets:
- oscar
tags:
- multilingual
- albert
- fill-mask
- xlmindic
- nlp
- indoaryan
- indicnlp
- iso15919
- text-classification
widget:
- text : 'চীনের মধ্যাঞ্চলে আরও একটি শহরের বাসিন্দারা আবার ঘরবন্দী হয়ে পড়েছেন। আজ মঙ্গলবার নতুন করে লকডাউন–সংক্রান্ত বিধিনিষেধ জারি হওয়ার পর ঘরে আটকা পড়েছেন তাঁরা। করোনার অতি সংক্রামক নতুন ধরন অমিক্রনের বিস্তার ঠেকাতে এমন পদক্ষেপ নিয়েছে কর্তৃপক্ষ। খবর বার্তা সংস্থা এএফপির।'
co2_eq_emissions:
emissions: "0.21 in grams of CO2"
source: "calculated using this webstie https://mlco2.github.io/impact/#compute"
training_type: "fine-tuning"
geographical_location: "NA"
hardware_used: "P100 for about 1.5 hours"
---
# XLMIndic Base Multiscript
This model is fine-tuned from [this model](https://huggingface.co/ibraheemmoosa/xlmindic-base-multiscript) on the Soham Bangla News Classification task, which is part of the IndicGLUE benchmark.
## Model description
This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 512 sequence length
## Training data
This model was fine-tuned on the Soham dataset, which is part of the IndicGLUE benchmark.
## Training procedure
### Preprocessing
The texts are tokenized using SentencePiece and a vocabulary size of 50,000.
### Training
The model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.
## Evaluation results
See results specific to Soham in the following table.
### IndicGLUE
Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript | XLMIndic-Base-Multiscript (This Model)
-----| ----- | ----- | ------ | ------- | --------
Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76
Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26
Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58
BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50
Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49
INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69
INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23
IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84
IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20
MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33
Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21
## Intended uses & limitations
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages.
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Then you can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-multiscript')
>>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।"
>>> unmasker(text)
[{'score': 0.34163928031921387,
'token': 5399,
'token_str': 'কবি',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.30519795417785645,
'token': 33436,
'token_str': 'people',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি people, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.29130080342292786,
'token': 30476,
'token_str': 'সাহিত্যিক',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি সাহিত্যিক, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.031051287427544594,
'token': 6139,
'token_str': 'লেখক',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি লেখক, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'},
{'score': 0.002705035964027047,
'token': 38443,
'token_str': 'শিল্পীরা',
'sequence': 'রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি শিল্পীরা, পন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।'}]
```
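The example above exercises the base checkpoint with fill-mask. For the Soham news classifier this card describes, a minimal sketch is shown below; the widget sentence above is reused, and the classifier emits generic LABEL_0 to LABEL_5 names whose mapping to the news categories depends on the exported config:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="ibraheemmoosa/xlmindic-base-multiscript-soham")
print(classifier("চীনের মধ্যাঞ্চলে আরও একটি শহরের বাসিন্দারা আবার ঘরবন্দী হয়ে পড়েছেন।"))
# e.g. [{'label': 'LABEL_2', 'score': ...}]
```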
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.
## Contact
Feel free to contact us if you have any ideas or if you want to know more about our models.
- Ibraheem Muhammad Moosa (ibraheemmoosa1347@gmail.com)
- Mahmud Elahi Akhter (mahmud.akhter01@northsouth.edu)
- Ashfia Binte Habib
## BibTeX entry and citation info
```bibtex
@article{Moosa2022DoesTH,
title={Does Transliteration Help Multilingual Language Modeling?},
author={Ibraheem Muhammad Moosa and Mahmuda Akhter and Ashfia Binte Habib},
journal={ArXiv},
year={2022},
volume={abs/2201.12501}
}
``` |
1,084 | ibraheemmoosa/xlmindic-base-uniscript-soham | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
language:
- as
- bn
- gu
- hi
- mr
- ne
- or
- pa
- si
- sa
- bpy
- mai
- bh
- gom
license: apache-2.0
datasets:
- oscar
tags:
- multilingual
- albert
- xlmindic
- nlp
- indoaryan
- indicnlp
- iso15919
- transliteration
- text-classification
widget:
- text : 'cīnēra madhyāñcalē āraō ēkaṭi śaharēra bāsindārā ābāra gharabandī haẏē paṛēchēna. āja maṅgalabāra natuna karē lakaḍāuna–saṁkrānta bidhiniṣēdha jāri haōẏāra para gharē āṭakā paṛēchēna tām̐rā. karōnāra ati saṁkrāmaka natuna dharana amikranēra bistāra ṭhēkātē ēmana padakṣēpa niẏēchē kartr̥pakṣa. khabara bārtā saṁsthā ēēphapira.'
co2_eq_emissions:
emissions: "0.21 in grams of CO2"
source: "calculated using this webstie https://mlco2.github.io/impact/#compute"
training_type: "fine-tuning"
geographical_location: "NA"
hardware_used: "P100 for about 1.5 hours"
---
# XLMIndic Base Uniscript
This model is fine-tuned from [this model](https://huggingface.co/ibraheemmoosa/xlmindic-base-uniscript) on the Soham Bangla News Classification task, which is part of the IndicGLUE benchmark. **Before pretraining this model we transliterate the text to [ISO-15919](https://en.wikipedia.org/wiki/ISO_15919) format using the [Aksharamukha](https://pypi.org/project/aksharamukha/)
library.** A demo of Aksharamukha library is hosted [here](https://aksharamukha.appspot.com/converter)
where you can transliterate your text and use it on our model on the inference widget.
## Model description
This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 512 sequence length
## Training data
This model was fine-tuned on the Soham dataset, which is part of the IndicGLUE benchmark.
## Transliteration
*The unique component of this model is that it takes in ISO-15919 transliterated text.*
The motivation behind this is as follows. When two languages share vocabularies, a machine learning model can exploit that to learn good cross-lingual representations. However, if these two languages use different writing scripts it is difficult for a model to make the connection. Thus, if we can write the two languages in a single script, then it is easier for the model to learn good cross-lingual representations.
For many of the scripts currently in use, there are standard transliteration schemes to convert to the Latin script. In particular, for the Indic scripts the ISO-15919 transliteration scheme is designed to consistently transliterate texts written in different Indic scripts to the Latin script.
An example of ISO-15919 transliteration for a piece of **Bangla** text is the following:
**Original:** "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক।"
**Transliterated:** 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika.'
Another example for a piece of **Hindi** text is the following:
**Original:** "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
**Transliterated:** "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
## Training procedure
### Preprocessing
The texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000.
### Training
The model was trained for 8 epochs with a batch size of 16 and a learning rate of *2e-5*.
## Evaluation results
See results specific to Soham in the following table.
### IndicGLUE
Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript (This Model) | XLMIndic-Base-Multiscript (Ablation Model)
-----| ----- | ----- | ------ | ------- | --------
Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76
Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26
Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58
BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50
Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49
INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69
INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23
IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84
IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20
MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33
Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21
## Intended uses & limitations
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telugu, Kannada etc. share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
To use this model you will need to first install the [Aksharamukha](https://pypi.org/project/aksharamukha/) library.
```bash
pip install aksharamukha
```
Using this library you can transliterate any text written in Indic scripts in the following way:
```python
>>> from aksharamukha import transliterate
>>> text = "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
>>> transliterated_text = transliterate.process('autodetect', 'ISO', text)
>>> transliterated_text
"cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
```
Then you can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> from aksharamukha import transliterate
>>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-uniscript')
>>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।"
>>> transliterated_text = transliterate.process('Bengali', 'ISO', text)
>>> transliterated_text
'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli [MASK], aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama [MASK] puraskāra lābha karēna.'
>>> unmasker(transliterated_text)
[{'score': 0.39705055952072144,
'token': 1500,
'token_str': 'abhinētā',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli abhinētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.20499080419540405,
'token': 3585,
'token_str': 'kabi',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.1314290314912796,
'token': 15402,
'token_str': 'rājanētā',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli rājanētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.060830358415842056,
'token': 3212,
'token_str': 'kalākāra',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kalākāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.035522934049367905,
'token': 11586,
'token_str': 'sāhityakāra',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli sāhityakāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}]
```
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus, the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important, you should take special care when relying on the model to make decisions.
## Contact
Feel free to contact us if you have any ideas or if you want to know more about our models.
- Ibraheem Muhammad Moosa (ibraheemmoosa1347@gmail.com)
- Mahmud Elahi Akhter (mahmud.akhter01@northsouth.edu)
- Ashfia Binte Habib
## BibTeX entry and citation info
Coming soon!
|
1,085 | idjotherwise/autonlp-reading_prediction-172506 | [
"target"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- idjotherwise/autonlp-data-reading_prediction
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 172506
## Validation Metrics
- Loss: 0.03257797285914421
- MSE: 0.03257797285914421
- MAE: 0.14246532320976257
- R2: 0.9693824457290849
- RMSE: 0.18049369752407074
- Explained Variance: 0.9699198007583618
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/idjotherwise/autonlp-reading_prediction-172506
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("idjotherwise/autonlp-reading_prediction-172506")
tokenizer = AutoTokenizer.from_pretrained("idjotherwise/autonlp-reading_prediction-172506")
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,086 | idrimadrid/autonlp-creator_classifications-4021083 | [
"ABC Studios",
"Blizzard Entertainment",
"Capcom",
"Cartoon Network",
"Clive Barker",
"DC Comics",
"Dark Horse Comics",
"Disney",
"Dreamworks",
"George Lucas",
"George R. R. Martin",
"Hanna-Barbera",
"HarperCollins",
"Hasbro",
"IDW Publishing",
"Ian Fleming",
"Icon Comics",
"Image ... | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- idrimadrid/autonlp-data-creator_classifications
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 4021083
## Validation Metrics
- Loss: 0.6848716735839844
- Accuracy: 0.8825910931174089
- Macro F1: 0.41301646762109634
- Micro F1: 0.8825910931174088
- Weighted F1: 0.863740586166105
- Macro Precision: 0.4129337301330573
- Micro Precision: 0.8825910931174089
- Weighted Precision: 0.8531335941587811
- Macro Recall: 0.44466614072309585
- Micro Recall: 0.8825910931174089
- Weighted Recall: 0.8825910931174089
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/idrimadrid/autonlp-creator_classifications-4021083
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("idrimadrid/autonlp-creator_classifications-4021083", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("idrimadrid/autonlp-creator_classifications-4021083", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,088 | doyoungkim/bert-base-uncased-finetuned-sst2 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: bert-base-uncased-finetuned-sst2
results:
- dataset:
name: glue
type: glue
args: sst2
metric:
name: Accuracy
type: accuracy
value: 0.926605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
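A minimal usage sketch (assuming the standard Hugging Face text-classification pipeline; the negative/positive label names are assumed from this card's label list and may differ in the model config):
```python
from transformers import pipeline

# Hypothetical example: use the fine-tuned checkpoint for SST-2-style sentiment classification.
classifier = pipeline(
    "text-classification",
    model="doyoungkim/bert-base-uncased-finetuned-sst2",
)
print(classifier("This movie was absolutely wonderful."))
# e.g. [{'label': 'positive', 'score': 0.99}]  (exact label names depend on the model config)
```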
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1666 | 1.0 | 2105 | 0.2403 | 0.9232 |
| 0.1122 | 2.0 | 4210 | 0.2716 | 0.9266 |
| 0.0852 | 3.0 | 6315 | 0.3150 | 0.9232 |
| 0.056 | 4.0 | 8420 | 0.3209 | 0.9163 |
| 0.0344 | 5.0 | 10525 | 0.3740 | 0.9243 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
|
1,091 | imzachjohnson/autonlp-spinner-check-16492731 | [
"0",
"1"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- imzachjohnson/autonlp-data-spinner-check
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 16492731
## Validation Metrics
- Loss: 0.21610039472579956
- Accuracy: 0.9155366722657816
- Precision: 0.9530714194995978
- Recall: 0.944871149164778
- AUC: 0.9553238723676906
- F1: 0.9489535692456846
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/imzachjohnson/autonlp-spinner-check-16492731
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
1,092 | inovex/multi2convai-corona-de-bert | [
"corona.traffic",
"corona.supplies",
"corona.quarantine",
"corona.masks",
"corona.illness",
"corona.package",
"corona.vaccine",
"corona.rumors",
"corona.risk",
"corona.course",
"corona.symptoms",
"corona.patients",
"corona.deathRate",
"corona.infect",
"corona.protect",
"corona.definiti... | ---
tags:
- text-classification
- pytorch
- transformers
widget:
- text: "Muss ich eine Maske tragen?"
license: mit
language: de
---
# Multi2ConvAI-Corona: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-de-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,093 | inovex/multi2convai-corona-en-bert | [
"corona.traffic",
"corona.supplies",
"corona.quarantine",
"corona.masks",
"corona.illness",
"corona.package",
"corona.vaccine",
"corona.rumors",
"corona.risk",
"corona.course",
"corona.symptoms",
"corona.patients",
"corona.deathRate",
"corona.infect",
"corona.protect",
"corona.definiti... | ---
tags:
- text-classification
- pytorch
- transformers
widget:
- text: "Do I need to wear a mask?"
license: mit
language: en
---
# Multi2ConvAI-Corona: finetuned Bert for English
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: English (en)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-en-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-en-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,094 | inovex/multi2convai-corona-fr-bert | [
"corona.traffic",
"corona.supplies",
"corona.quarantine",
"corona.masks",
"corona.illness",
"corona.package",
"corona.vaccine",
"corona.rumors",
"corona.risk",
"corona.course",
"corona.symptoms",
"corona.patients",
"corona.deathRate",
"corona.infect",
"corona.protect",
"corona.definiti... | ---
tags:
- text-classification
widget:
- text: "Dois-je porter un masque?"
license: mit
language: fr
---
# Multi2ConvAI-Corona: finetuned Bert for French
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: French (fr)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-fr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-fr-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,095 | inovex/multi2convai-corona-it-bert | [
"corona.traffic",
"corona.supplies",
"corona.quarantine",
"corona.masks",
"corona.illness",
"corona.package",
"corona.vaccine",
"corona.rumors",
"corona.risk",
"corona.course",
"corona.symptoms",
"corona.patients",
"corona.deathRate",
"corona.infect",
"corona.protect",
"corona.definiti... | ---
tags:
- text-classification
widget:
- text: "Devo indossare una maschera?"
license: mit
language: it
---
# Multi2ConvAI-Corona: finetuned Bert for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: Italian (it)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-it-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-it-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,096 | inovex/multi2convai-logistics-de-bert | [
"details.address",
"tour.postcode.select",
"tour.finish",
"details.safeplace",
"details.preferedNeighbour",
"details.avoidNeighbour",
"tour.job.collected",
"no",
"yes",
"tour.start",
"tour.details",
"tour.job.signature",
"tour.job.delivered",
"select",
"tour.job.safePlace",
"safeplace"... | ---
tags:
- text-classification
widget:
- text: "Wo kann ich das Paket ablegen?"
license: mit
language: de
---
# Multi2ConvAI-Logistics: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-de-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,097 | inovex/multi2convai-logistics-en-bert | [
"details.address",
"tour.postcode.select",
"tour.finish",
"details.safeplace",
"details.preferedNeighbour",
"details.avoidNeighbour",
"tour.job.collected",
"no",
"yes",
"tour.start",
"tour.details",
"tour.job.signature",
"tour.job.delivered",
"select",
"tour.job.safePlace",
"safeplace"... | ---
tags:
- text-classification
widget:
- text: "Where can I put the parcel?"
license: mit
language: en
---
# Multi2ConvAI-Logistics: finetuned Bert for English
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: English (en)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-en-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-en-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,098 | inovex/multi2convai-logistics-hr-bert | [
"details.address",
"tour.postcode.select",
"tour.finish",
"details.safeplace",
"details.preferedNeighbour",
"details.avoidNeighbour",
"tour.job.collected",
"no",
"yes",
"tour.start",
"tour.details",
"tour.job.signature",
"tour.job.delivered",
"select",
"tour.job.safePlace",
"safeplace"... | ---
tags:
- text-classification
widget:
- text: "gdje mogu staviti paket?"
license: mit
language: hr
---
# Multi2ConvAI-Logistics: finetuned Bert for Croatian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: Croatian (hr)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-hr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-hr-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,099 | inovex/multi2convai-logistics-pl-bert | [
"details.address",
"tour.postcode.select",
"tour.finish",
"details.safeplace",
"details.preferedNeighbour",
"details.avoidNeighbour",
"tour.job.collected",
"no",
"yes",
"tour.start",
"tour.details",
"tour.job.signature",
"tour.job.delivered",
"select",
"tour.job.safePlace",
"safeplace"... | ---
tags:
- text-classification
widget:
- text: "gdzie mogę umieścić paczkę?"
license: mit
language: pl
---
# Multi2ConvAI-Logistics: finetuned Bert for Polish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: Polish (pl)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-pl-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-pl-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,100 | inovex/multi2convai-logistics-tr-bert | [
"details.address",
"tour.postcode.select",
"tour.finish",
"details.safeplace",
"details.preferedNeighbour",
"details.avoidNeighbour",
"tour.job.collected",
"no",
"yes",
"tour.start",
"tour.details",
"tour.job.signature",
"tour.job.delivered",
"select",
"tour.job.safePlace",
"safeplace"... | ---
tags:
- text-classification
widget:
- text: "paketi nereye koyabilirim?"
license: mit
language: tr
---
# Multi2ConvAI-Logistics: finetuned Bert for Turkish
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: Turkish (tr)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-tr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-tr-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,101 | inovex/multi2convai-quality-de-bert | [
"neo.magnetklammern",
"neo.start",
"neo.back",
"neo.gearbox",
"neo.motor.brushcollar",
"neo.motor.worm",
"neo.magnet",
"neo.magnetisierung",
"neo.motor",
"neo.verschaubung",
"neo.zusammenfuehrung",
"neo.zahnradgross",
"neo.zahnradklein",
"neo.yes",
"neo.no",
"neo.einpressen",
"neo.mo... | ---
tags:
- text-classification
widget:
- text: "Starte das Programm"
license: mit
language: de
---
# Multi2ConvAI-Quality: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,102 | inovex/multi2convai-quality-de-mbert | [
"neo.magnetklammern",
"neo.start",
"neo.back",
"neo.gearbox",
"neo.motor.brushcollar",
"neo.motor.worm",
"neo.magnet",
"neo.magnetisierung",
"neo.motor",
"neo.verschaubung",
"neo.zusammenfuehrung",
"neo.zahnradgross",
"neo.zahnradklein",
"neo.yes",
"neo.no",
"neo.einpressen",
"neo.mo... | ---
tags:
- text-classification
widget:
- text: "Starte das Programm"
license: mit
language: de
---
# Multi2ConvAI-Quality: finetuned MBert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: German (de)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,103 | inovex/multi2convai-quality-en-bert | [
"neo.magnetklammern",
"neo.start",
"neo.back",
"neo.gearbox",
"neo.motor.brushcollar",
"neo.motor.worm",
"neo.magnet",
"neo.magnetisierung",
"neo.motor",
"neo.verschaubung",
"neo.zusammenfuehrung",
"neo.zahnradgross",
"neo.zahnradklein",
"neo.yes",
"neo.no",
"neo.einpressen",
"neo.mo... | ---
tags:
- text-classification
widget:
- text: "Start the program"
license: mit
language: en
---
# Multi2ConvAI-Quality: finetuned Bert for English
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: English (en)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-en-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-en-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,104 | inovex/multi2convai-quality-en-mbert | [
"neo.magnetklammern",
"neo.start",
"neo.back",
"neo.gearbox",
"neo.motor.brushcollar",
"neo.motor.worm",
"neo.magnet",
"neo.magnetisierung",
"neo.motor",
"neo.verschaubung",
"neo.zusammenfuehrung",
"neo.zahnradgross",
"neo.zahnradklein",
"neo.yes",
"neo.no",
"neo.einpressen",
"neo.mo... | ---
tags:
- text-classification
widget:
- text: "Start the program"
license: mit
language: en
---
# Multi2ConvAI-Quality: finetuned MBert for English
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: English (en)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-en-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-en-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,105 | inovex/multi2convai-quality-fr-bert | [
"neo.magnetklammern",
"neo.start",
"neo.back",
"neo.gearbox",
"neo.motor.brushcollar",
"neo.motor.worm",
"neo.magnet",
"neo.magnetisierung",
"neo.motor",
"neo.verschaubung",
"neo.zusammenfuehrung",
"neo.zahnradgross",
"neo.zahnradklein",
"neo.yes",
"neo.no",
"neo.einpressen",
"neo.mo... | ---
tags:
- text-classification
widget:
- text: "Lancer le programme"
license: mit
language: fr
---
# Multi2ConvAI-Quality: finetuned Bert for French
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: French (fr)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,106 | inovex/multi2convai-quality-fr-mbert | [
"neo.magnetklammern",
"neo.start",
"neo.back",
"neo.gearbox",
"neo.motor.brushcollar",
"neo.motor.worm",
"neo.magnet",
"neo.magnetisierung",
"neo.motor",
"neo.verschaubung",
"neo.zusammenfuehrung",
"neo.zahnradgross",
"neo.zahnradklein",
"neo.yes",
"neo.no",
"neo.einpressen",
"neo.mo... | ---
tags:
- text-classification
widget:
- text: "Lancer le programme"
license: mit
language: fr
---
# Multi2ConvAI-Quality: finetuned MBert for French
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: French (fr)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,107 | inovex/multi2convai-quality-it-bert | [
"neo.magnetklammern",
"neo.start",
"neo.back",
"neo.gearbox",
"neo.motor.brushcollar",
"neo.motor.worm",
"neo.magnet",
"neo.magnetisierung",
"neo.motor",
"neo.verschaubung",
"neo.zusammenfuehrung",
"neo.zahnradgross",
"neo.zahnradklein",
"neo.yes",
"neo.no",
"neo.einpressen",
"neo.mo... | ---
tags:
- text-classification
widget:
- text: "Avviare il programma"
license: mit
language: it
---
# Multi2ConvAI-Quality: finetuned Bert for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: Italian (it)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-bert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,108 | inovex/multi2convai-quality-it-mbert | [
"neo.magnetklammern",
"neo.start",
"neo.back",
"neo.gearbox",
"neo.motor.brushcollar",
"neo.motor.worm",
"neo.magnet",
"neo.magnetisierung",
"neo.motor",
"neo.verschaubung",
"neo.zusammenfuehrung",
"neo.zahnradgross",
"neo.zahnradklein",
"neo.yes",
"neo.no",
"neo.einpressen",
"neo.mo... | ---
tags:
- text-classification
widget:
- text: "Avviare il programma"
license: mit
language: it
---
# Multi2ConvAI-Quality: finetuned MBert for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: ([en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases)))
- language: Italian (it)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-mbert")
````
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: info@multi2conv.ai |
1,109 | ipuneetrathore/bert-base-cased-finetuned-finBERT | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ## FinBERT
Code for importing and using this model is available [here](https://github.com/ipuneetrathore/BERT_models)
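A minimal loading sketch (assuming the standard Hugging Face sequence-classification path; the meaning of LABEL_0/LABEL_1/LABEL_2 is not documented in this card, so consult the linked repository for the intended label mapping):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Assumed usage; see the GitHub repository above for the author's own code.
model_id = "ipuneetrathore/bert-base-cased-finetuned-finBERT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The company reported better-than-expected quarterly earnings.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over LABEL_0, LABEL_1, LABEL_2
```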
|
1,110 | ishan/bert-base-uncased-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---
# bert-base-uncased finetuned on MNLI
## Model Details and Training Data
We used the pretrained model from [bert-base-uncased](https://huggingface.co/bert-base-uncased) and finetuned it on the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.
The training parameters were kept the same as [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).
## Evaluation Results
The evaluation results are mentioned in the table below.
| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8456 |
| Mismatched | 0.8484 |
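## Usage
A minimal sentence-pair sketch (assuming the conventional premise/hypothesis input format for MNLI-style models; the mapping of LABEL_0/1/2 to entailment/neutral/contradiction is not stated in this card):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Assumed premise/hypothesis usage for an MNLI-style classifier.
tokenizer = AutoTokenizer.from_pretrained("ishan/bert-base-uncased-mnli")
model = AutoModelForSequenceClassification.from_pretrained("ishan/bert-base-uncased-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the three NLI classes
```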
|
1,111 | ishan/distilbert-base-uncased-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---
# distilbert-base-uncased finetuned on MNLI
## Model Details and Training Data
We used the pretrained model from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) and finetuned it on the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.
The training parameters were kept the same as [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).
## Evaluation Results
The evaluation results are mentioned in the table below.
| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8223 |
| Mismatched | 0.8216 |
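## Usage
A hedged usage sketch (assuming premise/hypothesis are passed as a sentence pair via the text-classification pipeline; label names depend on the model config, which this card does not document):
```python
from transformers import pipeline

# Assumed usage: pass premise/hypothesis as a text/text_pair dictionary.
nli = pipeline("text-classification", model="ishan/distilbert-base-uncased-mnli")
print(nli({"text": "A soccer game with multiple males playing.",
           "text_pair": "Some men are playing a sport."}))
```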
|
1,112 | ismaelardo/BETO_3d | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | Este es el primer modelo de prueba BETO_3D |
1,113 | ivanlau/language-detection-fine-tuned-on-xlm-roberta-base | [
"Arabic",
"Basque",
"Breton",
"Catalan",
"Chinese_China",
"Chinese_Hongkong",
"Chinese_Taiwan",
"Chuvash",
"Czech",
"Dhivehi",
"Dutch",
"English",
"Esperanto",
"Estonian",
"French",
"Frisian",
"Georgian",
"German",
"Greek",
"Hakha_Chin",
"Indonesian",
"Interlingua",
"Ital... | ---
license: mit
tags:
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: language-detection-fine-tuned-on-xlm-roberta-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: common_language
type: common_language
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.9738386718094919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-fine-tuned-on-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [common_language](https://huggingface.co/datasets/common_language) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1886
- Accuracy: 0.9738
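A quick inference sketch (assuming the standard text-classification pipeline and that the checkpoint's id2label mapping returns the language names listed above):
```python
from transformers import pipeline

# Assumed usage: the fine-tuned checkpoint as a language identifier.
detector = pipeline(
    "text-classification",
    model="ivanlau/language-detection-fine-tuned-on-xlm-roberta-base",
)
print(detector("Bonjour tout le monde, comment allez-vous ?"))  # e.g. [{'label': 'French', 'score': ...}]
```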
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1 | 1.0 | 22194 | 0.1886 | 0.9738 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
### Notebook
[notebook](https://github.com/IvanLauLinTiong/language-detector/blob/main/xlm_roberta_base_commonlanguage_language_detector.ipynb) |
1,114 | j-hartmann/emotion-english-distilroberta-base | [
"anger",
"disgust",
"fear",
"joy",
"neutral",
"sadness",
"surprise"
] | ---
language: "en"
tags:
- distilroberta
- sentiment
- emotion
- twitter
- reddit
widget:
- text: "Oh wow. I didn't know that."
- text: "This movie always makes me cry.."
- text: "Oh Happy Day"
---
# Emotion English DistilRoBERTa-base
# Description ℹ
With this model, you can classify emotions in English text data. The model was trained on 6 diverse datasets (see Appendix below) and predicts Ekman's 6 basic emotions, plus a neutral class:
1) anger 🤬
2) disgust 🤢
3) fear 😨
4) joy 😀
5) neutral 😐
6) sadness 😭
7) surprise 😲
The model is a fine-tuned checkpoint of [DistilRoBERTa-base](https://huggingface.co/distilroberta-base). For a 'non-distilled' emotion model, please refer to the model card of the [RoBERTa-large](https://huggingface.co/j-hartmann/emotion-english-roberta-large) version.
# Application 🚀
a) Run the emotion model with 3 lines of code on a single text example using Hugging Face's pipeline command on Google Colab:
[](https://colab.research.google.com/github/j-hartmann/emotion-english-distilroberta-base/blob/main/simple_emotion_pipeline.ipynb)
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="j-hartmann/emotion-english-distilroberta-base", return_all_scores=True)
classifier("I love this!")
```
```python
Output:
[[{'label': 'anger', 'score': 0.004419783595949411},
{'label': 'disgust', 'score': 0.0016119900392368436},
{'label': 'fear', 'score': 0.0004138521908316761},
{'label': 'joy', 'score': 0.9771687984466553},
{'label': 'neutral', 'score': 0.005764586851000786},
{'label': 'sadness', 'score': 0.002092392183840275},
{'label': 'surprise', 'score': 0.008528684265911579}]]
```
b) Run the emotion model on multiple examples and full datasets (e.g., .csv files) on Google Colab:
[](https://colab.research.google.com/github/j-hartmann/emotion-english-distilroberta-base/blob/main/emotion_prediction_example.ipynb)
# Contact 💻
Please reach out to [jochen.hartmann@tum.de](mailto:jochen.hartmann@tum.de) if you have any questions or feedback.
Thanks to Samuel Domdey and [chrsiebert](https://huggingface.co/siebert) for their support in making this model available.
# Reference ✅
For attribution, please cite the following reference if you use this model. A working paper will be available soon.
```
Jochen Hartmann, "Emotion English DistilRoBERTa-base". https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/, 2022.
```
BibTex citation:
```
@misc{hartmann2022emotionenglish,
author={Hartmann, Jochen},
title={Emotion English DistilRoBERTa-base},
year={2022},
howpublished = {\url{https://huggingface.co/j-hartmann/emotion-english-distilroberta-base/}},
}
```
# Appendix 📚
Please find an overview of the datasets used for training below. All datasets contain English text. The table summarizes which emotions are available in each of the datasets. The datasets represent a diverse collection of text types. Specifically, they contain emotion labels for texts from Twitter, Reddit, student self-reports, and utterances from TV dialogues. As MELD (Multimodal EmotionLines Dataset) extends the popular EmotionLines dataset, EmotionLines itself is not included here.
|Name|anger|disgust|fear|joy|neutral|sadness|surprise|
|---|---|---|---|---|---|---|---|
|Crowdflower (2016)|Yes|-|-|Yes|Yes|Yes|Yes|
|Emotion Dataset, Elvis et al. (2018)|Yes|-|Yes|Yes|-|Yes|Yes|
|GoEmotions, Demszky et al. (2020)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|ISEAR, Vikash (2018)|Yes|Yes|Yes|Yes|-|Yes|-|
|MELD, Poria et al. (2019)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|SemEval-2018, EI-reg, Mohammad et al. (2018) |Yes|-|Yes|Yes|-|Yes|-|
The model is trained on a balanced subset from the datasets listed above (2,811 observations per emotion, i.e., nearly 20k observations in total). 80% of this balanced subset is used for training and 20% for evaluation. The evaluation accuracy is 66% (vs. the random-chance baseline of 1/7 = 14%).
# Scientific Applications 📖
Below you can find a list of papers using "Emotion English DistilRoBERTa-base". If you would like your paper to be added to the list, please send me an email.
- Butt, S., Sharma, S., Sharma, R., Sidorov, G., & Gelbukh, A. (2022). What goes on inside rumour and non-rumour tweets and their reactions: A Psycholinguistic Analyses. Computers in Human Behavior, 107345.
- Kuang, Z., Zong, S., Zhang, J., Chen, J., & Liu, H. (2022). Music-to-Text Synaesthesia: Generating Descriptive Text from Music Recordings. arXiv preprint arXiv:2210.00434.
- Rozado, D., Hughes, R., & Halberstadt, J. (2022). Longitudinal analysis of sentiment and emotion in news media headlines using automated labelling with Transformer language models. Plos one, 17(10), e0276367. |
1,115 | j-hartmann/emotion-english-roberta-large | [
"anger",
"disgust",
"fear",
"joy",
"neutral",
"sadness",
"surprise"
] | ---
language: "en"
tags:
- roberta
- sentiment
- emotion
- twitter
- reddit
widget:
- text: "Oh wow. I didn't know that."
- text: "This movie always makes me cry.."
- text: "Oh Happy Day"
---
## Description ℹ
With this model, you can classify emotions in English text data. The model was trained on 6 diverse datasets and predicts Ekman's 6 basic emotions, plus a neutral class:
1) anger 🤬
2) disgust 🤢
3) fear 😨
4) joy 😀
5) neutral 😐
6) sadness 😭
7) surprise 😲
The model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large).
For further details on this emotion model, please refer to the model card of its [DistilRoBERTa](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base) version. |
1,116 | j-hartmann/mind-perception-roberta-base | [
"low",
"high"
] | ---
language: "en"
tags:
- roberta
widget:
- text: "Alexa is part of our family. She is simply amazing!"
- text: "I use my smart assistant for may things. It's incredibly useful."
---
This RoBERTa-based model ("MindMiner") can classify the degree of mind perception in English-language text into 2 classes:
- high mind perception 👩
- low mind perception 🤖
The model was fine-tuned on 997 manually annotated open-ended survey responses.
The hold-out accuracy is 75.5% (vs. a balanced 50% random-chance baseline).
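A minimal application sketch (assuming the standard text-classification pipeline; the low/high class names above are taken from the card metadata):
```python
from transformers import pipeline

# Assumed usage sketch for scoring mind perception in short texts.
mind_miner = pipeline(
    "text-classification",
    model="j-hartmann/mind-perception-roberta-base",
    return_all_scores=True,
)
print(mind_miner("Alexa is part of our family. She is simply amazing!"))
```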
|