| modelId | label list | readme | readme_len |
|---|---|---|---|
SetFit/distilbert-base-uncased__sst2__train-8-2 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
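The card does not include the training script itself, but the settings above map onto the `transformers` Trainer API roughly as follows. This is a minimal sketch, not the actual script: the 8-example SST-2 training subset is assumed and its loading is not shown.
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Mirrors the hyperparameters listed above; the Adam betas/epsilon and the
# linear schedule are already the Trainer defaults.
args = TrainingArguments(
    output_dir="distilbert-base-uncased__sst2__train-8-2",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed precision; requires a CUDA device
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```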
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7081 | 1.0 | 3 | 0.7031 | 0.25 |
| 0.6853 | 2.0 | 6 | 0.7109 | 0.25 |
| 0.6696 | 3.0 | 9 | 0.7211 | 0.25 |
| 0.6174 | 4.0 | 12 | 0.7407 | 0.25 |
| 0.5717 | 5.0 | 15 | 0.7625 | 0.25 |
| 0.5096 | 6.0 | 18 | 0.7732 | 0.25 |
| 0.488 | 7.0 | 21 | 0.7798 | 0.25 |
| 0.4023 | 8.0 | 24 | 0.7981 | 0.25 |
| 0.3556 | 9.0 | 27 | 0.8110 | 0.25 |
| 0.2714 | 10.0 | 30 | 0.8269 | 0.25 |
| 0.2295 | 11.0 | 33 | 0.8276 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
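Assuming the checkpoint is public on the Hub under this ID, inference is straightforward with the `transformers` pipeline. A sketch; the example text and the printed score are illustrative only:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__sst2__train-8-2",
)
# The model maps outputs to the "negative"/"positive" labels listed above.
print(classifier("a gripping, beautifully shot film"))
# e.g. [{'label': 'positive', 'score': 0.87}]  (illustrative output)
```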
| 2,043 |
SetFit/distilbert-base-uncased__sst2__train-8-6 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Accuracy: 0.7523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7161 | 1.0 | 3 | 0.6941 | 0.5 |
| 0.6786 | 2.0 | 6 | 0.7039 | 0.25 |
| 0.6586 | 3.0 | 9 | 0.7090 | 0.25 |
| 0.6121 | 4.0 | 12 | 0.7183 | 0.25 |
| 0.5696 | 5.0 | 15 | 0.7266 | 0.25 |
| 0.522 | 6.0 | 18 | 0.7305 | 0.25 |
| 0.4899 | 7.0 | 21 | 0.7339 | 0.25 |
| 0.3985 | 8.0 | 24 | 0.7429 | 0.25 |
| 0.3758 | 9.0 | 27 | 0.7224 | 0.25 |
| 0.2876 | 10.0 | 30 | 0.7068 | 0.5 |
| 0.2498 | 11.0 | 33 | 0.6751 | 0.75 |
| 0.1921 | 12.0 | 36 | 0.6487 | 0.75 |
| 0.1491 | 13.0 | 39 | 0.6261 | 0.75 |
| 0.1276 | 14.0 | 42 | 0.6102 | 0.75 |
| 0.0996 | 15.0 | 45 | 0.5964 | 0.75 |
| 0.073 | 16.0 | 48 | 0.6019 | 0.75 |
| 0.0627 | 17.0 | 51 | 0.5933 | 0.75 |
| 0.053 | 18.0 | 54 | 0.5768 | 0.75 |
| 0.0403 | 19.0 | 57 | 0.5698 | 0.75 |
| 0.0328 | 20.0 | 60 | 0.5656 | 0.75 |
| 0.03 | 21.0 | 63 | 0.5634 | 0.75 |
| 0.025 | 22.0 | 66 | 0.5620 | 0.75 |
| 0.0209 | 23.0 | 69 | 0.5623 | 0.75 |
| 0.0214 | 24.0 | 72 | 0.5606 | 0.75 |
| 0.0191 | 25.0 | 75 | 0.5565 | 0.75 |
| 0.0173 | 26.0 | 78 | 0.5485 | 0.75 |
| 0.0175 | 27.0 | 81 | 0.5397 | 0.75 |
| 0.0132 | 28.0 | 84 | 0.5322 | 0.75 |
| 0.0138 | 29.0 | 87 | 0.5241 | 0.75 |
| 0.0128 | 30.0 | 90 | 0.5235 | 0.75 |
| 0.0126 | 31.0 | 93 | 0.5253 | 0.75 |
| 0.012 | 32.0 | 96 | 0.5317 | 0.75 |
| 0.0118 | 33.0 | 99 | 0.5342 | 0.75 |
| 0.0092 | 34.0 | 102 | 0.5388 | 0.75 |
| 0.0117 | 35.0 | 105 | 0.5414 | 0.75 |
| 0.0124 | 36.0 | 108 | 0.5453 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.5506 | 0.75 |
| 0.0112 | 38.0 | 114 | 0.5555 | 0.75 |
| 0.0087 | 39.0 | 117 | 0.5597 | 0.75 |
| 0.01 | 40.0 | 120 | 0.5640 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 3,841 |
SetFit/distilbert-base-uncased__sst2__train-8-7 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6950
- Accuracy: 0.4618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7156 | 1.0 | 3 | 0.6965 | 0.25 |
| 0.6645 | 2.0 | 6 | 0.7059 | 0.25 |
| 0.6368 | 3.0 | 9 | 0.7179 | 0.25 |
| 0.5944 | 4.0 | 12 | 0.7408 | 0.25 |
| 0.5369 | 5.0 | 15 | 0.7758 | 0.25 |
| 0.449 | 6.0 | 18 | 0.8009 | 0.25 |
| 0.4352 | 7.0 | 21 | 0.8209 | 0.5 |
| 0.3462 | 8.0 | 24 | 0.8470 | 0.5 |
| 0.3028 | 9.0 | 27 | 0.8579 | 0.5 |
| 0.2365 | 10.0 | 30 | 0.8704 | 0.5 |
| 0.2023 | 11.0 | 33 | 0.8770 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,043 |
SetFit/distilbert-base-uncased__sst2__train-8-8 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7061 | 1.0 | 3 | 0.6899 | 0.75 |
| 0.6627 | 2.0 | 6 | 0.7026 | 0.25 |
| 0.644 | 3.0 | 9 | 0.7158 | 0.25 |
| 0.6087 | 4.0 | 12 | 0.7325 | 0.25 |
| 0.5602 | 5.0 | 15 | 0.7555 | 0.25 |
| 0.5034 | 6.0 | 18 | 0.7725 | 0.25 |
| 0.4672 | 7.0 | 21 | 0.7983 | 0.25 |
| 0.403 | 8.0 | 24 | 0.8314 | 0.25 |
| 0.3571 | 9.0 | 27 | 0.8555 | 0.25 |
| 0.2792 | 10.0 | 30 | 0.9065 | 0.25 |
| 0.2373 | 11.0 | 33 | 0.9286 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,043 |
SetFit/distilbert-base-uncased__sst2__train-8-9 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7204 | 1.0 | 3 | 0.7025 | 0.5 |
| 0.6885 | 2.0 | 6 | 0.7145 | 0.5 |
| 0.6662 | 3.0 | 9 | 0.7222 | 0.5 |
| 0.6182 | 4.0 | 12 | 0.7427 | 0.25 |
| 0.5707 | 5.0 | 15 | 0.7773 | 0.25 |
| 0.5247 | 6.0 | 18 | 0.8137 | 0.25 |
| 0.5003 | 7.0 | 21 | 0.8556 | 0.25 |
| 0.4195 | 8.0 | 24 | 0.9089 | 0.5 |
| 0.387 | 9.0 | 27 | 0.9316 | 0.25 |
| 0.2971 | 10.0 | 30 | 0.9558 | 0.25 |
| 0.2581 | 11.0 | 33 | 0.9420 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,043 |
SetFit/distilbert-base-uncased__subj__train-8-1 | [
"objective",
"subjective"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5488
- Accuracy: 0.791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.703 | 1.0 | 3 | 0.6906 | 0.5 |
| 0.666 | 2.0 | 6 | 0.6945 | 0.25 |
| 0.63 | 3.0 | 9 | 0.6885 | 0.5 |
| 0.588 | 4.0 | 12 | 0.6888 | 0.25 |
| 0.5181 | 5.0 | 15 | 0.6899 | 0.25 |
| 0.4508 | 6.0 | 18 | 0.6770 | 0.5 |
| 0.4025 | 7.0 | 21 | 0.6579 | 0.5 |
| 0.3361 | 8.0 | 24 | 0.6392 | 0.5 |
| 0.2919 | 9.0 | 27 | 0.6113 | 0.5 |
| 0.2151 | 10.0 | 30 | 0.5774 | 0.75 |
| 0.1728 | 11.0 | 33 | 0.5248 | 0.75 |
| 0.1313 | 12.0 | 36 | 0.4824 | 0.75 |
| 0.1046 | 13.0 | 39 | 0.4456 | 0.75 |
| 0.0858 | 14.0 | 42 | 0.4076 | 0.75 |
| 0.0679 | 15.0 | 45 | 0.3755 | 0.75 |
| 0.0485 | 16.0 | 48 | 0.3422 | 0.75 |
| 0.0416 | 17.0 | 51 | 0.3055 | 0.75 |
| 0.0358 | 18.0 | 54 | 0.2731 | 1.0 |
| 0.0277 | 19.0 | 57 | 0.2443 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.2187 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.1960 | 1.0 |
| 0.0187 | 22.0 | 66 | 0.1762 | 1.0 |
| 0.017 | 23.0 | 69 | 0.1629 | 1.0 |
| 0.0154 | 24.0 | 72 | 0.1543 | 1.0 |
| 0.0164 | 25.0 | 75 | 0.1476 | 1.0 |
| 0.0131 | 26.0 | 78 | 0.1423 | 1.0 |
| 0.0139 | 27.0 | 81 | 0.1387 | 1.0 |
| 0.0107 | 28.0 | 84 | 0.1360 | 1.0 |
| 0.0108 | 29.0 | 87 | 0.1331 | 1.0 |
| 0.0105 | 30.0 | 90 | 0.1308 | 1.0 |
| 0.0106 | 31.0 | 93 | 0.1276 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1267 | 1.0 |
| 0.0095 | 33.0 | 99 | 0.1255 | 1.0 |
| 0.0076 | 34.0 | 102 | 0.1243 | 1.0 |
| 0.0094 | 35.0 | 105 | 0.1235 | 1.0 |
| 0.0103 | 36.0 | 108 | 0.1228 | 1.0 |
| 0.0086 | 37.0 | 111 | 0.1231 | 1.0 |
| 0.0094 | 38.0 | 114 | 0.1236 | 1.0 |
| 0.0074 | 39.0 | 117 | 0.1240 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1246 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1253 | 1.0 |
| 0.0088 | 42.0 | 126 | 0.1248 | 1.0 |
| 0.0082 | 43.0 | 129 | 0.1244 | 1.0 |
| 0.0082 | 44.0 | 132 | 0.1234 | 1.0 |
| 0.0082 | 45.0 | 135 | 0.1223 | 1.0 |
| 0.0071 | 46.0 | 138 | 0.1212 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1208 | 1.0 |
| 0.0081 | 48.0 | 144 | 0.1205 | 1.0 |
| 0.0067 | 49.0 | 147 | 0.1202 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1202 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,460 |
SetFit/distilbert-base-uncased__subj__train-8-2 | [
"objective",
"subjective"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3081
- Accuracy: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7146 | 1.0 | 3 | 0.6798 | 0.75 |
| 0.6737 | 2.0 | 6 | 0.6847 | 0.75 |
| 0.6519 | 3.0 | 9 | 0.6783 | 0.75 |
| 0.6105 | 4.0 | 12 | 0.6812 | 0.25 |
| 0.5463 | 5.0 | 15 | 0.6869 | 0.25 |
| 0.4922 | 6.0 | 18 | 0.6837 | 0.5 |
| 0.4543 | 7.0 | 21 | 0.6716 | 0.5 |
| 0.3856 | 8.0 | 24 | 0.6613 | 0.75 |
| 0.3475 | 9.0 | 27 | 0.6282 | 0.75 |
| 0.2717 | 10.0 | 30 | 0.6045 | 0.75 |
| 0.2347 | 11.0 | 33 | 0.5620 | 0.75 |
| 0.1979 | 12.0 | 36 | 0.5234 | 1.0 |
| 0.1535 | 13.0 | 39 | 0.4771 | 1.0 |
| 0.1332 | 14.0 | 42 | 0.4277 | 1.0 |
| 0.1041 | 15.0 | 45 | 0.3785 | 1.0 |
| 0.082 | 16.0 | 48 | 0.3318 | 1.0 |
| 0.0672 | 17.0 | 51 | 0.2885 | 1.0 |
| 0.0538 | 18.0 | 54 | 0.2568 | 1.0 |
| 0.0412 | 19.0 | 57 | 0.2356 | 1.0 |
| 0.0361 | 20.0 | 60 | 0.2217 | 1.0 |
| 0.0303 | 21.0 | 63 | 0.2125 | 1.0 |
| 0.0268 | 22.0 | 66 | 0.2060 | 1.0 |
| 0.0229 | 23.0 | 69 | 0.2015 | 1.0 |
| 0.0215 | 24.0 | 72 | 0.1989 | 1.0 |
| 0.0211 | 25.0 | 75 | 0.1969 | 1.0 |
| 0.0172 | 26.0 | 78 | 0.1953 | 1.0 |
| 0.0165 | 27.0 | 81 | 0.1935 | 1.0 |
| 0.0132 | 28.0 | 84 | 0.1923 | 1.0 |
| 0.0146 | 29.0 | 87 | 0.1914 | 1.0 |
| 0.0125 | 30.0 | 90 | 0.1904 | 1.0 |
| 0.0119 | 31.0 | 93 | 0.1897 | 1.0 |
| 0.0122 | 32.0 | 96 | 0.1886 | 1.0 |
| 0.0118 | 33.0 | 99 | 0.1875 | 1.0 |
| 0.0097 | 34.0 | 102 | 0.1866 | 1.0 |
| 0.0111 | 35.0 | 105 | 0.1861 | 1.0 |
| 0.0111 | 36.0 | 108 | 0.1855 | 1.0 |
| 0.0102 | 37.0 | 111 | 0.1851 | 1.0 |
| 0.0109 | 38.0 | 114 | 0.1851 | 1.0 |
| 0.0085 | 39.0 | 117 | 0.1854 | 1.0 |
| 0.0089 | 40.0 | 120 | 0.1855 | 1.0 |
| 0.0092 | 41.0 | 123 | 0.1863 | 1.0 |
| 0.0105 | 42.0 | 126 | 0.1868 | 1.0 |
| 0.0089 | 43.0 | 129 | 0.1874 | 1.0 |
| 0.0091 | 44.0 | 132 | 0.1877 | 1.0 |
| 0.0096 | 45.0 | 135 | 0.1881 | 1.0 |
| 0.0081 | 46.0 | 138 | 0.1881 | 1.0 |
| 0.0086 | 47.0 | 141 | 0.1883 | 1.0 |
| 0.009 | 48.0 | 144 | 0.1884 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,337 |
SetFit/distilbert-base-uncased__subj__train-8-3 | [
"objective",
"subjective"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3496
- Accuracy: 0.859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7136 | 1.0 | 3 | 0.6875 | 0.75 |
| 0.6702 | 2.0 | 6 | 0.6824 | 0.75 |
| 0.6456 | 3.0 | 9 | 0.6687 | 0.75 |
| 0.5934 | 4.0 | 12 | 0.6564 | 0.75 |
| 0.537 | 5.0 | 15 | 0.6428 | 0.75 |
| 0.4812 | 6.0 | 18 | 0.6180 | 0.75 |
| 0.4279 | 7.0 | 21 | 0.5864 | 0.75 |
| 0.3608 | 8.0 | 24 | 0.5540 | 0.75 |
| 0.3076 | 9.0 | 27 | 0.5012 | 1.0 |
| 0.2292 | 10.0 | 30 | 0.4497 | 1.0 |
| 0.1991 | 11.0 | 33 | 0.3945 | 1.0 |
| 0.1495 | 12.0 | 36 | 0.3483 | 1.0 |
| 0.1176 | 13.0 | 39 | 0.3061 | 1.0 |
| 0.0947 | 14.0 | 42 | 0.2683 | 1.0 |
| 0.0761 | 15.0 | 45 | 0.2295 | 1.0 |
| 0.0584 | 16.0 | 48 | 0.1996 | 1.0 |
| 0.0451 | 17.0 | 51 | 0.1739 | 1.0 |
| 0.0387 | 18.0 | 54 | 0.1521 | 1.0 |
| 0.0272 | 19.0 | 57 | 0.1333 | 1.0 |
| 0.0247 | 20.0 | 60 | 0.1171 | 1.0 |
| 0.0243 | 21.0 | 63 | 0.1044 | 1.0 |
| 0.0206 | 22.0 | 66 | 0.0943 | 1.0 |
| 0.0175 | 23.0 | 69 | 0.0859 | 1.0 |
| 0.0169 | 24.0 | 72 | 0.0799 | 1.0 |
| 0.0162 | 25.0 | 75 | 0.0746 | 1.0 |
| 0.0137 | 26.0 | 78 | 0.0705 | 1.0 |
| 0.0141 | 27.0 | 81 | 0.0674 | 1.0 |
| 0.0107 | 28.0 | 84 | 0.0654 | 1.0 |
| 0.0117 | 29.0 | 87 | 0.0634 | 1.0 |
| 0.0113 | 30.0 | 90 | 0.0617 | 1.0 |
| 0.0107 | 31.0 | 93 | 0.0599 | 1.0 |
| 0.0106 | 32.0 | 96 | 0.0585 | 1.0 |
| 0.0101 | 33.0 | 99 | 0.0568 | 1.0 |
| 0.0084 | 34.0 | 102 | 0.0553 | 1.0 |
| 0.0101 | 35.0 | 105 | 0.0539 | 1.0 |
| 0.0102 | 36.0 | 108 | 0.0529 | 1.0 |
| 0.009 | 37.0 | 111 | 0.0520 | 1.0 |
| 0.0092 | 38.0 | 114 | 0.0511 | 1.0 |
| 0.0073 | 39.0 | 117 | 0.0504 | 1.0 |
| 0.0081 | 40.0 | 120 | 0.0497 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.0492 | 1.0 |
| 0.0092 | 42.0 | 126 | 0.0488 | 1.0 |
| 0.008 | 43.0 | 129 | 0.0483 | 1.0 |
| 0.0087 | 44.0 | 132 | 0.0479 | 1.0 |
| 0.009 | 45.0 | 135 | 0.0474 | 1.0 |
| 0.0076 | 46.0 | 138 | 0.0470 | 1.0 |
| 0.0075 | 47.0 | 141 | 0.0467 | 1.0 |
| 0.008 | 48.0 | 144 | 0.0465 | 1.0 |
| 0.0069 | 49.0 | 147 | 0.0464 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.0464 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,460 |
SetFit/distilbert-base-uncased__subj__train-8-6 | [
"objective",
"subjective"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6075
- Accuracy: 0.7485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.6923 | 0.5 |
| 0.6648 | 2.0 | 6 | 0.6838 | 0.5 |
| 0.6329 | 3.0 | 9 | 0.6747 | 0.75 |
| 0.5836 | 4.0 | 12 | 0.6693 | 0.5 |
| 0.5287 | 5.0 | 15 | 0.6670 | 0.25 |
| 0.4585 | 6.0 | 18 | 0.6517 | 0.5 |
| 0.415 | 7.0 | 21 | 0.6290 | 0.5 |
| 0.3353 | 8.0 | 24 | 0.6019 | 0.5 |
| 0.2841 | 9.0 | 27 | 0.5613 | 0.75 |
| 0.2203 | 10.0 | 30 | 0.5222 | 1.0 |
| 0.1743 | 11.0 | 33 | 0.4769 | 1.0 |
| 0.1444 | 12.0 | 36 | 0.4597 | 1.0 |
| 0.1079 | 13.0 | 39 | 0.4462 | 1.0 |
| 0.0891 | 14.0 | 42 | 0.4216 | 1.0 |
| 0.0704 | 15.0 | 45 | 0.3880 | 1.0 |
| 0.0505 | 16.0 | 48 | 0.3663 | 1.0 |
| 0.0428 | 17.0 | 51 | 0.3536 | 1.0 |
| 0.0356 | 18.0 | 54 | 0.3490 | 1.0 |
| 0.0283 | 19.0 | 57 | 0.3531 | 1.0 |
| 0.025 | 20.0 | 60 | 0.3595 | 1.0 |
| 0.0239 | 21.0 | 63 | 0.3594 | 1.0 |
| 0.0202 | 22.0 | 66 | 0.3521 | 1.0 |
| 0.0168 | 23.0 | 69 | 0.3475 | 1.0 |
| 0.0159 | 24.0 | 72 | 0.3458 | 1.0 |
| 0.0164 | 25.0 | 75 | 0.3409 | 1.0 |
| 0.0132 | 26.0 | 78 | 0.3360 | 1.0 |
| 0.0137 | 27.0 | 81 | 0.3302 | 1.0 |
| 0.0112 | 28.0 | 84 | 0.3235 | 1.0 |
| 0.0113 | 29.0 | 87 | 0.3178 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.3159 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.3108 | 1.0 |
| 0.0107 | 32.0 | 96 | 0.3101 | 1.0 |
| 0.0101 | 33.0 | 99 | 0.3100 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.3110 | 1.0 |
| 0.0092 | 35.0 | 105 | 0.3117 | 1.0 |
| 0.0102 | 36.0 | 108 | 0.3104 | 1.0 |
| 0.0086 | 37.0 | 111 | 0.3086 | 1.0 |
| 0.0092 | 38.0 | 114 | 0.3047 | 1.0 |
| 0.0072 | 39.0 | 117 | 0.3024 | 1.0 |
| 0.0079 | 40.0 | 120 | 0.3014 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.2983 | 1.0 |
| 0.0091 | 42.0 | 126 | 0.2948 | 1.0 |
| 0.0077 | 43.0 | 129 | 0.2915 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.2890 | 1.0 |
| 0.009 | 45.0 | 135 | 0.2870 | 1.0 |
| 0.0073 | 46.0 | 138 | 0.2856 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.2844 | 1.0 |
| 0.0076 | 48.0 | 144 | 0.2841 | 1.0 |
| 0.0065 | 49.0 | 147 | 0.2836 | 1.0 |
| 0.0081 | 50.0 | 150 | 0.2835 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,461 |
SetFit/distilbert-base-uncased__subj__train-8-8 | [
"objective",
"subjective"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3160
- Accuracy: 0.8735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7187 | 1.0 | 3 | 0.6776 | 1.0 |
| 0.684 | 2.0 | 6 | 0.6608 | 1.0 |
| 0.6532 | 3.0 | 9 | 0.6364 | 1.0 |
| 0.5996 | 4.0 | 12 | 0.6119 | 1.0 |
| 0.5242 | 5.0 | 15 | 0.5806 | 1.0 |
| 0.4612 | 6.0 | 18 | 0.5320 | 1.0 |
| 0.4192 | 7.0 | 21 | 0.4714 | 1.0 |
| 0.3274 | 8.0 | 24 | 0.4071 | 1.0 |
| 0.2871 | 9.0 | 27 | 0.3378 | 1.0 |
| 0.2082 | 10.0 | 30 | 0.2822 | 1.0 |
| 0.1692 | 11.0 | 33 | 0.2271 | 1.0 |
| 0.1242 | 12.0 | 36 | 0.1793 | 1.0 |
| 0.0977 | 13.0 | 39 | 0.1417 | 1.0 |
| 0.0776 | 14.0 | 42 | 0.1117 | 1.0 |
| 0.0631 | 15.0 | 45 | 0.0894 | 1.0 |
| 0.0453 | 16.0 | 48 | 0.0733 | 1.0 |
| 0.0399 | 17.0 | 51 | 0.0617 | 1.0 |
| 0.0333 | 18.0 | 54 | 0.0528 | 1.0 |
| 0.0266 | 19.0 | 57 | 0.0454 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.0393 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.0345 | 1.0 |
| 0.0195 | 22.0 | 66 | 0.0309 | 1.0 |
| 0.0161 | 23.0 | 69 | 0.0281 | 1.0 |
| 0.0167 | 24.0 | 72 | 0.0260 | 1.0 |
| 0.0163 | 25.0 | 75 | 0.0242 | 1.0 |
| 0.0134 | 26.0 | 78 | 0.0227 | 1.0 |
| 0.0128 | 27.0 | 81 | 0.0214 | 1.0 |
| 0.0101 | 28.0 | 84 | 0.0204 | 1.0 |
| 0.0109 | 29.0 | 87 | 0.0194 | 1.0 |
| 0.0112 | 30.0 | 90 | 0.0186 | 1.0 |
| 0.0108 | 31.0 | 93 | 0.0179 | 1.0 |
| 0.011 | 32.0 | 96 | 0.0174 | 1.0 |
| 0.0099 | 33.0 | 99 | 0.0169 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.0164 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0160 | 1.0 |
| 0.01 | 36.0 | 108 | 0.0156 | 1.0 |
| 0.0084 | 37.0 | 111 | 0.0152 | 1.0 |
| 0.0089 | 38.0 | 114 | 0.0149 | 1.0 |
| 0.0073 | 39.0 | 117 | 0.0146 | 1.0 |
| 0.0082 | 40.0 | 120 | 0.0143 | 1.0 |
| 0.008 | 41.0 | 123 | 0.0141 | 1.0 |
| 0.0093 | 42.0 | 126 | 0.0139 | 1.0 |
| 0.0078 | 43.0 | 129 | 0.0138 | 1.0 |
| 0.0086 | 44.0 | 132 | 0.0136 | 1.0 |
| 0.009 | 45.0 | 135 | 0.0135 | 1.0 |
| 0.0072 | 46.0 | 138 | 0.0134 | 1.0 |
| 0.0075 | 47.0 | 141 | 0.0133 | 1.0 |
| 0.0082 | 48.0 | 144 | 0.0133 | 1.0 |
| 0.0068 | 49.0 | 147 | 0.0132 | 1.0 |
| 0.0074 | 50.0 | 150 | 0.0132 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,461 |
SongRb/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model_index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.5332198659134496
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8549
- Matthews Correlation: 0.5332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5213 | 1.0 | 535 | 0.5163 | 0.4183 |
| 0.3479 | 2.0 | 1070 | 0.5351 | 0.5182 |
| 0.231 | 3.0 | 1605 | 0.6271 | 0.5291 |
| 0.166 | 4.0 | 2140 | 0.7531 | 0.5279 |
| 0.1313 | 5.0 | 2675 | 0.8549 | 0.5332 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
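For reference, the Matthews correlation metric reported above can be computed from predictions with scikit-learn. A minimal sketch; `y_true` and `y_pred` are hypothetical arrays, not the actual CoLA outputs:
```python
from sklearn.metrics import matthews_corrcoef

# Hypothetical gold labels and predictions, for illustration only.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# MCC ranges from -1 (total disagreement) through 0 (chance level)
# to +1 (perfect prediction).
print(matthews_corrcoef(y_true, y_pred))
```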
| 1,997 |
TehranNLP/albert-base-v2-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
TehranNLP/bert-base-uncased-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
TehranNLP/electra-base-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
TehranNLP-org/bert-base-uncased-avg-cola-2e-5-21 | null | Entry not found | 15 |
TehranNLP-org/bert-base-uncased-avg-cola-2e-5-42 | null | Entry not found | 15 |
TehranNLP-org/bert-base-uncased-avg-cola-2e-5-63 | null | Entry not found | 15 |
TehranNLP-org/bert-base-uncased-avg-sst2-2e-5-42 | null | Entry not found | 15 |
TehranNLP-org/bert-base-uncased-qqp-2e-5-42 | null | Entry not found | 15 |
TehranNLP-org/electra-base-ag-news-2e-5-42 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
TehranNLP-org/electra-base-avg-cola-2e-5-63 | null | Entry not found | 15 |
TehranNLP-org/electra-base-avg-mnli-2e-5-21 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
TehranNLP-org/electra-base-avg-qqp-2e-5-42 | null | Entry not found | 15 |
TehranNLP-org/electra-base-avg-sst2-2e-5-21 | null | Entry not found | 15 |
TehranNLP-org/electra-base-avg-sst2-2e-5-63 | null | Entry not found | 15 |
TehranNLP-org/electra-base-mrpc-2e-5-42 | null | Entry not found | 15 |
TehranNLP-org/electra-base-qqp-2e-5-42 | null | Entry not found | 15 |
TehranNLP-org/electra-base-qqp-cls-2e-5-42 | null | Entry not found | 15 |
TehranNLP-org/xlnet-base-cased-avg-cola-2e-5-21 | null | Entry not found | 15 |
TehranNLP-org/xlnet-base-cased-avg-sst2-2e-5-42 | null | Entry not found | 15 |
TehranNLP-org/xlnet-base-cased-avg-sst2-2e-5-63 | null | Entry not found | 15 |
Tejas3/distillbert_110_uncased_v1 | [
"action",
"drama",
"horror",
"sci_fi",
"superhero",
"thriller"
] | Entry not found | 15 |
The-Data-Hound/bacteria_lamp_network | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
TransQuest/monotransquest-hter-en_lv-it-smt | [
"LABEL_0"
] | ---
language: en-lv
tags:
- Quality Estimation
- monotransquest
- hter
license: apache-2.0
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on two aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages tested.
- Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest).
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

# Load the sentence-level QE model; num_labels=1 because it regresses a single
# quality score (an HTER-style post-editing effort estimate) per sentence pair.
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_lv-it-smt", num_labels=1, use_cuda=torch.cuda.is_available())

# predict() takes a list of [source, translation] pairs and returns the
# estimated quality scores alongside the raw model outputs.
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details, follow the documentation:
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
    1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation.
    2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper, which was accepted at [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) (co-located with EMNLP 2020).
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 5,407 |
V3RX2000/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5396261051709696
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8107
- Matthews Correlation: 0.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5509 | 0.3827 |
| 0.3498 | 2.0 | 1070 | 0.4936 | 0.5295 |
| 0.2369 | 3.0 | 1605 | 0.6505 | 0.5248 |
| 0.1637 | 4.0 | 2140 | 0.8107 | 0.5396 |
| 0.1299 | 5.0 | 2675 | 0.8738 | 0.5387 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| 1,999 |
VirenS13117/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5286324175580216
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7809
- Matthews Correlation: 0.5286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5299 | 1.0 | 535 | 0.5040 | 0.4383 |
| 0.3472 | 2.0 | 1070 | 0.5284 | 0.4911 |
| 0.2333 | 3.0 | 1605 | 0.6633 | 0.5091 |
| 0.1733 | 4.0 | 2140 | 0.7809 | 0.5286 |
| 0.1255 | 5.0 | 2675 | 0.8894 | 0.5282 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| 1,999 |
Wiirin/BERT-finetuned-PubMed-FoodCancer | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
XYHY/autonlp-123-478412765 | [
"0",
"1"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- XYHY/autonlp-data-123
co2_eq_emissions: 69.86520391863117
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 478412765
- CO2 Emissions (in grams): 69.86520391863117
## Validation Metrics
- Loss: 0.186362624168396
- Accuracy: 0.9539955699437723
- Precision: 0.9527454242928453
- Recall: 0.9572049481778669
- AUC: 0.9903929997079495
- F1: 0.9549699799866577
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/XYHY/autonlp-123-478412765
```
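The same hosted Inference API request can be made from Python with `requests`. A minimal sketch mirroring the cURL call above; `YOUR_API_KEY` remains a placeholder:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/XYHY/autonlp-123-478412765"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder token, as above

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())  # e.g. a list of label/score pairs
```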
Or load the model directly with the `transformers` Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub
# (use_auth_token is required while the repository is private).
model = AutoModelForSequenceClassification.from_pretrained("XYHY/autonlp-123-478412765", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("XYHY/autonlp-123-478412765", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)  # outputs.logits holds the raw class scores
``` | 1,117 |
Yanjie/message-preamble | [
"blank",
"great",
"welcome",
"no_worries",
"thanks",
"sorry",
"sure",
"got_it",
"alright",
"no_rush",
"confirmation",
"disagreement",
"will_do",
"understand",
"funny"
] | This is the concierge preamble model, fine-tuned from the DistilBERT uncased model. | 78 |
ZZDDBBCC/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5410897632107913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8631
- Matthews Correlation: 0.5411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5249 | 1.0 | 535 | 0.5300 | 0.4152 |
| 0.3489 | 2.0 | 1070 | 0.5238 | 0.4940 |
| 0.2329 | 3.0 | 1605 | 0.6447 | 0.5162 |
| 0.1692 | 4.0 | 2140 | 0.7805 | 0.5332 |
| 0.1256 | 5.0 | 2675 | 0.8631 | 0.5411 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| 1,999 |
aXhyra/demo_emotion_31415 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_emotion_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7348035780583043
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_emotion_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9818
- F1: 0.7348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.551070618629693e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.7431 | 0.6530 |
| No log | 2.0 | 408 | 0.6943 | 0.7333 |
| 0.5176 | 3.0 | 612 | 0.8456 | 0.7326 |
| 0.5176 | 4.0 | 816 | 0.9818 | 0.7348 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
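For evaluation, the tweet_eval emotion split can be loaded with the `datasets` library; below is a minimal sketch of scoring this checkpoint on the validation set. The averaging mode of the reported F1 is not stated in the card, and the default `LABEL_<idx>` output naming is an assumption, so treat this as illustrative only:
```python
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import pipeline

dataset = load_dataset("tweet_eval", "emotion", split="validation")
clf = pipeline("text-classification", model="aXhyra/demo_emotion_31415")

# Map predicted label strings (assumed to look like "LABEL_2") back to indices.
preds = [int(p["label"].split("_")[-1]) for p in clf(dataset["text"])]
print(f1_score(dataset["label"], preds, average="macro"))  # averaging is assumed
```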
| 1,765 |
aXhyra/demo_hate_42 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_hate_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7772939485986298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_hate_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8697
- F1: 0.7773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.320702985778492e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 282 | 0.4850 | 0.7645 |
| 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 |
| 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 |
| 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,750 |
aXhyra/emotion_trained_31415 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained_31415
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.719757533529152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9274
- F1: 0.7198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6177 | 0.7137 |
| No log | 2.0 | 408 | 0.7489 | 0.6761 |
| 0.5082 | 3.0 | 612 | 0.8233 | 0.7283 |
| 0.5082 | 4.0 | 816 | 0.9274 | 0.7198 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,774 |
aXhyra/emotion_trained_final | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: emotion_trained_final
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7469065445487402
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_final
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9349
- F1: 0.7469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.502523631581398e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9013 | 1.0 | 815 | 0.7822 | 0.6470 |
| 0.5008 | 2.0 | 1630 | 0.7142 | 0.7419 |
| 0.3684 | 3.0 | 2445 | 0.8621 | 0.7443 |
| 0.2182 | 4.0 | 3260 | 0.9349 | 0.7469 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,769 |
aXhyra/hate_trained_final | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: hate_trained_final
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7697890540753396
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_trained_final
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5543
- F1: 0.7698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.460503761236833e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.463 | 1.0 | 1125 | 0.5213 | 0.7384 |
| 0.3943 | 2.0 | 2250 | 0.5134 | 0.7534 |
| 0.3407 | 3.0 | 3375 | 0.5400 | 0.7666 |
| 0.3121 | 4.0 | 4500 | 0.5543 | 0.7698 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,760 |
aXhyra/test_emotion_trained_test | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: test_emotion_trained_test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: F1
type: f1
value: 0.7014611518188594
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_emotion_trained_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5866
- F1: 0.7015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.458132814624325e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 51 | 0.7877 | 0.5569 |
| No log | 2.0 | 102 | 0.6188 | 0.6937 |
| No log | 3.0 | 153 | 0.5969 | 0.7068 |
| No log | 4.0 | 204 | 0.5866 | 0.7015 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,779 |
aXhyra/test_hate_trained_test | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: test_hate_trained_test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7691585677255204
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_hate_trained_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1807
- F1: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.257754679724796e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4362 | 1.0 | 1125 | 0.5282 | 0.7369 |
| 0.3193 | 2.0 | 2250 | 0.6364 | 0.7571 |
| 0.1834 | 3.0 | 3375 | 1.0346 | 0.7625 |
| 0.0776 | 4.0 | 4500 | 1.1807 | 0.7692 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,768 |
abhishek/autonlp-ferd1-2652021 | [
"0",
"1"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-ferd1
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2652021
## Validation Metrics
- Loss: 0.3934604227542877
- Accuracy: 0.8411030860144452
- Precision: 0.8201550387596899
- Recall: 0.8076335877862595
- AUC: 0.8946767157983608
- F1: 0.8138461538461538
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-ferd1-2652021
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-ferd1-2652021", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-ferd1-2652021", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,051 |
abhishek/autonlp-imdb-roberta-base-3662644 | [
"neg",
"pos"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-imdb-roberta-base
co2_eq_emissions: 25.894117734124272
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 3662644
- CO2 Emissions (in grams): 25.894117734124272
## Validation Metrics
- Loss: 0.20277436077594757
- Accuracy: 0.92604
- Precision: 0.9560674830864092
- Recall: 0.89312
- AUC: 0.9814625504000001
- F1: 0.9235223559581421
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb-roberta-base-3662644
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,163 |
abhishek/autonlp-toxic-new-30516963 | [
"False",
"True"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-toxic-new
co2_eq_emissions: 30.684995819386277
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 30516963
- CO2 Emissions (in grams): 30.684995819386277
## Validation Metrics
- Loss: 0.08340361714363098
- Accuracy: 0.9688222161294113
- Precision: 0.9102096627164995
- Recall: 0.7692604006163328
- AUC: 0.9859340458715813
- F1: 0.8338204592901879
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-toxic-new-30516963
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-toxic-new-30516963", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-toxic-new-30516963", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,156 |
adam-chell/tweet-sentiment-analyzer | [
"NEG",
"NEU",
"POS"
] | This model was trained by fine-tuning the BERTweet sentiment classification model "finiteautomata/bertweet-base-sentiment-analysis" on a labeled positive/negative dataset of tweets.
Email: adam.chellaoui@epfl.ch | 225 |
adamlin/ml999_grinding_machine | [
"0",
"1"
] | Entry not found | 15 |
aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-indic-bert | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-roberta-base | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
aditeyabaral/finetuned-sail2017-additionalpretrained-distilbert-base-cased | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
akahana/indonesia-emotion-distilbert | [
"SEDIH",
"MARAH",
"CINTA",
"TAKUT",
"BAHAGIA"
] | Entry not found | 15 |
akahana/indonesia-emotion-roberta-small | [
"SEDIH",
"MARAH",
"CINTA",
"TAKUT",
"BAHAGIA"
] | Entry not found | 15 |
akshara23/Terra-Classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
akshara23/distilbert-base-uncased-finetuned-cola | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model_index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.6290322580645161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0475
- Matthews Correlation: 0.6290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 16 | 1.3863 | 0.0 |
| No log | 2.0 | 32 | 1.2695 | 0.4503 |
| No log | 3.0 | 48 | 1.1563 | 0.6110 |
| No log | 4.0 | 64 | 1.0757 | 0.6290 |
| No log | 5.0 | 80 | 1.0475 | 0.6290 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 1,915 |
alecmullen/autonlp-group-classification-441411446 | [
"Beauty",
"Business/Finance",
"Faith",
"Fitness",
"Food",
"Gaming",
"Local",
"Marketplace",
"Memes",
"Music",
"None",
"Social",
"Sports",
"TV/Movies",
"Travel"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alecmullen/autonlp-data-group-classification
co2_eq_emissions: 0.4362732160754736
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 441411446
- CO2 Emissions (in grams): 0.4362732160754736
## Validation Metrics
- Loss: 0.7598486542701721
- Accuracy: 0.8222222222222222
- Macro F1: 0.2912091747693842
- Micro F1: 0.8222222222222222
- Weighted F1: 0.7707160863181806
- Macro Precision: 0.29631463146314635
- Micro Precision: 0.8222222222222222
- Weighted Precision: 0.7341339689524508
- Macro Recall: 0.30174603174603176
- Micro Recall: 0.8222222222222222
- Weighted Recall: 0.8222222222222222
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alecmullen/autonlp-group-classification-441411446
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,428 |
anelnurkayeva/autonlp-covid-432211280 | [
"misleading",
"news"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- anelnurkayeva/autonlp-data-covid
co2_eq_emissions: 8.898145050355591
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 432211280
- CO2 Emissions (in grams): 8.898145050355591
## Validation Metrics
- Loss: 0.12489336729049683
- Accuracy: 0.9520089285714286
- Precision: 0.9436443331246086
- Recall: 0.9747736093143596
- AUC: 0.9910066767410616
- F1: 0.958956411072224
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/anelnurkayeva/autonlp-covid-432211280
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anelnurkayeva/autonlp-covid-432211280", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,161 |
anirudh21/albert-large-v2-finetuned-qnli | null | Entry not found | 15 |
anirudh21/albert-large-v2-finetuned-qqp | null | Entry not found | 15 |
anirudh21/albert-large-v2-finetuned-sst2 | null | Entry not found | 15 |
anirudh21/bert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5796941781913538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9664
- Matthews Correlation: 0.5797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5017 | 1.0 | 535 | 0.5252 | 0.4841 |
| 0.2903 | 2.0 | 1070 | 0.5550 | 0.4967 |
| 0.1839 | 3.0 | 1605 | 0.7295 | 0.5634 |
| 0.1132 | 4.0 | 2140 | 0.7762 | 0.5702 |
| 0.08 | 5.0 | 2675 | 0.9664 | 0.5797 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,976 |
anirudh21/bert-base-uncased-finetuned-wnli | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6854
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6854 | 0.5634 |
| No log | 2.0 | 80 | 0.6983 | 0.3239 |
| No log | 3.0 | 120 | 0.6995 | 0.5352 |
| No log | 4.0 | 160 | 0.6986 | 0.5634 |
| No log | 5.0 | 200 | 0.6996 | 0.5634 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,844 |
anirudh21/distilbert-base-uncased-finetuned-qnli | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6064981949458483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-qnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8121
- Accuracy: 0.6065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6949 | 0.4874 |
| No log | 2.0 | 312 | 0.6596 | 0.5957 |
| No log | 3.0 | 468 | 0.7186 | 0.5812 |
| 0.6026 | 4.0 | 624 | 0.7727 | 0.6029 |
| 0.6026 | 5.0 | 780 | 0.8121 | 0.6065 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,867 |
anirudh21/distilbert-base-uncased-finetuned-rte | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6173285198555957
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-rte
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6661
- Accuracy: 0.6173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.6921 | 0.5162 |
| No log | 2.0 | 312 | 0.6661 | 0.6173 |
| No log | 3.0 | 468 | 0.7794 | 0.5632 |
| 0.5903 | 4.0 | 624 | 0.8832 | 0.5921 |
| 0.5903 | 5.0 | 780 | 0.9376 | 0.5921 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,865 |
ardauzunoglu/c_ovk | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: c_ovk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c_ovk
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2516
- Accuracy: 0.9249
- F1: 0.9044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4038 | 1.0 | 2462 | 0.2424 | 0.9117 | 0.8848 |
| 0.2041 | 2.0 | 4924 | 0.2323 | 0.9230 | 0.9028 |
| 0.1589 | 3.0 | 7386 | 0.2516 | 0.9249 | 0.9044 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,509 |
ashish-chouhan/xlm-roberta-base-finetuned-marc | [
"good",
"great",
"ok",
"poor",
"terrible"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0171
- Mae: 0.5310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1404 | 1.0 | 308 | 1.0720 | 0.5398 |
| 0.9805 | 2.0 | 616 | 1.0171 | 0.5310 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| 1,423 |
avneet/distilbert-base-uncased-finetuned-sst2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metric:
name: Accuracy
type: accuracy
value: 0.9151376146788991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3651
- Accuracy: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1902 | 1.0 | 4210 | 0.3102 | 0.9117 |
| 0.1293 | 2.0 | 8420 | 0.3672 | 0.9048 |
| 0.084 | 3.0 | 12630 | 0.3651 | 0.9151 |
| 0.0682 | 4.0 | 16840 | 0.3971 | 0.9037 |
| 0.0438 | 5.0 | 21050 | 0.4720 | 0.9117 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 1,872 |
baihaisheng/bert_finetuning_test | null | Entry not found | 15 |
banri/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5258663312307151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7523
- Matthews Correlation: 0.5259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.533 | 1.0 | 535 | 0.5318 | 0.3887 |
| 0.3562 | 2.0 | 1070 | 0.5145 | 0.5100 |
| 0.2429 | 3.0 | 1605 | 0.6558 | 0.4888 |
| 0.1831 | 4.0 | 2140 | 0.7523 | 0.5259 |
| 0.1352 | 5.0 | 2675 | 0.8406 | 0.5182 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| 2,000 |
benjaminbeilharz/distilbert-base-uncased-next-turn-classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
berkergurcay/1k-fineutuned-bert-model | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
bitmorse/autonlp-ks-530615016 | [
"canceled",
"failed",
"live",
"successful"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bitmorse/autonlp-data-ks
co2_eq_emissions: 2.2247356264808964
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 530615016
- CO2 Emissions (in grams): 2.2247356264808964
## Validation Metrics
- Loss: 0.7859578132629395
- Accuracy: 0.676854818831649
- Macro F1: 0.3297126297995653
- Micro F1: 0.676854818831649
- Weighted F1: 0.6429522696884535
- Macro Precision: 0.33152557743856437
- Micro Precision: 0.676854818831649
- Weighted Precision: 0.6276125515413322
- Macro Recall: 0.33784302289888885
- Micro Recall: 0.676854818831649
- Weighted Recall: 0.676854818831649
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bitmorse/autonlp-ks-530615016
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,343 |
bitsanlp/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
blinjrm/finsent | [
"negative",
"neutral",
"positive"
] | Entry not found | 15 |
boronbrown48/topic_generalFromOther_v1 | null | Entry not found | 15 |
boronbrown48/topic_otherTopics_v1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
boronbrown48/wangchanberta-sentiment-504-v4 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | Entry not found | 15 |
bshlgrs/autonlp-classification_with_all_labellers-9532137 | [
"No",
"Unsure",
"Yes"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bshlgrs/autonlp-data-classification_with_all_labellers
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 9532137
## Validation Metrics
- Loss: 0.34556105732917786
- Accuracy: 0.8749890724713699
- Macro F1: 0.5243623959669343
- Micro F1: 0.8749890724713699
- Weighted F1: 0.8638030768409057
- Macro Precision: 0.5016762404900895
- Micro Precision: 0.8749890724713699
- Weighted Precision: 0.8547962562614184
- Macro Recall: 0.5529674694200845
- Micro Recall: 0.8749890724713699
- Weighted Recall: 0.8749890724713699
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-classification_with_all_labellers-9532137
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,375 |
caioamb/bert-base-uncased-finetuned-md | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-md
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-md
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2415 | 1.0 | 1044 | 0.2084 |
| 0.1244 | 2.0 | 2088 | 0.2903 |
| 0.0427 | 3.0 | 3132 | 0.3329 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
| 1,369 |
chisadi/nice-distilbert | [
"NICE_1",
"NICE_10",
"NICE_11",
"NICE_12",
"NICE_13",
"NICE_14",
"NICE_15",
"NICE_16",
"NICE_17",
"NICE_18",
"NICE_19",
"NICE_2",
"NICE_20",
"NICE_21",
"NICE_22",
"NICE_23",
"NICE_24",
"NICE_25",
"NICE_26",
"NICE_27",
"NICE_28",
"NICE_29",
"NICE_3",
"NICE_30",
"NICE_3... | Entry not found | 15 |
choondrise/emolve | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_3",
"LABEL_4",
... | Entry not found | 15 |
chrommium/helper-model | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-base-cased-sentence-finetuned-sent_in_news_sents
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7224199288256228
- name: F1
type: f1
value: 0.5137303178348194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_news_sents
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9506
- Accuracy: 0.7224
- F1: 0.5137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 1.0045 | 0.6690 | 0.1388 |
| No log | 2.0 | 162 | 0.9574 | 0.6228 | 0.2980 |
| No log | 3.0 | 243 | 1.0259 | 0.6477 | 0.3208 |
| No log | 4.0 | 324 | 1.1262 | 0.6619 | 0.4033 |
| No log | 5.0 | 405 | 1.3377 | 0.6299 | 0.3909 |
| No log | 6.0 | 486 | 1.5716 | 0.6868 | 0.3624 |
| 0.6085 | 7.0 | 567 | 1.6286 | 0.6762 | 0.4130 |
| 0.6085 | 8.0 | 648 | 1.6450 | 0.6940 | 0.4775 |
| 0.6085 | 9.0 | 729 | 1.7108 | 0.7224 | 0.4920 |
| 0.6085 | 10.0 | 810 | 1.8792 | 0.7046 | 0.5028 |
| 0.6085 | 11.0 | 891 | 1.8670 | 0.7153 | 0.4992 |
| 0.6085 | 12.0 | 972 | 1.8856 | 0.7153 | 0.4934 |
| 0.0922 | 13.0 | 1053 | 1.9506 | 0.7224 | 0.5137 |
| 0.0922 | 14.0 | 1134 | 2.0363 | 0.7189 | 0.4761 |
| 0.0922 | 15.0 | 1215 | 2.0601 | 0.7224 | 0.5053 |
| 0.0922 | 16.0 | 1296 | 2.0813 | 0.7153 | 0.5038 |
| 0.0922 | 17.0 | 1377 | 2.0960 | 0.7189 | 0.5065 |
| 0.0922 | 18.0 | 1458 | 2.1060 | 0.7224 | 0.5098 |
| 0.0101 | 19.0 | 1539 | 2.1153 | 0.7260 | 0.5086 |
| 0.0101 | 20.0 | 1620 | 2.1187 | 0.7260 | 0.5086 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| 3,039 |
chrommium/rubert-base-cased-sentence-finetuned-sent_in_ru | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-base-cased-sentence-finetuned-sent_in_ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_ru
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3503
- Accuracy: 0.6884
- F1: 0.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 441 | 0.7397 | 0.6630 | 0.6530 |
| 0.771 | 2.0 | 882 | 0.7143 | 0.6909 | 0.6905 |
| 0.5449 | 3.0 | 1323 | 0.8385 | 0.6897 | 0.6870 |
| 0.3795 | 4.0 | 1764 | 0.8851 | 0.6939 | 0.6914 |
| 0.3059 | 5.0 | 2205 | 1.0728 | 0.6933 | 0.6953 |
| 0.2673 | 6.0 | 2646 | 1.0673 | 0.7060 | 0.7020 |
| 0.2358 | 7.0 | 3087 | 1.5200 | 0.6830 | 0.6829 |
| 0.2069 | 8.0 | 3528 | 1.3439 | 0.7024 | 0.7016 |
| 0.2069 | 9.0 | 3969 | 1.3545 | 0.6830 | 0.6833 |
| 0.1724 | 10.0 | 4410 | 1.5591 | 0.6927 | 0.6902 |
| 0.1525 | 11.0 | 4851 | 1.6425 | 0.6818 | 0.6823 |
| 0.131 | 12.0 | 5292 | 1.8999 | 0.6836 | 0.6775 |
| 0.1253 | 13.0 | 5733 | 1.6959 | 0.6884 | 0.6877 |
| 0.1132 | 14.0 | 6174 | 1.9561 | 0.6776 | 0.6803 |
| 0.0951 | 15.0 | 6615 | 2.0356 | 0.6763 | 0.6754 |
| 0.1009 | 16.0 | 7056 | 1.7995 | 0.6842 | 0.6741 |
| 0.1009 | 17.0 | 7497 | 2.0638 | 0.6884 | 0.6811 |
| 0.0817 | 18.0 | 7938 | 2.1686 | 0.6884 | 0.6859 |
| 0.0691 | 19.0 | 8379 | 2.0874 | 0.6878 | 0.6889 |
| 0.0656 | 20.0 | 8820 | 2.1772 | 0.6854 | 0.6817 |
| 0.0652 | 21.0 | 9261 | 2.4018 | 0.6872 | 0.6896 |
| 0.0608 | 22.0 | 9702 | 2.2074 | 0.6770 | 0.6656 |
| 0.0677 | 23.0 | 10143 | 2.2101 | 0.6848 | 0.6793 |
| 0.0559 | 24.0 | 10584 | 2.2920 | 0.6848 | 0.6835 |
| 0.0524 | 25.0 | 11025 | 2.3503 | 0.6884 | 0.6875 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| 3,185 |
chrommium/sbert_large-finetuned-sent_in_news_sents_3lab | [
"LABEL_-1",
"LABEL_0",
"LABEL_1"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sbert_large-finetuned-sent_in_news_sents_3lab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sbert_large-finetuned-sent_in_news_sents_3lab
This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9443
- Accuracy: 0.8580
- F1: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 264 | 0.6137 | 0.8608 | 0.3084 |
| 0.524 | 2.0 | 528 | 0.6563 | 0.8722 | 0.4861 |
| 0.524 | 3.0 | 792 | 0.7110 | 0.8494 | 0.4687 |
| 0.2225 | 4.0 | 1056 | 0.7323 | 0.8608 | 0.6015 |
| 0.2225 | 5.0 | 1320 | 0.9604 | 0.8551 | 0.6185 |
| 0.1037 | 6.0 | 1584 | 0.8801 | 0.8523 | 0.5535 |
| 0.1037 | 7.0 | 1848 | 0.9443 | 0.8580 | 0.6199 |
| 0.0479 | 8.0 | 2112 | 1.0048 | 0.8608 | 0.6168 |
| 0.0479 | 9.0 | 2376 | 0.9757 | 0.8551 | 0.6097 |
| 0.0353 | 10.0 | 2640 | 1.0743 | 0.8580 | 0.6071 |
| 0.0353 | 11.0 | 2904 | 1.1216 | 0.8580 | 0.6011 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| 2,144 |
damlab/HIV_PR_resist | [
"FPV",
"IDV",
"NFV",
"SQV"
] | ---
license: mit
---
# HIV_PR_resist model
## Table of Contents
- [Summary](#summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses--limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Protease-Resistance model was trained as a refinement of the [HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) and serves to better predict whether an HIV protease sequence will be resistant to certain protease inhibitors. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV protease sequences from the [Stanford HIV Genotype-Phenotype Database](https://hivdb.stanford.edu/pages/genotype-phenotype.html), allowing even more precise prediction of protease inhibitor resistance than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Protease-Resistance model is intended to predict the likelihood that an HIV protease sequence will be resistant to protease inhibitors. The protease gene is responsible for cleaving viral proteins into their active states, and as such is an ideal target for antiretroviral therapy. Annotation programs that predict and identify protease resistance from known mutations already exist, though with varied results. The HIV-BERT-Protease-Resistance model is designed to provide an alternative, NLP-based mechanism for predicting resistance mutations when provided with an HIV protease sequence.
## Intended Uses & Limitations
This tool can be used as a predictor of protease resistance mutations within an HIV genomic sequence. It should not be considered a clinical diagnostic tool.
## How to use
*Prediction example of protease sequences*
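The card leaves this section as a placeholder; below is a minimal sketch of what a prediction might look like, assuming the checkpoint loads as a standard multi-label sequence-classification model (the example sequence and the sigmoid post-processing are our assumptions, not from the original card):
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("damlab/HIV_PR_resist")
model = AutoModelForSequenceClassification.from_pretrained("damlab/HIV_PR_resist")

# Space-separated residues, matching the preprocessing described below
sequence = "P Q I T L W Q R P L V T I K I G G Q L K E A L L D T G A D D T V L E"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label task: a sigmoid per drug rather than a softmax over drugs
probs = torch.sigmoid(logits)[0]
for i, p in enumerate(probs):
    print(model.config.id2label[i], f"{float(p):.3f}")
```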
## Training Data
This model was trained using the [damlab/HIV-PI dataset](https://huggingface.co/datasets/damlab/HIV_PI) using the 0th fold. The dataset consists of 1959 sequences (approximately 99 tokens each) extracted from the Stanford HIV Genotype-Phenotype Database.
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
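A small sketch of that residue-level preprocessing (the helper name is illustrative, not the original code):
```
import re

def preprocess(seq: str) -> str:
    """Map rare residues (U, Z, O, B) to X and space-separate amino acids."""
    seq = re.sub(r"[UZOB]", "X", seq.upper())
    return " ".join(seq)

print(preprocess("MKTUZB"))  # -> M K T X X X
```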
### Training
The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule, and continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be resistant to multiple drugs), the loss was calculated as the binary cross-entropy for each category. The BCE was weighted by the inverse of the class ratio to balance the loss across the class imbalance.
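A sketch of that weighting scheme, assuming inverse-frequency positive weights fed to PyTorch's BCE-with-logits loss (the per-drug counts are placeholders, not the dataset's real ratios):
```
import torch
from torch import nn

# Placeholder per-drug positive counts; the real ratios come from the training split
positives = torch.tensor([400., 250., 600., 300.])
total = 1959.0  # sequences in the dataset, per the card
pos_weight = (total - positives) / positives  # inverse of the class ratio

loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
logits = torch.randn(8, 4)                    # batch of 8, one logit per drug
labels = torch.randint(0, 2, (8, 4)).float()  # multi-hot resistance labels
print(loss_fn(logits, labels))
```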
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed] | 3,433 |
darkzara/results | null | Entry not found | 15 |
dee4hf/autonlp-shajBERT-38639804 | [
"communal_attack",
"hate_speech",
"inciteful",
"personal_attack",
"political_comment",
"religious",
"religious hatred",
"religious_hatred",
"suicidal_attack"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- dee4hf/autonlp-data-shajBERT
co2_eq_emissions: 11.98841452241473
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 38639804
- CO2 Emissions (in grams): 11.98841452241473
## Validation Metrics
- Loss: 0.421400249004364
- Accuracy: 0.86783988957902
- Macro F1: 0.8669477050676501
- Micro F1: 0.86783988957902
- Weighted F1: 0.86694770506765
- Macro Precision: 0.867606300132228
- Micro Precision: 0.86783988957902
- Weighted Precision: 0.8676063001322278
- Macro Recall: 0.86783988957902
- Micro Recall: 0.86783988957902
- Weighted Recall: 0.86783988957902
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dee4hf/autonlp-shajBERT-38639804
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,341 |
devkushal75/medtextclassifier | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-0-0k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-0-100k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-0-1500k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-0-1800k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-0-200k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-0-20k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-0-400k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |