modelId: string (length 6–107)
label: list
readme: string (length 0–56.2k)
readme_len: int64 (range 0–56.2k)
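Each record below pairs a model ID with its label list, raw README text, and README length. As a quick orientation, here is a minimal sketch of working with such records once parsed — the sample dicts are abridged from rows in this dump, and the filtering thresholds (non-null labels, README longer than 100 characters) are illustrative assumptions, not part of the dataset:

```python
# Minimal sketch: each row of the dump as a dict following the schema above.
# Sample records are abridged from rows in this dump; thresholds are illustrative.
rows = [
    {"modelId": "yobi/klue-roberta-base-ynat",
     "label": ["LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3",
               "LABEL_4", "LABEL_5", "LABEL_6"],
     "readme": "", "readme_len": 0},
    {"modelId": "Hieu/scam-detection",
     "label": None,
     "readme": "Entry not found", "readme_len": 15},
    {"modelId": "ali2066/finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13",
     "label": ["NEGATIVE", "POSITIVE"],
     "readme": "--- license: apache-2.0 ...", "readme_len": 1788},
]

# Keep only rows with both a usable label list and a non-trivial README.
usable = [r for r in rows
          if r["label"] is not None and r["readme_len"] > 100]

for r in usable:
    print(r["modelId"], "-", len(r["label"]), "labels,",
          r["readme_len"], "README chars")
```

Rows whose `label` is `null` (e.g. "Entry not found" entries) drop out of such a filter, which is why several records below carry a `null` label despite having README text.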
yobi/klue-roberta-base-ynat
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6" ]
0
ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-chinese-amazon_zh_20000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-chinese-amazon_zh_20000 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1518 - Accuracy: 0.5092 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.196 | 1.0 | 1250 | 1.1518 | 0.5092 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.3 - Tokenizers 0.10.3
1,346
ali2066/finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4676 - Accuracy: 0.8299 - F1: 0.8892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 | | No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 | | 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 | | 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 | | 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
1,788
atlantis/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9195 - name: F1 type: f1 value: 0.9197362586063258 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2272 - Accuracy: 0.9195 - F1: 0.9197 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.83 | 1.0 | 250 | 0.3238 | 0.9005 | 0.8983 | | 0.2503 | 2.0 | 500 | 0.2272 | 0.9195 | 0.9197 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,807
Hieu/scam-detection
null
Entry not found
15
bishnu/finetuning-sentiment-model-3000-samples
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.86 - name: F1 type: f1 value: 0.8556701030927835 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.5523 - Accuracy: 0.86 - F1: 0.8557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
1,504
artemis13fowl/bert-base-uncased-imdb
null
## bert-base-uncased finetuned on IMDB dataset Evaluation set was created by taking 1000 samples from test set ``` DatasetDict({ train: Dataset({ features: ['text', 'label'], num_rows: 25000 }) dev: Dataset({ features: ['text', 'label'], num_rows: 1000 }) test: Dataset({ features: ['text', 'label'], num_rows: 24000 }) }) ``` ## Parameters ``` max_sequence_length = 128 batch_size = 32 eval_steps = 100 learning_rate=2e-05 num_train_epochs=5 early_stopping_patience = 10 ``` ## Training Run ``` [2700/3910 1:11:43 < 32:09, 0.63 it/s, Epoch 3/5] Step Training Loss Validation Loss Accuracy Precision Recall F1 Runtime Samples Per Second 100 No log 0.371974 0.845000 0.798942 0.917004 0.853911 15.256900 65.544000 200 No log 0.349631 0.850000 0.873913 0.813765 0.842767 15.288600 65.408000 300 No log 0.359376 0.845000 0.869281 0.807692 0.837356 15.303900 65.343000 400 No log 0.307613 0.870000 0.851351 0.892713 0.871542 15.358400 65.111000 500 0.364500 0.309362 0.856000 0.807018 0.931174 0.864662 15.326100 65.248000 600 0.364500 0.302709 0.867000 0.881607 0.844130 0.862461 15.324400 65.255000 700 0.364500 0.300102 0.871000 0.894168 0.838057 0.865204 15.474900 64.621000 800 0.364500 0.383784 0.866000 0.833333 0.910931 0.870406 15.380100 65.019000 900 0.364500 0.309934 0.874000 0.881743 0.860324 0.870902 15.358900 65.109000 1000 0.254600 0.332236 0.872000 0.894397 0.840081 0.866388 15.442700 64.756000 1100 0.254600 0.330807 0.871000 0.877847 0.858300 0.867963 15.410900 64.889000 1200 0.254600 0.352724 0.872000 0.925581 0.805668 0.861472 15.272800 65.476000 1300 0.254600 0.278529 0.881000 0.891441 0.864372 0.877698 15.408200 64.900000 1400 0.254600 0.291371 0.878000 0.854962 0.906883 0.880157 15.427400 64.820000 1500 0.208400 0.324827 0.869000 0.904232 0.821862 0.861082 15.338600 65.195000 1600 0.208400 0.377024 0.884000 0.898734 0.862348 0.880165 15.414500 64.874000 1700 0.208400 0.375274 0.885000 0.881288 0.886640 0.883956 15.367200 65.073000 1800 0.208400 0.378904 0.880000 0.877016 0.880567 0.878788 15.363900 65.088000 1900 0.208400 0.410517 0.874000 0.866534 0.880567 0.873494 15.324700 65.254000 2000 0.130800 0.404030 0.876000 0.888655 0.856275 0.872165 15.414200 64.875000 2100 0.130800 0.390763 0.883000 0.882353 0.880567 0.881459 15.341500 65.183000 2200 0.130800 0.417967 0.880000 0.875502 0.882591 0.879032 15.351300 65.141000 2300 0.130800 0.390974 0.883000 0.898520 0.860324 0.879007 15.396100 64.952000 2400 0.130800 0.479739 0.874000 0.856589 0.894737 0.875248 15.460500 64.681000 2500 0.098400 0.473215 0.875000 0.883576 0.860324 0.871795 15.392200 64.968000 2600 0.098400 0.532294 0.872000 0.889362 0.846154 0.867220 15.364100 65.087000 2700 0.098400 0.536664 0.881000 0.880325 0.878543 0.879433 15.351100 65.142000 TrainOutput(global_step=2700, training_loss=0.2004435383832013, metrics={'train_runtime': 4304.5331, 'train_samples_per_second': 0.908, 'total_flos': 7258763970957312, 'epoch': 3.45}) ``` ## Classification Report ``` precision recall f1-score support 0 0.90 0.87 0.89 11994 1 0.87 0.90 0.89 12006 accuracy 0.89 24000 macro avg 0.89 0.89 0.89 24000 weighted avg 0.89 0.89 0.89 24000 ```
3,601
Ramu/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9262005126757141 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2167 - Accuracy: 0.926 - F1: 0.9262 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8112 | 1.0 | 250 | 0.3147 | 0.903 | 0.8992 | | 0.2454 | 2.0 | 500 | 0.2167 | 0.926 | 0.9262 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.8.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
1,804
kapilchauhan/efl-finetuned-cola
null
--- tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: efl-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.6097804486545971 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # efl-finetuned-cola This model is a fine-tuned version of [nghuyong/ernie-2.0-en](https://huggingface.co/nghuyong/ernie-2.0-en) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4688 - Matthews Correlation: 0.6098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 134 | 0.4795 | 0.5403 | | No log | 2.0 | 268 | 0.4061 | 0.6082 | | No log | 3.0 | 402 | 0.4688 | 0.6098 | | 0.2693 | 4.0 | 536 | 0.5332 | 0.6050 | | 0.2693 | 5.0 | 670 | 0.6316 | 0.6098 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
1,936
nikolamilosevic/distil_bert_uncased-finetuned-relations
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - f1 model-index: - name: distil_bert_uncased-finetuned-relations results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distil_bert_uncased-finetuned-relations This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4191 - Accuracy: 0.8866 - Prec: 0.8771 - Recall: 0.8866 - F1: 0.8808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Prec | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:| | 1.1823 | 1.0 | 232 | 0.5940 | 0.8413 | 0.8273 | 0.8413 | 0.8224 | | 0.4591 | 2.0 | 464 | 0.4600 | 0.8607 | 0.8539 | 0.8607 | 0.8555 | | 0.3106 | 3.0 | 696 | 0.4160 | 0.8812 | 0.8763 | 0.8812 | 0.8785 | | 0.246 | 4.0 | 928 | 0.4113 | 0.8834 | 0.8766 | 0.8834 | 0.8796 | | 0.2013 | 5.0 | 1160 | 0.4191 | 0.8866 | 0.8771 | 0.8866 | 0.8808 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.13.0.dev20220614 - Datasets 2.2.2 - Tokenizers 0.11.6
1,884
RaghuramKol/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.927 - name: F1 type: f1 value: 0.9271888946173477 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2218 - Accuracy: 0.927 - F1: 0.9272 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8487 | 1.0 | 250 | 0.3274 | 0.906 | 0.9030 | | 0.2595 | 2.0 | 500 | 0.2218 | 0.927 | 0.9272 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,805
cb2-kai/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.86 - name: F1 type: f1 value: 0.8679245283018867 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3568 - Accuracy: 0.86 - F1: 0.8679 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
1,505
antonio-artur/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9260113300845928 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2280 - Accuracy: 0.926 - F1: 0.9260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8646 | 1.0 | 250 | 0.3326 | 0.9045 | 0.9009 | | 0.2663 | 2.0 | 500 | 0.2280 | 0.926 | 0.9260 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu102 - Datasets 2.0.0 - Tokenizers 0.11.6
1,804
dapang/distilbert-base-uncased-finetuned-moral-action
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-moral-action results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-moral-action This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4632 - Accuracy: 0.7912 - F1: 0.7912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.716387809233253e-05 - train_batch_size: 2000 - eval_batch_size: 2000 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 10 | 0.5406 | 0.742 | 0.7399 | | No log | 2.0 | 20 | 0.4810 | 0.7628 | 0.7616 | | No log | 3.0 | 30 | 0.4649 | 0.786 | 0.7856 | | No log | 4.0 | 40 | 0.4600 | 0.7916 | 0.7916 | | No log | 5.0 | 50 | 0.4632 | 0.7912 | 0.7912 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1 - Datasets 2.0.0 - Tokenizers 0.11.0
1,746
Shadman-Rohan/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9247907524762314 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2083 - Accuracy: 0.9245 - F1: 0.9248 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7794 | 1.0 | 250 | 0.2870 | 0.9115 | 0.9099 | | 0.2311 | 2.0 | 500 | 0.2083 | 0.9245 | 0.9248 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
1,806
palakagl/distilbert_MultiClass_TextClassification
[ "alarm_query", "alarm_remove", "alarm_set", "audio_volume_down", "audio_volume_mute", "audio_volume_up", "calendar_query", "calendar_remove", "calendar_set", "cooking_recipe", "datetime_convert", "datetime_query", "email_addcontact", "email_query", "email_querycontact", "email_sendemai...
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - palakagl/autotrain-data-PersonalAssitant co2_eq_emissions: 2.258363491829382 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 717221781 - CO2 Emissions (in grams): 2.258363491829382 ## Validation Metrics - Loss: 0.38660314679145813 - Accuracy: 0.9042081949058693 - Macro F1: 0.9079200295131094 - Micro F1: 0.9042081949058692 - Weighted F1: 0.9052766730963512 - Macro Precision: 0.9116101664087508 - Micro Precision: 0.9042081949058693 - Weighted Precision: 0.9097680514456175 - Macro Recall: 0.9080246002936301 - Micro Recall: 0.9042081949058693 - Weighted Recall: 0.9042081949058693 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/palakagl/autotrain-PersonalAssitant-717221781 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("palakagl/autotrain-PersonalAssitant-717221781", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("palakagl/autotrain-PersonalAssitant-717221781", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,419
Xenova/sponsorblock-classifier-v2
[ "INTERACTION", "NONE", "SELFPROMO", "SPONSOR" ]
--- tags: - text-classification - generic library_name: generic widget: - text: 'This video is sponsored by squarespace' example_title: Sponsor - text: 'Check out the merch at linustechtips.com' example_title: Unpaid/self promotion - text: "Don't forget to like, comment and subscribe" example_title: Interaction reminder - text: 'pqh4LfPeCYs,824.695,826.267,826.133,829.876,835.933,927.581' example_title: Extract text from video ---
443
dfsj/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9222074564200887 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.922 - F1: 0.9222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8116 | 1.0 | 250 | 0.3076 | 0.9035 | 0.9013 | | 0.2426 | 2.0 | 500 | 0.2170 | 0.922 | 0.9222 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu102 - Datasets 2.0.0 - Tokenizers 0.12.1
1,804
Intel/camembert-base-mrpc
[ "equivalent", "not_equivalent" ]
--- language: - en license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: camembert-base-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8504901960784313 - name: F1 type: f1 value: 0.8927943760984183 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-base-mrpc This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4286 - Accuracy: 0.8505 - F1: 0.8928 - Combined Score: 0.8716 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu102 - Datasets 2.1.0 - Tokenizers 0.11.6
1,501
Intel/electra-small-discriminator-mrpc
[ "equivalent", "not_equivalent" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: electra-small-discriminator-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8529411764705882 - name: F1 type: f1 value: 0.8983050847457628 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-small-discriminator-mrpc This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.3909 - Accuracy: 0.8529 - F1: 0.8983 - Combined Score: 0.8756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu102 - Datasets 2.1.0 - Tokenizers 0.11.6
1,574
UT/MULTIBRT
null
Entry not found
15
charlieoneill/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.938 - name: F1 type: f1 value: 0.9383526007023721 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1622 - Accuracy: 0.938 - F1: 0.9384 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0917 | 1.0 | 250 | 0.1935 | 0.9305 | 0.9306 | | 0.0719 | 2.0 | 500 | 0.1622 | 0.938 | 0.9384 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.12.1
1,797
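Several of the emotion-classification records above expose only generic ids (`LABEL_0`…`LABEL_5`) in their label field, while others in this dump (e.g. `allermat/...` and `michauhl/...`) list the named classes of the `emotion` dataset in order: sadness, joy, love, anger, fear, surprise. A minimal sketch of mapping pipeline-style predictions back to those names, assuming the checkpoint preserved the dataset's default class order:

```python
# Class order of the `emotion` dataset, as listed by other cards in this dump.
EMOTION_LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def readable(prediction: dict) -> dict:
    """Map a pipeline-style {'label': 'LABEL_3', 'score': ...} dict to a named class."""
    idx = int(prediction["label"].split("_")[1])  # 'LABEL_3' -> 3
    return {"label": EMOTION_LABELS[idx], "score": prediction["score"]}

print(readable({"label": "LABEL_3", "score": 0.97}))  # → {'label': 'anger', 'score': 0.97}
```

This assumes the fine-tuning script did not reorder classes; when in doubt, check the checkpoint's `config.json` `id2label` field instead.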
Parsa/Chemical_explosion_classification
null
For testing it yourself, the easiest way is using the colab link below. Github repo: https://github.com/mephisto121/Chemical_explosion_classification [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1GQmh1g2bRdqgQCnM6b_iY-eAQCRfhMJP?usp=sharing)
313
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
[ "NEGATIVE", "POSITIVE" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7321 - Precision: 0.9795 - Recall: 0.7277 - F1: 0.835 - Accuracy: 0.7208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 130 | 0.3755 | 0.8521 | 0.9910 | 0.9163 | 0.8529 | | No log | 2.0 | 260 | 0.3352 | 0.8875 | 0.9638 | 0.9241 | 0.8713 | | No log | 3.0 | 390 | 0.3370 | 0.8918 | 0.9321 | 0.9115 | 0.8529 | | 0.4338 | 4.0 | 520 | 0.3415 | 0.8957 | 0.9321 | 0.9135 | 0.8566 | | 0.4338 | 5.0 | 650 | 0.3416 | 0.8918 | 0.9321 | 0.9115 | 0.8529 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
2,044
tomhosking/bert-base-uncased-debiased-nli
[ "ENTAILMENT", "NEUTRAL", "CONTRADICTION" ]
--- license: apache-2.0 widget: - text: "[CLS] Rover is a dog. [SEP] Rover is a cat. [SEP]" --- `bert-base-uncased`, fine tuned on the debiased NLI dataset from "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets", Wu et al., 2022. Tuned using the code at https://github.com/jimmycode/gen-debiased-nli
342
pile-of-law/distilbert-base-uncased-finetuned-eoir_privacy
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - eoir_privacy metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-eoir_privacy results: - task: name: Text Classification type: text-classification dataset: name: eoir_privacy type: eoir_privacy args: all metrics: - name: Accuracy type: accuracy value: 0.9052835051546392 - name: F1 type: f1 value: 0.8088426527958388 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-eoir_privacy This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the eoir_privacy dataset. It achieves the following results on the evaluation set: - Loss: 0.3681 - Accuracy: 0.9053 - F1: 0.8088 ## Model description Model predicts whether to mask names as pseudonyms in any text. Input format should be a paragraph with names masked. It will then output whether to use a pseudonym because the EOIR courts would not allow such private/sensitive information to become public unmasked. ## Intended uses & limitations This is a minimal privacy standard and will likely not work on out-of-distribution data. ## Training and evaluation data We train on the EOIR Privacy dataset and evaluate further using sensitivity analyses. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 395 | 0.3053 | 0.8789 | 0.7432 | | 0.3562 | 2.0 | 790 | 0.2857 | 0.8976 | 0.7883 | | 0.2217 | 3.0 | 1185 | 0.3358 | 0.8905 | 0.7550 | | 0.1509 | 4.0 | 1580 | 0.3505 | 0.9040 | 0.8077 | | 0.1509 | 5.0 | 1975 | 0.3681 | 0.9053 | 0.8088 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1 ### Citation ``` @misc{hendersonkrass2022pileoflaw, url = {https://arxiv.org/abs/2207.00220}, author = {Henderson*, Peter and Krass*, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.}, title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset}, publisher = {arXiv}, year = {2022} } ```
2,831
allermat/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9233300539962602 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2244 - Accuracy: 0.923 - F1: 0.9233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8412 | 1.0 | 250 | 0.3186 | 0.904 | 0.9022 | | 0.2501 | 2.0 | 500 | 0.2244 | 0.923 | 0.9233 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
1,798
Paleontolog/bert_sentence_classifier
null
Entry not found
15
Jeevesh8/6ep_bert_ft_cola-48
null
Entry not found
15
Jeevesh8/6ep_bert_ft_cola-73
null
Entry not found
15
Leizhang/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.12.1
1,081
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_42
[ "0", "1", "2" ]
Entry not found
15
priyamm/autotrain-KeywordExtraction-882328335
[ "neg", "pos" ]
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - priyamm/autotrain-data-KeywordExtraction co2_eq_emissions: 0.21373468108000182 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 882328335 - CO2 Emissions (in grams): 0.21373468108000182 ## Validation Metrics - Loss: 0.2641160488128662 - Accuracy: 0.9128 - Precision: 0.9444444444444444 - Recall: 0.8772 - AUC: 0.9709556000000001 - F1: 0.9095810866860223 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/priyamm/autotrain-KeywordExtraction-882328335 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("priyamm/autotrain-KeywordExtraction-882328335", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("priyamm/autotrain-KeywordExtraction-882328335", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,184
DuboiJ/finetuning-sentiment-model-3000-samples
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8633333333333333 - name: F1 type: f1 value: 0.8637873754152824 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3211 - Accuracy: 0.8633 - F1: 0.8638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
1,521
arcAman07/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9240598378254522 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2222 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8294 | 1.0 | 250 | 0.3209 | 0.9025 | 0.9001 | | 0.2536 | 2.0 | 500 | 0.2222 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
1,804
Ritvik19/autotrain-sentiment_polarity-918130222
[ "0.0", "1.0" ]
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - Ritvik19/autotrain-data-sentiment_polarity co2_eq_emissions: 4.280488237750762 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 918130222 - CO2 Emissions (in grams): 4.280488237750762 ## Validation Metrics - Loss: 0.13608604669570923 - Accuracy: 0.9504804036293305 - Precision: 0.9792047060317863 - Recall: 0.9647185343057701 - AUC: 0.9791895292939061 - F1: 0.9719076444852428 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ritvik19/autotrain-sentiment_polarity-918130222 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ritvik19/autotrain-sentiment_polarity-918130222", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ritvik19/autotrain-sentiment_polarity-918130222", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,213
CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254
[ "Applied Science", "Arts", "Belief & Thought", "Commerce & Finance", "History", "Imaginative", "Natural & Pure Science", "Social Science " ]
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - CH0KUN/autotrain-data-TNC_Domain_WangchanBERTa co2_eq_emissions: 25.144394918865913 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 921730254 - CO2 Emissions (in grams): 25.144394918865913 ## Validation Metrics - Loss: 0.7080970406532288 - Accuracy: 0.7775925925925926 - Macro F1: 0.7758012615987406 - Micro F1: 0.7775925925925925 - Weighted F1: 0.7758012615987406 - Macro Precision: 0.7833307663368776 - Micro Precision: 0.7775925925925926 - Weighted Precision: 0.7833307663368777 - Macro Recall: 0.7775925925925926 - Micro Recall: 0.7775925925925926 - Weighted Recall: 0.7775925925925926 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("CH0KUN/autotrain-TNC_Domain_WangchanBERTa-921730254", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,445
Jherb/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8666666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3063 - Accuracy: 0.8667 - F1: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
1,521
Marvin67/distil_covid
[ "Bio-weapon", "COVID-19 cases and deaths statistics", "Chinese government (Chinese communist party - CCP)", "Other", "Wet market and eating habits", "Wuhan virus lab" ]
--- license: other ---
23
Jeevesh8/std_pnt_04_feather_berts-13
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Jeevesh8/std_pnt_04_feather_berts-11
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
Alireza1044/mobilebert_cola
[ "acceptable", "unacceptable" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5277813760438573 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cola This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6337 - Matthews Correlation: 0.5278 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
1,443
mosesju/distilbert-base-uncased-finetuned-news
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - ag_news metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-news results: - task: name: Text Classification type: text-classification dataset: name: ag_news type: ag_news args: default metrics: - name: Accuracy type: accuracy value: 0.9388157894736842 - name: F1 type: f1 value: 0.9388275184627893 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-news This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset. It achieves the following results on the evaluation set: - Loss: 0.2117 - Accuracy: 0.9388 - F1: 0.9388 ## Model description This model is intended to categorize news headlines into one of four categories; World, Sports, Science & Technology, or Business ## Intended uses & limitations The model is limited by the training data it used. If you use the model with a news story that falls outside of the four intended categories, it produces quite confused results. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2949 | 1.0 | 3750 | 0.2501 | 0.9262 | 0.9261 | | 0.1569 | 2.0 | 7500 | 0.2117 | 0.9388 | 0.9388 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
2,073
corgito/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.87 - name: F1 type: f1 value: 0.8712871287128714 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3105 - Accuracy: 0.87 - F1: 0.8713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.3.0 - Tokenizers 0.12.1
1,505
amissier/distilbert-amazon-shoe-reviews
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilbert-amazon-shoe-reviews results: - task: type: text-classification name: Text Classification dataset: type: amazon_us_reviews name: Amazon US reviews split: Shoes metrics: - type: accuracy value: 0.48 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-amazon-shoe-reviews This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3445 - Accuracy: 0.48 - F1: [0. 0. 0. 0. 0.64864865] - Precision: [0. 0. 0. 0. 0.48] - Recall: [0. 0. 0. 0. 1.] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------:|:----------------:| | No log | 1.0 | 15 | 1.3445 | 0.48 | [0. 0. 0. 0. 0.64864865] | [0. 0. 0. 0. 0.48] | [0. 0. 0. 0. 1.] | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
2,069
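The `distilbert-amazon-shoe-reviews` card above reports per-class arrays of the form F1 `[0. 0. 0. 0. 0.64864865]`, Precision `[0. 0. 0. 0. 0.48]`, Recall `[0. 0. 0. 0. 1.]` with 0.48 accuracy — the signature of a model that collapsed to always predicting one class. A small, dependency-free sketch reproducing those numbers (the 48%-prevalence eval split is an illustrative assumption consistent with the card's accuracy):

```python
# Per-class precision/recall/F1 computed from scratch, to show how a
# constant-prediction model yields zeros everywhere except the predicted class.
def per_class_prf(y_true, y_pred, n_classes=5):
    prec, rec, f1 = [], [], []
    for c in range(n_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        p_ = tp / (tp + fp) if tp + fp else 0.0
        r_ = tp / (tp + fn) if tp + fn else 0.0
        f_ = 2 * p_ * r_ / (p_ + r_) if p_ + r_ else 0.0
        prec.append(p_); rec.append(r_); f1.append(f_)
    return prec, rec, f1

# Hypothetical eval set: 48 of 100 examples are class 4; the model predicts 4 always.
y_true = [4] * 48 + [0] * 13 + [1] * 13 + [2] * 13 + [3] * 13
y_pred = [4] * 100
prec, rec, f1 = per_class_prf(y_true, y_pred)
print(prec[4], rec[4], round(f1[4], 8))  # → 0.48 1.0 0.64864865
```

The match with the card's arrays suggests the single reported training epoch (15 steps) was not enough for the model to learn anything beyond the majority class.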
Alireza1044/MobileBERT_Theseus-mnli
[ "contradiction", "entailment", "neutral" ]
Entry not found
15
Sayan01/tiny-bert-qnli-distilled
[ "entailment", "not_entailment" ]
Entry not found
15
zunicd/finetuning-sentiment-model-3000-samples
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.8741721854304636 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3349 - Accuracy: 0.8733 - F1: 0.8742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,521
dminiotas05/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1027 - Accuracy: 0.5447 - F1: 0.4832 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1848 | 1.0 | 188 | 1.1199 | 0.538 | 0.4607 | | 1.0459 | 2.0 | 376 | 1.1027 | 0.5447 | 0.4832 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,504
shubhamitra/distilbert-base-uncased-finetuned-toxic-classification
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-toxic-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-toxic-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 123 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | No log | 1.0 | 498 | 0.0419 | 0.7754 | 0.8736 | 0.9235 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Tokenizers 0.12.1
1,479
upsalite/xlm-roberta-base-finetuned-emotion-2-labels
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: xlm-roberta-base-finetuned-emotion-2-labels results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-emotion-2-labels This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1200 - Accuracy: 0.835 - F1: 0.8335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6973 | 1.0 | 25 | 0.6917 | 0.5 | 0.3333 | | 0.6626 | 2.0 | 50 | 0.5690 | 0.745 | 0.7431 | | 0.5392 | 3.0 | 75 | 0.4598 | 0.76 | 0.7591 | | 0.4253 | 4.0 | 100 | 0.4313 | 0.8 | 0.7993 | | 0.2973 | 5.0 | 125 | 0.5872 | 0.795 | 0.7906 | | 0.2327 | 6.0 | 150 | 0.4951 | 0.805 | 0.8049 | | 0.173 | 7.0 | 175 | 0.6095 | 0.815 | 0.8142 | | 0.1159 | 8.0 | 200 | 0.6523 | 0.825 | 0.8246 | | 0.0791 | 9.0 | 225 | 0.6651 | 0.825 | 0.8243 | | 0.0557 | 10.0 | 250 | 0.8242 | 0.83 | 0.8286 | | 0.0643 | 11.0 | 275 | 0.6710 | 0.825 | 0.8243 | | 0.0507 | 12.0 | 300 | 0.7729 | 0.83 | 0.8294 | | 0.0239 | 13.0 | 325 | 0.8618 | 0.83 | 0.8283 | | 0.0107 | 14.0 | 350 | 0.9683 | 0.835 | 0.8335 | | 0.0233 | 15.0 | 375 | 1.0850 | 0.825 | 0.8227 | | 0.0134 | 16.0 | 400 | 0.9801 | 0.835 | 0.8343 | | 0.0122 | 17.0 | 425 | 1.0427 | 0.845 | 0.8439 | | 0.0046 | 18.0 | 450 | 1.0867 | 0.84 | 0.8387 | | 0.0038 | 19.0 | 475 | 1.0950 | 0.83 | 0.8289 | | 0.002 | 20.0 | 500 | 1.1200 | 0.835 | 0.8335 | ### Framework versions - Transformers 4.19.0 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.12.1
2,764
michauhl/distilbert-base-uncased-finetuned-emotion
[ "sadness", "joy", "love", "anger", "fear", "surprise" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9405 - name: F1 type: f1 value: 0.9404976918144629 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1891 - Accuracy: 0.9405 - F1: 0.9405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1344 | 1.0 | 1000 | 0.1760 | 0.933 | 0.9331 | | 0.0823 | 2.0 | 2000 | 0.1891 | 0.9405 | 0.9405 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0.post202 - Datasets 2.3.2 - Tokenizers 0.11.0
1,808
jhonparra18/bert-base-cased-fine-tuning-cvs-hf-studio-name
[ "Agile Delivery", "Business Hacking", "Cloud Ops", "Data and AI", "Design", "Digital Marketing", "Digital eXperience Platforms", "Enterprise Apps", "Gaming", "Generic", "Process Optimization", "Product Acceleration", "Quality Engineering", "Salesforce", "Scalable Platforms", "Staff Gen...
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: bert-base-cased-fine-tuning-cvs-hf-studio-name results: [] widget: - text: "Egresado de la carrera Ingeniería en Computación Conocimientos de lenguajes HTML, CSS, Javascript y MySQL. Experiencia trabajando en ámbitos de redes de pequeña y mediana escala. Inglés Hablado nivel básico, escrito nivel intermedio.HTML, CSS y JavaScript. Realidad aumentada. Lenguaje R. HTML5, JavaScript y Nodejs" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-fine-tuning-cvs-hf-studio-name This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2601 - Accuracy: 0.6500 - F1: 0.6500 - Precision: 0.6500 - Recall: 0.6500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.4407 | 0.24 | 500 | 1.5664 | 0.5528 | 0.5528 | 0.5528 | 0.5528 | | 1.3055 | 0.49 | 1000 | 1.4891 | 0.5745 | 0.5745 | 0.5745 | 0.5745 | | 1.373 | 0.73 | 1500 | 1.3634 | 0.6180 | 0.6180 | 0.6180 | 0.6180 | | 1.3621 | 0.98 | 2000 | 1.3768 | 0.6139 | 0.6139 | 0.6139 | 0.6139 | | 1.1677 | 1.22 | 2500 | 1.3330 | 0.6395 | 0.6395 | 0.6395 | 
0.6395 | | 1.0826 | 1.47 | 3000 | 1.4003 | 0.6146 | 0.6146 | 0.6146 | 0.6146 | | 1.0968 | 1.71 | 3500 | 1.2601 | 0.6500 | 0.6500 | 0.6500 | 0.6500 | | 1.0896 | 1.96 | 4000 | 1.2826 | 0.6564 | 0.6564 | 0.6564 | 0.6564 | | 0.8572 | 2.2 | 4500 | 1.3254 | 0.6569 | 0.6569 | 0.6569 | 0.6569 | | 0.822 | 2.44 | 5000 | 1.3024 | 0.6571 | 0.6571 | 0.6571 | 0.6571 | | 0.8022 | 2.69 | 5500 | 1.2971 | 0.6608 | 0.6608 | 0.6608 | 0.6608 | | 0.834 | 2.93 | 6000 | 1.2900 | 0.6630 | 0.6630 | 0.6630 | 0.6630 | ### Framework versions - Transformers 4.19.0 - Pytorch 1.8.2+cu111 - Datasets 1.6.2 - Tokenizers 0.12.1
2,894
jhonparra18/bert-base-cased-cv-studio_name-medium
[ "Agile Delivery", "Business Hacking", "Cloud Ops", "Data and AI", "Design", "Digital Marketing", "Digital eXperience Platforms", "Enterprise Apps", "Gaming", "Generic", "Process Optimization", "Product Acceleration", "Quality Engineering", "Salesforce", "Scalable Platforms", "Staff Gen...
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-cased-cv-studio_name-medium results: [] widget: - text: "Egresado de la carrera Ingeniería en Computación Conocimientos de lenguajes HTML, CSS, Javascript y MySQL. Experiencia trabajando en ámbitos de redes de pequeña y mediana escala. Inglés Hablado nivel básico, escrito nivel intermedio.HTML, CSS y JavaScript. Realidad aumentada. Lenguaje R. HTML5, JavaScript y Nodejs" - text: "mi nombre es Ivan Ducales Marquez, hago de subpresidente en la republica de Colombia. tengo experiencia en seguir órdenes de mis patrocinadores y repartir los recursos del país a empresarios corruptos" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-cv-studio_name-medium This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.3310 - F1 Micro: 0.6388 - F1 Macro: 0.5001 ## Model description Predicts a studio name based on a CV text ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | F1 Macro | Precision Micro | Recall Micro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|:---------------:|:------------:| | 1.4139 | 0.98 | 1000 | 1.3831 | 0.6039 | 0.6039 | 0.4188 | 0.6039 | 0.6039 | | 1.1561 | 1.96 | 2000 | 1.2386 | 0.6554 | 0.6554 | 0.4743 | 0.6554 | 0.6554 | | 0.9183 | 2.93 | 3000 | 1.2201 | 0.6576 | 0.6576 | 0.5011 | 0.6576 | 0.6576 | | 0.677 | 3.91 | 4000 | 1.3478 | 0.6442 | 0.6442 | 0.5206 | 0.6442 | 0.6442 | | 0.4857 | 4.89 | 5000 | 1.4765 | 0.6393 | 0.6393 | 0.5215 | 0.6393 | 0.6393 | | 0.3318 | 5.87 | 6000 | 1.6924 | 0.6442 | 0.6442 | 0.5024 | 0.6442 | 0.6442 | | 0.2273 | 6.84 | 7000 | 1.8645 | 0.6444 | 0.6444 | 0.5060 | 0.6444 | 0.6444 | | 0.1396 | 7.82 | 8000 | 2.1143 | 0.6381 | 0.6381 | 0.5181 | 0.6381 | 0.6381 | | 0.0841 | 8.8 | 9000 | 2.2699 | 0.6359 | 0.6359 | 0.5065 | 0.6359 | 0.6359 | | 0.0598 | 9.78 | 10000 | 2.3310 | 0.6388 | 0.6388 | 0.5001 | 0.6388 | 0.6388 | ### Framework versions - Transformers 4.19.0 - Pytorch 1.8.2+cu111 - Datasets 1.6.2 - Tokenizers 0.12.1
3,079
acho0057/sentiment_analysis_custom
[ "Negative", "Neutral", "Positive" ]
0
2umm3r/distilbert-base-uncased-finetuned-cola
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5155709926752544 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7816 - Matthews Correlation: 0.5156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5291 | 1.0 | 535 | 0.5027 | 0.4092 | | 0.3492 | 2.0 | 1070 | 0.5136 | 0.4939 | | 0.2416 | 3.0 | 1605 | 0.6390 | 0.5056 | | 0.1794 | 4.0 | 2140 | 0.7816 | 0.5156 | | 0.1302 | 5.0 | 2675 | 0.8836 | 0.5156 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
1,999
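The CoLA card above selects checkpoints by Matthews correlation. As a reference for what that metric computes, here is a minimal framework-free sketch for binary labels (not the `datasets`/`scikit-learn` implementation the Trainer actually uses):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation for binary 0/1 labels: +1 perfect, 0 random, -1 inverted."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0.0 when any confusion-matrix margin is empty
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 0, 1, 0], [1, 0, 1, 0]))  # perfect agreement
```

Unlike plain accuracy, this stays meaningful on the class-imbalanced CoLA acceptability labels.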
Alireza1044/albert-base-v2-mnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model_index: - name: mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metric: name: Accuracy type: accuracy value: 0.8500813669650122 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5383 - Accuracy: 0.8501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
1,371
CleveGreen/JobClassifier
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_100", "LABEL_101", "LABEL_102", "LABEL_103", "LABEL_104", "LABEL_105", "LABEL_106", "LABEL_107", "LABEL_108", "LABEL_109", "LABEL_11", "LABEL_110", "LABEL_111", "LABEL_112", "LABEL_113", "LABEL_114", "LABEL_115", "LABEL_116", "LABEL_...
Entry not found
15
DeadBeast/emoBERTTamil
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tamilmixsentiment metrics: - accuracy model_index: - name: emoBERTTamil results: - task: name: Text Classification type: text-classification dataset: name: tamilmixsentiment type: tamilmixsentiment args: default metric: name: Accuracy type: accuracy value: 0.671 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emoBERTTamil This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tamilmixsentiment dataset. It achieves the following results on the evaluation set: - Loss: 0.9666 - Accuracy: 0.671 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1128 | 1.0 | 250 | 1.0290 | 0.672 | | 1.0226 | 2.0 | 500 | 1.0172 | 0.686 | | 0.9137 | 3.0 | 750 | 0.9666 | 0.671 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
1,556
DeadBeast/mbert-base-cased-finetuned-bengali-fakenews
null
--- language: bengali license: apache-2.0 datasets: - BanFakeNews --- # **mBERT-base-cased-finetuned-bengali-fakenews** This model is a fine-tune checkpoint of mBERT-base-cased over **[Bengali-fake-news Dataset](https://www.kaggle.com/cryptexcode/banfakenews)** for Text classification. This model reaches an accuracy of 96.3 with an f1-score of 79.1 on the dev set. ### **How to use?** **Task**: binary-classification - LABEL_1: Authentic (*Authentic means news is authentic*) - LABEL_0: Fake (*Fake means news is fake*) ``` from transformers import pipeline print(pipeline("sentiment-analysis",model="DeadBeast/mbert-base-cased-finetuned-bengali-fakenews",tokenizer="DeadBeast/mbert-base-cased-finetuned-bengali-fakenews")("অভিনেতা আফজাল শরীফকে ২০ লাখ টাকার অনুদান অসুস্থ অভিনেতা আফজাল শরীফকে চিকিৎসার জন্য ২০ লাখ টাকা অনুদান দিয়েছেন প্রধানমন্ত্রী শেখ হাসিনা।")) ```
874
EnsarEmirali/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.9268984054036417 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2131 - Accuracy: 0.9265 - F1: 0.9269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8031 | 1.0 | 250 | 0.2973 | 0.9125 | 0.9110 | | 0.2418 | 2.0 | 500 | 0.2131 | 0.9265 | 0.9269 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.1 - Datasets 1.16.1 - Tokenizers 0.10.3
1,801
Fengkai/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9385 - name: F1 type: f1 value: 0.9383492808338979 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1495 - Accuracy: 0.9385 - F1: 0.9383 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1739 | 1.0 | 250 | 0.1827 | 0.931 | 0.9302 | | 0.1176 | 2.0 | 500 | 0.1567 | 0.9325 | 0.9326 | | 0.0994 | 3.0 | 750 | 0.1555 | 0.9385 | 0.9389 | | 0.08 | 4.0 | 1000 | 0.1496 | 0.9445 | 0.9443 | | 0.0654 | 5.0 | 1250 | 0.1495 | 0.9385 | 0.9383 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
2,025
Giannipinelli/xlm-roberta-base-finetuned-marc-en
[ "good", "great", "ok", "poor", "terrible" ]
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9161 - Mae: 0.4634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1217 | 1.0 | 235 | 0.9396 | 0.4878 | | 0.9574 | 2.0 | 470 | 0.9161 | 0.4634 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,430
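The marc-en card above reports MAE, which fits its ordinal 0-4 star labels (being off by one star should cost less than being off by four). The metric itself is a one-liner; the label values below are illustrative:

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute distance between predicted and true star ratings."""
    assert len(y_true) == len(y_pred) and y_true, "inputs must be non-empty and aligned"
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Errors of 1, 0, 0 and 2 stars average to 0.75
print(mean_absolute_error([4, 0, 2, 3], [3, 0, 2, 1]))
```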
Hate-speech-CNERG/deoffxlmr-mono-kannada
[ "Not_offensive", "Not_in_intended_language", "Off_target_other", "Off_target_group", "Profanity", "Off_target_ind" ]
--- language: kn license: apache-2.0 --- This model is used to detect **Offensive Content** in **Kannada Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Kannada(pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss. This model is the best of multiple trained for **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm based ensembled test predictions got the second-highest weighted F1 score at the leaderboard (Weighted F1 score on hold out test set: This model - 0.73, Ensemble - 0.74) ### For more details about our paper Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)". ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @inproceedings{saha-etal-2021-hate, title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection", author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh", booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", month = apr, year = "2021", address = "Kyiv", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38", pages = "270--276", abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. 
Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.", } ~~~
2,514
M47Labs/english_news_classification_headlines
[ "arts, culture, entertainment and media", "conflict, war and peace", "crime, law and justice", "disaster, accident and emergency incident", "economy, business and finance", "education", "enviroment", "health", "labour", "lifestyle and leisure", "politics", "religion and belief", "science and...
Entry not found
15
Manishl7/xlm-roberta-large-language-detection
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
Language detection model for Nepali, English, Hindi and Spanish, fine-tuned on xlm-roberta-large.
101
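The language-detection card above exposes only generic `LABEL_0`-`LABEL_3` names and does not document which index is which language. A small post-processing helper can rename standard `text-classification` pipeline output; the `ID2LANG` mapping below is a placeholder assumption that must be checked against the model's actual `id2label` config before use:

```python
# ASSUMPTION: this label-to-language mapping is hypothetical; verify it
# against the model's config.json (id2label) before relying on it.
ID2LANG = {"LABEL_0": "en", "LABEL_1": "es", "LABEL_2": "hi", "LABEL_3": "ne"}

def readable_predictions(pipeline_output, id2lang=ID2LANG):
    """Rename generic LABEL_i names in a transformers text-classification
    pipeline output (a list of {"label": ..., "score": ...} dicts)."""
    return [{"label": id2lang.get(p["label"], p["label"]), "score": p["score"]}
            for p in pipeline_output]

# Example with a pipeline-shaped output (the score is illustrative)
preds = readable_predictions([{"label": "LABEL_2", "score": 0.97}])
print(preds)
```

Unknown labels pass through unchanged, so the helper is safe even if the mapping is incomplete.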
MhF/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9217985126397109 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2232 - Accuracy: 0.9215 - F1: 0.9218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8098 | 1.0 | 250 | 0.3138 | 0.9025 | 0.9001 | | 0.2429 | 2.0 | 500 | 0.2232 | 0.9215 | 0.9218 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
1,807
MohammadABH/bertweet-finetuned-rbam
[ "attack", "neutral", "support" ]
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: bertweet-finetuned-rbam results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertweet-finetuned-rbam This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3971 - F1: 0.6620 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7138 | 1.0 | 1632 | 0.7529 | 0.6814 | | 0.5692 | 2.0 | 3264 | 0.8473 | 0.6803 | | 0.4126 | 3.0 | 4896 | 1.0029 | 0.6617 | | 0.2854 | 4.0 | 6528 | 1.2167 | 0.6635 | | 0.2007 | 5.0 | 8160 | 1.3971 | 0.6620 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
1,545
MoritzLaurer/MiniLM-L6-mnli
[ "contradiction", "entailment", "neutral" ]
--- language: - en tags: - text-classification - zero-shot-classification metrics: - accuracy widget: - text: "I liked the movie. [SEP] The movie was good." --- # MiniLM-L6-mnli ## Model description This model was trained on the [MultiNLI](https://huggingface.co/datasets/multi_nli) dataset. The base model is MiniLM-L6 from Microsoft, which is very fast, but a bit less accurate than other models. ## Intended uses & limitations #### How to use the model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model_name = "MoritzLaurer/MiniLM-L6-mnli" device = "cuda:0" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device) premise = "I liked the movie" hypothesis = "The movie was good." input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(input["input_ids"].to(device)) prediction = torch.softmax(output["logits"][0], -1).tolist() label_names = ["entailment", "neutral", "contradiction"] prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)} print(prediction) ``` ### Training data [MultiNLI](https://huggingface.co/datasets/multi_nli). ### Training procedure MiniLM-L6-mnli was trained using the Hugging Face trainer with the following hyperparameters. ``` training_args = TrainingArguments( num_train_epochs=5, # total number of training epochs learning_rate=2e-05, per_device_train_batch_size=32, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_ratio=0.1, # number of warmup steps for learning rate scheduler weight_decay=0.06, # strength of weight decay fp16=True # mixed precision training ) ``` ### Eval results The model was evaluated using the (matched) test set from MultiNLI.
Accuracy: 0.814 ## Limitations and bias Please consult the original MiniLM paper and literature on different NLI datasets for potential biases. ### BibTeX entry and citation info If you want to cite this model, please cite the original MiniLM paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
2,310
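The MiniLM-L6-mnli snippet above turns raw logits into labeled percentages via `torch.softmax` and a `round(... * 100, 1)` comprehension. That post-processing can be sketched framework-free; the logit values below are hypothetical, not real model output:

```python
import math

def logits_to_label_scores(logits, label_names):
    """Mimic the card's post-processing: numerically stable softmax over
    raw logits, then map each probability to a percentage per label."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for stability
    total = sum(exps)
    return {name: round(e / total * 100, 1) for name, e in zip(label_names, exps)}

# Hypothetical logits for a premise/hypothesis pair like the one in the card
scores = logits_to_label_scores([3.2, -0.5, -1.1],
                                ["entailment", "neutral", "contradiction"])
print(scores)
```

The percentages always sum to roughly 100, and the argmax label is unchanged by the softmax.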
ReynaQuita/twitter_disaster_distilbert
null
Entry not found
15
SetFit/deberta-v3-large__sst2__train-16-8
[ "negative", "positive" ]
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large__sst2__train-16-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-8 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6915 - Accuracy: 0.6579 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7129 | 1.0 | 7 | 0.7309 | 0.2857 | | 0.6549 | 2.0 | 14 | 0.7316 | 0.4286 | | 0.621 | 3.0 | 21 | 0.7131 | 0.5714 | | 0.3472 | 4.0 | 28 | 0.5703 | 0.4286 | | 0.2041 | 5.0 | 35 | 0.6675 | 0.5714 | | 0.031 | 6.0 | 42 | 1.6750 | 0.5714 | | 0.0141 | 7.0 | 49 | 1.8743 | 0.5714 | | 0.0055 | 8.0 | 56 | 1.1778 | 0.5714 | | 0.0024 | 9.0 | 63 | 1.0699 | 0.5714 | | 0.0019 | 10.0 | 70 | 1.0933 | 0.5714 | | 0.0012 | 11.0 | 77 | 1.1218 | 0.7143 | | 0.0007 | 12.0 | 84 | 1.1468 | 0.7143 | | 0.0006 | 13.0 | 91 | 1.1584 | 0.7143 | | 0.0006 | 14.0 | 98 | 1.3092 | 0.7143 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
2,216
TransQuest/monotransquest-hter-de_en-pharmaceutical
[ "LABEL_0" ]
--- language: de-en tags: - Quality Estimation - monotransquest - hter license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment. - Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps. - Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the language pairs experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-de_en-pharmaceutical", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. 
[Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper, which was accepted to [ACL 2021](https://2021.aclweb.org/). ```bibtex @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers, which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bibtex @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bibtex @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
5,415
aXhyra/presentation_emotion_42
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: presentation_emotion_42 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: emotion metrics: - name: F1 type: f1 value: 0.732897530282475 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # presentation_emotion_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.0989 - F1: 0.7329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.18796906442746e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3703 | 1.0 | 408 | 0.6624 | 0.7029 | | 0.2122 | 2.0 | 816 | 0.6684 | 0.7258 | | 0.9452 | 3.0 | 1224 | 1.0001 | 0.7041 | | 0.0023 | 4.0 | 1632 | 1.0989 | 0.7329 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
1,772
annafavaro/bert-base-uncased-finetuned-addresso
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-addresso results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-addresso This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.12.5 - Pytorch 1.8.1 - Datasets 1.15.1 - Tokenizers 0.10.3
1,037
anthonymirand/haha_2019_adaptation_task
[ "LABEL_0" ]
Entry not found
15
benjaminbeilharz/bert-base-uncased-empatheticdialogues-sentiment-classifier
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29",...
--- dataset: empathetic_dialogues ---
38
beomi/beep-KcELECTRA-base-bias
[ "gender", "none", "others" ]
Entry not found
15
chgk13/tiny_russian_toxic_bert
[ "neutral", "toxic" ]
Entry not found
15
cscottp27/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9232542847906783 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2175 - Accuracy: 0.923 - F1: 0.9233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8352 | 1.0 | 250 | 0.3079 | 0.91 | 0.9086 | | 0.247 | 2.0 | 500 | 0.2175 | 0.923 | 0.9233 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,805
dhtocks/tunib-electra-stereotype-classifier
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6" ]
### TUNiB-Electra Stereotype Detector Finetuned TUNiB-Electra base with K-StereoSet. Original Code: https://github.com/newfull5/Stereotype-Detector
149
diwank/silicone-deberta-pair
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
--- license: mit --- # diwank/silicone-deberta-pair `deberta-base`-based dialog acts classifier. Trained on the `balanced` variant of the [silicone-merged](https://huggingface.co/datasets/diwank/silicone-merged) dataset: a simplified merged dialog act data from datasets in the [silicone](https://huggingface.co/datasets/silicone) collection. Takes two sentences as inputs (one previous and one current utterance of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. **Outputs one of 11 labels**: ```python (0, 'acknowledge') (1, 'answer') (2, 'backchannel') (3, 'reply_yes') (4, 'exclaim') (5, 'say') (6, 'reply_no') (7, 'hold') (8, 'ask') (9, 'intent') (10, 'ask_yes_no') ``` ## Example: ```python from simpletransformers.classification import ( ClassificationModel, ClassificationArgs ) model = ClassificationModel("deberta", "diwank/silicone-deberta-pair") convert_to_label = lambda n: [ ['acknowledge', 'answer', 'backchannel', 'reply_yes', 'exclaim', 'say', 'reply_no', 'hold', 'ask', 'intent', 'ask_yes_no' ][i] for i in n ] predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]]) convert_to_label(predictions) # answer ``` ## Report from W&B https://wandb.ai/diwank/da-silicone-combined/reports/silicone-deberta-pair--VmlldzoxNTczNjE5?accessToken=yj1jz4c365z0y5b3olgzye7qgsl7qv9lxvqhmfhtb6300hql6veqa5xiq1skn8ys
1,552
dmiller1/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9261144741040841 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2161 - Accuracy: 0.926 - F1: 0.9261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8436 | 1.0 | 250 | 0.3175 | 0.9105 | 0.9081 | | 0.2492 | 2.0 | 500 | 0.2161 | 0.926 | 0.9261 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.7.1 - Datasets 1.17.0 - Tokenizers 0.10.3
1,798
emrecan/bert-base-turkish-cased-snli_tr
[ "contradiction", "entailment", "neutral" ]
--- language: - tr tags: - zero-shot-classification - nli - pytorch pipeline_tag: zero-shot-classification license: apache-2.0 datasets: - nli_tr widget: - text: "Dolar yükselmeye devam ediyor." candidate_labels: "ekonomi, siyaset, spor" - text: "Senaryo çok saçmaydı, beğendim diyemem." candidate_labels: "olumlu, olumsuz" ---
332
justinqbui/bertweet-covid-vaccine-tweets-finetuned
[ "false", "misleading", "true" ]
--- tags: model-index: - name: bertweet-covid--vaccine-tweets-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets This model is a fine-tuned version of [justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets), fine-tuned using [this Google Fact Check dataset](https://huggingface.co/datasets/justinqbui/covid_fact_checked_google_api) (~3k examples), web-scraped data from [PolitiFact covid info](https://huggingface.co/datasets/justinqbui/covid_fact_checked_polifact) (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine. It achieves the following results on the evaluation set (20% of the dataset, randomly shuffled and selected to serve as a test set): - Validation Loss: 0.267367 - Accuracy: 91.1370% To use the model, use the inference API. Alternatively, to run locally: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned") model = AutoModelForSequenceClassification.from_pretrained("justinqbui/bertweet-covid-vaccine-tweets-finetuned") ``` ## Model description This model is a fine-tuned version of the pretrained [justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets). Click on [this](https://huggingface.co/justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets) to see how the pre-training was done. This model was fine-tuned with a dataset of ~5,500 examples.
A web scraper was used to scrape PolitiFact and a script was used to pull from the Google Fact Check API. Because ~80% of both these datasets were either false or misleading, I pulled about ~1,200 tweets from the CDC related to covid and labelled them as true. ~30% of this dataset is considered true and the rest false or misleading. Please see the published datasets above for more detailed information. The tokenizer requires the emoji library to be installed. ``` !pip install nltk emoji ``` ## Intended uses & limitations The intended use of this model is to detect whether the contents of a covid tweet are potentially false or misleading. This model is not an end-all be-all. It has many limitations. For example, if someone makes a post whose text is accurate but attaches a satirical image, this model would not be able to distinguish this. If a user links a website, the tokenizer allocates a special token for links, meaning the contents of the linked website are completely lost. If someone tweets a reply, this model can't look at the parent tweets, and will lack context. This model's dataset relies on the crowd-sourced annotations being accurate. This data is only accurate up until early December 2021. For example, it probably wouldn't do very well with tweets regarding the new omicron variant. Example true inputs: ``` Covid vaccines are safe and effective. -> 97% true Vaccinations are safe and help prevent covid. -> 97% true ``` Example false inputs: ``` Covid vaccines will kill you. -> 97% false covid vaccines make you infertile. -> 97% false ``` ## Training and evaluation data This model was fine-tuned using [this Google Fact Check dataset](https://huggingface.co/datasets/justinqbui/covid_fact_checked_google_api) (~3k examples), web-scraped data from [PolitiFact covid info](https://huggingface.co/datasets/justinqbui/covid_fact_checked_polifact) (~1,200 examples), and ~1,200 tweets pulled from the CDC containing the words covid or vaccine.
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-5 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Validation Loss | Accuracy | |:-------------:|:-----:|:---------------:|:--------:| | 0.435500 | 1.0 | 0.401900 | 0.906893 | | 0.309700 | 2.0 | 0.265500 | 0.907789 | | 0.266200 | 3.0 | 0.216500 | 0.911370 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
4,609
l3cube-pune/hate-multi-roberta-hasoc-hindi
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
--- language: hi tags: - roberta license: cc-by-4.0 datasets: - HASOC 2021 widget: - text: "I like you. </s></s> I love you." --- ## hate-roberta-hasoc-hindi hate-roberta-hasoc-hindi is a multi-class hate speech model fine-tuned on the Hindi HASOC Hate Speech Dataset 2021. The label mappings are 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane. More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200). ``` @article{velankar2021hate, title={Hate and Offensive Speech Detection in Hindi and Marathi}, author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj}, journal={arXiv preprint arXiv:2110.12200}, year={2021} } ```
742
lewtun/bert-base-uncased-finetuned-boolq
null
Entry not found
15
mrm8488/RuPERTa-base-finetuned-pawsx-es
null
--- language: es datasets: - xtreme tags: - nli widget: - text: "En 2009 se mudó a Filadelfia y en la actualidad vive en Nueva York. Se mudó nuevamente a Filadelfia en 2009 y ahora vive en la ciudad de Nueva York." --- # RuPERTa-base fine-tuned on PAWS-X-es for Paraphrase Identification (NLI)
295
mrm8488/camembert-base-finetuned-movie-review-sentiment-analysis
null
Entry not found
15
pertschuk/albert-base-squad-classifier-ms
null
Entry not found
15
pertschuk/albert-base-squad-classifier
null
Entry not found
15
severo/autonlp-sentiment_detection-1781580
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - severo/autonlp-data-sentiment_detection-3c8bcd36 --- # Model Trained Using AutoNLP _debug - I want to update this model_ - Problem type: Binary Classification - Model ID: 1781580 ## Validation Metrics - Loss: 0.16026505827903748 - Accuracy: 0.9426 - Precision: 0.9305057745917961 - Recall: 0.95406288280931 - AUC: 0.9861051024994563 - F1: 0.9421370967741935 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/severo/autonlp-sentiment_detection-1781580 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("severo/autonlp-sentiment_detection-1781580", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("severo/autonlp-sentiment_detection-1781580", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
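The cURL call above can also be reproduced with only the Python standard library. A minimal sketch (the endpoint and model ID are taken from the card; `YOUR_API_KEY` is a placeholder, so the request is built but not sent here):

```python
import json
from urllib import request

API_URL = ("https://api-inference.huggingface.co/models/"
           "severo/autonlp-sentiment_detection-1781580")

def build_request(text: str, api_key: str) -> request.Request:
    # Same JSON body and headers as the cURL example above.
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("I love AutoNLP", "YOUR_API_KEY")
# request.urlopen(req) would actually send it; a real API key is required.
print(json.loads(req.data))  # {'inputs': 'I love AutoNLP'}
```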
1,134
textattack/xlnet-large-cased-CoLA
null
Entry not found
15
unideeplearning/polibert_sa
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: it tags: - sentiment - Italian license: mit widget: - text: Giuseppe Rossi è un ottimo politico --- # 🤗 + polibert_SA - POLItic BERT based Sentiment Analysis ## Model description This model performs sentiment analysis on Italian political tweets. It was trained starting from an instance of "bert-base-italian-uncased-xxl" and fine-tuned on an Italian dataset of tweets. You can try it out at https://www.unideeplearning.com/twitter_sa/ (in Italian!) #### Hands-on ```python import torch from torch import nn from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("unideeplearning/polibert_sa") model = AutoModelForSequenceClassification.from_pretrained("unideeplearning/polibert_sa") text = "Giuseppe Rossi è un pessimo politico" input_ids = tokenizer.encode(text, add_special_tokens=True, return_tensors='pt') logits = model(input_ids).logits # transformers >= 4 returns a ModelOutput; older versions returned a tuple logits = logits.squeeze(0) prob = nn.functional.softmax(logits, dim=0) # 0 Negative, 1 Neutral, 2 Positive print(prob.argmax().tolist()) ``` #### Hyperparameters - Optimizer: **AdamW** with learning rate of **2e-5**, epsilon of **1e-8** - Max epochs: **2** - Batch size: **16** ## Acknowledgments Thanks for the support from [Hugging Face](https://huggingface.co/), https://www.unioneprofessionisti.com and https://www.unideeplearning.com/
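The softmax-then-argmax step in the snippet above can be illustrated standalone. A stdlib-only sketch with hypothetical logits, mirroring what `nn.functional.softmax(...).argmax()` computes:

```python
import math

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for (0 Negative, 1 Neutral, 2 Positive)
probs = softmax([2.1, 0.3, -1.2])
print(probs.index(max(probs)))  # 0, i.e. Negative
```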
1,407
warwickai/fin-perceiver
[ "negative", "neutral", "positive" ]
--- language: "en" license: apache-2.0 tags: - financial-sentiment-analysis - sentiment-analysis - language-perceiver datasets: - financial_phrasebank widget: - text: "INDEX100 fell sharply today." - text: "ImaginaryJetCo bookings hit by Omicron variant as losses total £1bn." - text: "Q1 ImaginaryGame's earnings beat expectations." - text: "Should we buy IMAGINARYSTOCK today?" metrics: - recall - f1 - accuracy - precision model-index: - name: fin-perceiver results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank args: sentences_50agree metrics: - name: Accuracy type: accuracy value: 0.8624 - name: F1 type: f1 value: 0.8416 args: macro - name: Precision type: precision value: 0.8438 args: macro - name: Recall type: recall value: 0.8415 args: macro --- # FINPerceiver FINPerceiver is a fine-tuned Perceiver IO language model for financial sentiment analysis. More details on the training process of this model are available on the [GitHub repository](https://github.com/warwickai/fin-perceiver). Weights & Biases was used to track experiments. We achieved the following results with 10-fold cross validation. ``` eval/accuracy 0.8624 (stdev 0.01922) eval/f1 0.8416 (stdev 0.03738) eval/loss 0.4314 (stdev 0.05295) eval/precision 0.8438 (stdev 0.02938) eval/recall 0.8415 (stdev 0.04458) ``` The hyperparameters used are as follows. ``` per_device_train_batch_size 16 per_device_eval_batch_size 16 num_train_epochs 4 learning_rate 2e-5 ``` ## Datasets This model was trained on the Financial PhraseBank (>= 50% agreement)
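The figures above are means and standard deviations over the 10 cross-validation folds. A sketch of that aggregation with hypothetical per-fold accuracies (illustrative only; the card publishes just the aggregates):

```python
import statistics

# Hypothetical per-fold accuracies -- the card reports only the
# aggregate: eval/accuracy 0.8624 (stdev 0.01922).
fold_accuracies = [0.85, 0.88, 0.84, 0.87, 0.86, 0.89, 0.83, 0.87, 0.86, 0.87]

mean_acc = statistics.mean(fold_accuracies)
stdev_acc = statistics.stdev(fold_accuracies)  # sample stdev (n - 1 denominator)
print(f"eval/accuracy {mean_acc:.4f} (stdev {stdev_acc:.5f})")
```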
1,781
yoshitomo-matsubara/bert-large-uncased-wnli
null
--- language: en tags: - bert - wnli - glue - torchdistill license: apache-2.0 datasets: - wnli metrics: - accuracy --- `bert-large-uncased` fine-tuned on the WNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb). The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/ce/bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
828
microsoft/tapex-base-finetuned-tabfact
[ "Entailed", "Refused" ]
--- language: en tags: - tapex datasets: - tab_fact license: mit --- # TAPEX (base-sized model) TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining). ## Model description TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries. TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. This model is the `tapex-base` model fine-tuned on the [Tabfact](https://huggingface.co/datasets/tab_fact) dataset. ## Intended Uses You can use the model for table fact verification. ### How to Use Here is how to use this model in transformers: ```python from transformers import TapexTokenizer, BartForSequenceClassification import pandas as pd tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-tabfact") model = BartForSequenceClassification.from_pretrained("microsoft/tapex-base-finetuned-tabfact") data = { "year": [1896, 1900, 1904, 2004, 2008, 2012], "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"] } table = pd.DataFrame.from_dict(data) # tapex accepts uncased input since it is pre-trained on the uncased corpus query = "beijing hosts the olympic games in 2012" encoding = tokenizer(table=table, query=query, return_tensors="pt") outputs = model(**encoding) output_id = int(outputs.logits[0].argmax(dim=0)) print(model.config.id2label[output_id]) # Refused ``` ### How to Eval Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex). ### BibTeX entry and citation info ```bibtex @inproceedings{ liu2022tapex, title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor}, author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=O50443AsCP} } ```
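Before encoding, `TapexTokenizer` flattens the table into a text sequence, roughly of the form `col : h1 | h2 row 1 : v11 | v12 ...`. A minimal stdlib sketch of that layout, for intuition only (an approximation of, not the tokenizer's exact, output):

```python
def linearize_table(data: dict) -> str:
    # Sketch of TAPEX-style table flattening: headers first, then each row,
    # with cells separated by "|". The real tokenizer also handles casing
    # and subword tokenization itself.
    headers = list(data.keys())
    out = "col : " + " | ".join(headers)
    for i in range(len(next(iter(data.values())))):
        row = [str(data[h][i]) for h in headers]
        out += f" row {i + 1} : " + " | ".join(row)
    return out

table = {"year": [2008, 2012], "city": ["beijing", "london"]}
print(linearize_table(table))
# col : year | city row 1 : 2008 | beijing row 2 : 2012 | london
```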
2,546
cnu/distilbert-base-uncased-finetuned-cola
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5474713423103301 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8651 - Matthews Correlation: 0.5475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5233 | 1.0 | 535 | 0.5353 | 0.4004 | | 0.3497 | 2.0 | 1070 | 0.5165 | 0.5076 | | 0.2386 | 3.0 | 1605 | 0.6661 | 0.5161 | | 0.1745 | 4.0 | 2140 | 0.7730 | 0.5406 | | 0.1268 | 5.0 | 2675 | 0.8651 | 0.5475 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.6
1,994
nickmuchi/sec-bert-finetuned-finance-classification
[ "bearish", "bullish", "neutral" ]
--- license: cc-by-sa-4.0 tags: - financial-sentiment-analysis - sentiment-analysis - sentence_50agree - generated_from_trainer - financial - stocks - sentiment datasets: - financial_phrasebank - Kaggle Self label - nickmuchi/financial-classification metrics: - accuracy - f1 - precision - recall widget: - text: "The USD rallied by 10% last night" example_title: "Bullish Sentiment" - text: "Covid-19 cases have been increasing over the past few months impacting earnings for global firms" example_title: "Bearish Sentiment" - text: "the USD has been trending lower" example_title: "Mildly Bearish Sentiment" model-index: - name: sec-bert-finetuned-finance-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sec-bert-finetuned-finance-classification This model is a fine-tuned version of [nlpaueb/sec-bert-base](https://huggingface.co/nlpaueb/sec-bert-base) on the sentence_50Agree [financial-phrasebank + Kaggle Dataset](https://huggingface.co/datasets/nickmuchi/financial-classification), a dataset consisting of 4840 Financial News categorised by sentiment (negative, neutral, positive). The Kaggle dataset includes Covid-19 sentiment data and can be found here: [sentiment-classification-selflabel-dataset](https://www.kaggle.com/percyzheng/sentiment-classification-selflabel-dataset). 
It achieves the following results on the evaluation set: - Loss: 0.5277 - Accuracy: 0.8755 - F1: 0.8744 - Precision: 0.8754 - Recall: 0.8755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.6005 | 0.99 | 71 | 0.3702 | 0.8478 | 0.8465 | 0.8491 | 0.8478 | | 0.3226 | 1.97 | 142 | 0.3172 | 0.8834 | 0.8822 | 0.8861 | 0.8834 | | 0.2299 | 2.96 | 213 | 0.3313 | 0.8814 | 0.8805 | 0.8821 | 0.8814 | | 0.1277 | 3.94 | 284 | 0.3925 | 0.8775 | 0.8771 | 0.8770 | 0.8775 | | 0.0764 | 4.93 | 355 | 0.4517 | 0.8715 | 0.8704 | 0.8717 | 0.8715 | | 0.0533 | 5.92 | 426 | 0.4851 | 0.8735 | 0.8728 | 0.8731 | 0.8735 | | 0.0363 | 6.9 | 497 | 0.5107 | 0.8755 | 0.8743 | 0.8757 | 0.8755 | | 0.0248 | 7.89 | 568 | 0.5277 | 0.8755 | 0.8744 | 0.8754 | 0.8755 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
3,159
cloudblack/bert-base-finetuned-sts
[ "LABEL_0" ]
Entry not found
15
clapika2010/flights_finetuned
null
Entry not found
15