Dataset schema — one record per model, four columns:

- `modelId`: string (lengths 6–107)
- `label`: list of classifier label names, or null
- `readme`: string (lengths 0–56.2k); models without a card carry the placeholder "Entry not found"
- `readme_len`: int64 (0–56.2k), the character length of `readme`

The records follow, one block per model.
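Read as a typed record, a row of this dump looks roughly like the sketch below. The class name and Python types are my own framing; only the four field names come from the schema above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelCardRecord:  # hypothetical name, not part of the dataset
    modelId: str                 # e.g. "connectivity/feather_berts_98"
    label: Optional[List[str]]   # classifier label names, or None
    readme: str                  # raw model-card markdown, or "Entry not found"
    readme_len: int              # character length of `readme`
```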
**connectivity/feather_berts_98**
- label: `[ "LABEL_0", "LABEL_1", "LABEL_2" ]`
- readme: Entry not found
- readme_len: 15
The following 93 models all carry identical placeholder metadata — `label: null`, `readme: "Entry not found"`, `readme_len: 15` — and are listed here in dataset order, with the shared `connectivity/` namespace factored out:

bert_ft_qqp-18, bert_ft_qqp-19, bert_ft_qqp-20, bert_ft_qqp-21, bert_ft_qqp-23, bert_ft_qqp-25, bert_ft_qqp-26, bert_ft_qqp-27, bert_ft_qqp-29, bert_ft_qqp-32, bert_ft_qqp-33, bert_ft_qqp-34, bert_ft_qqp-37, bert_ft_qqp-38, bert_ft_qqp-39, bert_ft_qqp-41, bert_ft_qqp-42, bert_ft_qqp-46, bert_ft_qqp-47, bert_ft_qqp-49, bert_ft_qqp-50, bert_ft_qqp-51, bert_ft_qqp-52, bert_ft_qqp-53, bert_ft_qqp-54, bert_ft_qqp-55, bert_ft_qqp-56, bert_ft_qqp-57, bert_ft_qqp-58, bert_ft_qqp-59, bert_ft_qqp-60, bert_ft_qqp-61, bert_ft_qqp-62, bert_ft_qqp-63, bert_ft_qqp-64, bert_ft_qqp-65, bert_ft_qqp-67, bert_ft_qqp-68, bert_ft_qqp-69, bert_ft_qqp-71, bert_ft_qqp-72, bert_ft_qqp-73, cola_6ep_ft-1, bert_ft_qqp-74, cola_6ep_ft-2, cola_6ep_ft-3, cola_6ep_ft-4, cola_6ep_ft-5, cola_6ep_ft-6, cola_6ep_ft-7, bert_ft_qqp-76, cola_6ep_ft-8, cola_6ep_ft-9, cola_6ep_ft-10, cola_6ep_ft-11, cola_6ep_ft-12, cola_6ep_ft-13, cola_6ep_ft-14, cola_6ep_ft-15, bert_ft_qqp-77, cola_6ep_ft-16, cola_6ep_ft-17, cola_6ep_ft-18, cola_6ep_ft-19, cola_6ep_ft-20, cola_6ep_ft-21, cola_6ep_ft-22, cola_6ep_ft-23, bert_ft_qqp-78, cola_6ep_ft-24, cola_6ep_ft-25, cola_6ep_ft-26, cola_6ep_ft-27, cola_6ep_ft-28, cola_6ep_ft-29, cola_6ep_ft-30, cola_6ep_ft-31, bert_ft_qqp-79, cola_6ep_ft-32, cola_6ep_ft-33, cola_6ep_ft-34, cola_6ep_ft-35, cola_6ep_ft-36, cola_6ep_ft-37, bert_ft_qqp-82, bert_ft_qqp-84, bert_ft_qqp-87, bert_ft_qqp-88, bert_ft_qqp-89, bert_ft_qqp-93, bert_ft_qqp-94, bert_ft_qqp-95, bert_ft_qqp-99.
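Since `readme_len` is just the character count of `readme` (note that `len("Entry not found") == 15`, which is why 15 repeats above), placeholder rows are easy to drop programmatically. A minimal pandas sketch, assuming the dump has been exported as JSON Lines — the file name is hypothetical:

```python
import pandas as pd

# Assumed export of this dump as JSON Lines; the path is hypothetical.
df = pd.read_json("models.jsonl", lines=True)

# Placeholder rows carry no card: readme is the literal string "Entry not found".
has_card = df["readme"].ne("Entry not found") & df["readme_len"].gt(0)
cards = df[has_card]

print(f"{len(cards)} of {len(df)} models have a README")
```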
**joaobarroca/distilbert-base-uncased-finetuned-massive-intent-detection-english**
- label: `[ "alarm_query", "alarm_remove", "alarm_set", "audio_volume_down", "audio_volume_mute", "audio_volume_other", "audio_volume_up", "calendar_query", "calendar_remove", "calendar_set", "cooking_query", "cooking_recipe", "datetime_convert", "datetime_query", "email_addcontact", "email_query"...` (list truncated in the source)
- readme_len: 2,122
- readme:

> ---
> license: apache-2.0
> tags:
> - generated_from_trainer
> datasets:
> - massive
> metrics:
> - accuracy
> model-index:
> - name: distilbert-base-uncased-finetuned-massive-intent-detection-english
>   results:
>   - task:
>       name: Text Classification
>       type: text-classification
>     dataset:
>       name: massive
>       type: massive
>       args: en-US
>     metrics:
>     - name: Accuracy
>       type: accuracy
>       value: 0.886684599865501
> ---
>
> # distilbert-base-uncased-finetuned-massive-intent-detection-english
>
> This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the massive dataset. It achieves the following results on the evaluation set:
> - Loss: 0.4873
> - Accuracy: 0.8867
>
> ## Model description
> More information needed
>
> ## Intended uses & limitations
> More information needed
>
> ## Training and evaluation data
> More information needed
>
> ## Training procedure
>
> ### Training hyperparameters
> The following hyperparameters were used during training:
> - learning_rate: 2e-05
> - train_batch_size: 32
> - eval_batch_size: 32
> - seed: 42
> - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
> - lr_scheduler_type: linear
> - num_epochs: 10
>
> ### Training results
> | Training Loss | Epoch | Step | Validation Loss | Accuracy |
> |:-------------:|:-----:|:----:|:---------------:|:--------:|
> | 2.5849 | 1.0 | 360 | 1.3826 | 0.7359 |
> | 1.0662 | 2.0 | 720 | 0.7454 | 0.8357 |
> | 0.5947 | 3.0 | 1080 | 0.5668 | 0.8642 |
> | 0.3824 | 4.0 | 1440 | 0.5007 | 0.8770 |
> | 0.2649 | 5.0 | 1800 | 0.4829 | 0.8824 |
> | 0.1877 | 6.0 | 2160 | 0.4843 | 0.8824 |
> | 0.1377 | 7.0 | 2520 | 0.4858 | 0.8834 |
> | 0.1067 | 8.0 | 2880 | 0.4924 | 0.8864 |
>
> ### Framework versions
> - Transformers 4.19.2
> - Pytorch 1.11.0+cu113
> - Datasets 2.2.2
> - Tokenizers 0.12.1
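The card above pins down both the task (text classification over MASSIVE intents) and the checkpoint id, so inference reduces to a one-liner. A minimal sketch, assuming the model is still downloadable from the Hugging Face Hub; the example utterance is mine:

```python
from transformers import pipeline

# Model id taken from the card above; the weights download on first use.
classifier = pipeline(
    "text-classification",
    model="joaobarroca/distilbert-base-uncased-finetuned-massive-intent-detection-english",
)

# Hypothetical utterance; the model predicts one of the MASSIVE intent labels.
print(classifier("wake me up at seven tomorrow morning"))
# e.g. [{'label': 'alarm_set', 'score': ...}]
```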
**joebobby/finetuning-sentiment-model-5000-samples**
- label: `[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4" ]`
- readme_len: 1,700
- readme:

> ---
> license: apache-2.0
> tags:
> - generated_from_trainer
> metrics:
> - accuracy
> - f1
> model-index:
> - name: finetuning-sentiment-model-5000-samples
>   results: []
> ---
>
> # finetuning-sentiment-model-5000-samples
>
> This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
> - Loss: 1.0701
> - Accuracy: 0.758
> - F1: 0.7580
>
> ## Model description
> More information needed
>
> ## Intended uses & limitations
> More information needed
>
> ## Training and evaluation data
> More information needed
>
> ## Training procedure
>
> ### Training hyperparameters
> The following hyperparameters were used during training:
> - learning_rate: 2e-05
> - train_batch_size: 16
> - eval_batch_size: 16
> - seed: 42
> - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
> - lr_scheduler_type: linear
> - num_epochs: 5
>
> ### Training results
> | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
> |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
> | No log | 1.0 | 313 | 1.0216 | 0.744 | 0.744 |
> | 0.2263 | 2.0 | 626 | 1.0701 | 0.758 | 0.7580 |
> | 0.2263 | 3.0 | 939 | 1.3097 | 0.723 | 0.723 |
> | 0.1273 | 4.0 | 1252 | 1.4377 | 0.743 | 0.743 |
> | 0.051 | 5.0 | 1565 | 1.4884 | 0.739 | 0.739 |
>
> ### Framework versions
> - Transformers 4.19.2
> - Pytorch 1.11.0+cu113
> - Datasets 2.2.2
> - Tokenizers 0.12.1
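In the table above, validation loss rises steadily after the first epoch while the headline numbers quote the epoch-2 checkpoint, where F1 peaks — a classic overfitting pattern. A hedged sketch of the reported hyperparameters expressed as `TrainingArguments`, with best-checkpoint selection added on top; the output directory, the evaluation/save strategies, and the metric choice are my assumptions, since the card does not state them:

```python
from transformers import TrainingArguments

# Hyperparameters copied from the card; best-model selection is an assumption.
args = TrainingArguments(
    output_dir="finetuning-sentiment-model-5000-samples",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",   # evaluate once per epoch
    save_strategy="epoch",         # checkpoint once per epoch
    load_best_model_at_end=True,   # restore the best checkpoint after training
    metric_for_best_model="f1",    # assumed; any reported metric would do
)
```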
**aliosm/sha3bor-footer-101-arabertv02-base**
- label: `[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_100", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28"...` (list truncated in the source)
- readme_len: 152
- readme:

> ---
> language: ar
> license: mit
> widget:
> - text: "إن العيون التي في طرفها حور"
> - text: "إذا ما فعلت الخير ضوعف شرهم"
> - text: "واحر قلباه ممن قلبه شبم"
> ---
**Abdelrahman-Rezk/bert-base-arabic-camelbert-mix-poetry-finetuned-qawaf2**
- label: `[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29",...` (list truncated in the source)
- readme: Entry not found
- readme_len: 15
**PDRES/roberta-base-bne-finetuned-amazon_reviews_multi**
- label: null
- readme_len: 1,130
- readme:

> ---
> license: apache-2.0
> tags:
> - generated_from_trainer
> datasets:
> - amazon_reviews_multi
> model-index:
> - name: roberta-base-bne-finetuned-amazon_reviews_multi
>   results: []
> ---
>
> # roberta-base-bne-finetuned-amazon_reviews_multi
>
> This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
>
> ## Model description
> More information needed
>
> ## Intended uses & limitations
> More information needed
>
> ## Training and evaluation data
> More information needed
>
> ## Training procedure
>
> ### Training hyperparameters
> The following hyperparameters were used during training:
> - learning_rate: 2e-05
> - train_batch_size: 16
> - eval_batch_size: 16
> - seed: 42
> - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
> - lr_scheduler_type: linear
> - num_epochs: 2
>
> ### Framework versions
> - Transformers 4.19.2
> - Pytorch 1.11.0+cu113
> - Datasets 2.2.2
> - Tokenizers 0.12.1
**GioReg/dbmdzBERTnews**
- label: null
- readme_len: 1,171
- readme:

> ---
> license: mit
> tags:
> - generated_from_trainer
> metrics:
> - accuracy
> - f1
> model-index:
> - name: dbmdzBERTnews
>   results: []
> ---
>
> # dbmdzBERTnews
>
> This model is a fine-tuned version of [dbmdz/bert-base-italian-uncased](https://huggingface.co/dbmdz/bert-base-italian-uncased) on the None dataset. It achieves the following results on the evaluation set:
> - Loss: 0.0960
> - Accuracy: 0.9733
> - F1: 0.9730
>
> ## Model description
> More information needed
>
> ## Intended uses & limitations
> More information needed
>
> ## Training and evaluation data
> More information needed
>
> ## Training procedure
>
> ### Training hyperparameters
> The following hyperparameters were used during training:
> - learning_rate: 2e-05
> - train_batch_size: 16
> - eval_batch_size: 16
> - seed: 42
> - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
> - lr_scheduler_type: linear
> - num_epochs: 2
>
> ### Training results
>
> ### Framework versions
> - Transformers 4.19.2
> - Pytorch 1.11.0+cu113
> - Datasets 2.2.2
> - Tokenizers 0.12.1
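Several of the cards above report accuracy and F1 values that nearly coincide (0.758 vs. 0.7580, 0.9733 vs. 0.9730), which is consistent with a weighted-average F1. A sketch of a `compute_metrics` function in the style these Trainer-generated cards imply — the function itself is my assumption and is not taken from any of the cards:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Hypothetical Trainer-style metrics: accuracy plus weighted F1."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # predicted class per example
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }
```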