modelId | label | readme | readme_len |
|---|---|---|---|
Jeevesh8/feather_berts_44 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_45 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_46 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_47 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_48 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_49 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_50 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_51 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_52 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_53 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_54 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_55 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_56 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_57 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_58 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_59 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_60 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_61 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_62 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_63 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_64 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_65 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_66 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_68 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_69 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_70 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_71 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_72 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_74 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_76 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_78 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_79 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_80 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_81 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_82 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_83 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_84 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_86 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_87 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_88 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_89 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_90 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_91 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_92 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_94 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_95 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_96 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_97 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_98 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/feather_berts_99 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Parsa/Buchwald-Hartwig-Yield-prediction | [
"LABEL_0"
] | Buchwald-Hartwig-Yield-prediction is a fine-tuned model based on 'DeepChem/ChemBERTa-77M-MLM' for yield prediction.
For training and testing, the data from 'https://tdcommons.ai/single_pred_tasks/yields' was used, with a 70/30 random split into train and test sets.
The R2 score is 97.2879% and the validation loss is 0.0020.
Your input should look like the following: 'reactant smiles'>>'product', with no spaces. Do not use the Hosted Inference API; instead, download the model yourself or use the Colab link below.
[Open in Colab](https://colab.research.google.com/drive/1UyQwPaHmH5BiEa0yZyuZPmMsVi-hIms0#scrollTo=DKy4QptyYTqz)
Github repo: https://github.com/mephisto121/Buchwald-Hartwig-Yield-prediction
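A minimal local-inference sketch (assuming the checkpoint loads as a single-output sequence-classification head used for regression, as the lone LABEL_0 suggests; the reaction SMILES below is only illustrative):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Parsa/Buchwald-Hartwig-Yield-prediction"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)  # one output head (LABEL_0)

# Input format: 'reactant smiles'>>'product' with no spaces (this reaction is illustrative)
reaction = "CCOC(=O)c1ccc(Br)cc1.CN>>CCOC(=O)c1ccc(NC)cc1"
inputs = tokenizer(reaction, return_tensors="pt")
predicted_yield = model(**inputs).logits.item()  # raw regression output = predicted yield
print(predicted_yield)
```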
| 808 |
bdickson/distilbert-base-uncased-finetuned-cola | null | Entry not found | 15 |
anshr/distilgpt2_reward_model_01 | null | Entry not found | 15 |
crcb/carer_new | [
"anger",
"fear",
"sadness",
"surprise"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-carer_new
co2_eq_emissions: 3.9861818439722594
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 781623992
- CO2 Emissions (in grams): 3.9861818439722594
## Validation Metrics
- Loss: 0.1639203429222107
- Accuracy: 0.9389179755671903
- Macro F1: 0.9055551236566716
- Micro F1: 0.9389179755671903
- Weighted F1: 0.9379300009988988
- Macro Precision: 0.9466951148514304
- Micro Precision: 0.9389179755671903
- Weighted Precision: 0.9435523016000105
- Macro Recall: 0.8818551804621082
- Micro Recall: 0.9389179755671903
- Weighted Recall: 0.9389179755671903
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-carer_new-781623992
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-carer_new-781623992", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-carer_new-781623992", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
# The argmax over the logits gives the predicted emotion label
print(model.config.id2label[outputs.logits.argmax(-1).item()])
``` | 1,377 |
MatthewAlanPow1/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5421747077088894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7994
- Matthews Correlation: 0.5422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
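These settings map onto the 🤗 `TrainingArguments` roughly as follows (a sketch, not the exact training script; `output_dir` is illustrative and the Adam betas/epsilon shown are the library defaults):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```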
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.42 | 1.0 | 535 | 0.4631 | 0.5242 |
| 0.2823 | 2.0 | 1070 | 0.5755 | 0.5056 |
| 0.1963 | 3.0 | 1605 | 0.6767 | 0.5478 |
| 0.1441 | 4.0 | 2140 | 0.7742 | 0.5418 |
| 0.1069 | 5.0 | 2675 | 0.7994 | 0.5422 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,999 |
anshr/distilgpt2_reward_model_03 | null | Entry not found | 15 |
anshr/distilgpt2_reward_model_04 | null | Entry not found | 15 |
crcb/carer_5way | [
"0",
"1",
"2",
"3",
"4"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-carer_5way
co2_eq_emissions: 4.164757528958762
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 786524275
- CO2 Emissions (in grams): 4.164757528958762
## Validation Metrics
- Loss: 0.16724252700805664
- Accuracy: 0.944234404536862
- Macro F1: 0.9437256923758108
- Micro F1: 0.9442344045368619
- Weighted F1: 0.9442368364749825
- Macro Precision: 0.9431692663638349
- Micro Precision: 0.944234404536862
- Weighted Precision: 0.9446229335037916
- Macro Recall: 0.9446884750469657
- Micro Recall: 0.944234404536862
- Weighted Recall: 0.944234404536862
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-carer_5way-786524275
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-carer_5way-786524275", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-carer_5way-786524275", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,376 |
cynthiachan/procedure_classification_bert | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_101",
"LABEL_102",
"LABEL_103",
"LABEL_104",
"LABEL_105",
"LABEL_106",
"LABEL_107",
"LABEL_108",
"LABEL_109",
"LABEL_11",
"LABEL_110",
"LABEL_111",
"LABEL_112",
"LABEL_113",
"LABEL_114",
"LABEL_115",
"LABEL_116",
"LABEL_... | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned-dAB-002 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
dimboump/glue_sst_classifier | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
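For a quick test of the classifier, a `pipeline` one-liner works (a sketch; the example sentence is illustrative and the label names come from the checkpoint config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="dimboump/glue_sst_classifier")
print(classifier("A gorgeous, witty, seductive movie."))
```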
| 1,993 |
Caroline-Vandyck/glue_sst_classifier | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,993 |
corvusMidnight/glue_sst_classifier_ | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier_
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier_
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,995 |
anshr/distilgpt2_reward_model_05 | null | Entry not found | 15 |
Rem59/autotrain-Test_2-789524315 | [
"-1",
"0",
"1"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Rem59/autotrain-data-Test_2
co2_eq_emissions: 2.0134443204822188
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 789524315
- CO2 Emissions (in grams): 2.0134443204822188
## Validation Metrics
- Loss: 0.8042349815368652
- Accuracy: 0.6904761904761905
- Macro F1: 0.27230046948356806
- Micro F1: 0.6904761904761905
- Weighted F1: 0.5640509725016768
- Macro Precision: 0.23015873015873015
- Micro Precision: 0.6904761904761905
- Weighted Precision: 0.4767573696145125
- Macro Recall: 0.3333333333333333
- Micro Recall: 0.6904761904761905
- Weighted Recall: 0.6904761904761905
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Rem59/autotrain-Test_2-789524315
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Rem59/autotrain-Test_2-789524315", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Rem59/autotrain-Test_2-789524315", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,371 |
EAST/autotrain-Rule-793324440 | [
"0",
"1"
] | ---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- EAST/autotrain-data-Rule
co2_eq_emissions: 0.0025078722090032795
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 793324440
- CO2 Emissions (in grams): 0.0025078722090032795
## Validation Metrics
- Loss: 0.31105440855026245
- Accuracy: 0.9473684210526315
- Precision: 0.9
- Recall: 1.0
- AUC: 0.9444444444444445
- F1: 0.9473684210526316
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/EAST/autotrain-Rule-793324440
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("EAST/autotrain-Rule-793324440", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("EAST/autotrain-Rule-793324440", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,118 |
caush/Clickbait3 | [
"LABEL_0"
] | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Clickbait3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clickbait3
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.05 | 50 | 0.0373 |
| No log | 0.1 | 100 | 0.0320 |
| No log | 0.15 | 150 | 0.0295 |
| No log | 0.21 | 200 | 0.0302 |
| No log | 0.26 | 250 | 0.0331 |
| No log | 0.31 | 300 | 0.0280 |
| No log | 0.36 | 350 | 0.0277 |
| No log | 0.41 | 400 | 0.0316 |
| No log | 0.46 | 450 | 0.0277 |
| 0.0343 | 0.51 | 500 | 0.0276 |
| 0.0343 | 0.56 | 550 | 0.0282 |
| 0.0343 | 0.62 | 600 | 0.0280 |
| 0.0343 | 0.67 | 650 | 0.0271 |
| 0.0343 | 0.72 | 700 | 0.0264 |
| 0.0343 | 0.77 | 750 | 0.0265 |
| 0.0343 | 0.82 | 800 | 0.0260 |
| 0.0343 | 0.87 | 850 | 0.0263 |
| 0.0343 | 0.92 | 900 | 0.0259 |
| 0.0343 | 0.97 | 950 | 0.0277 |
| 0.0278 | 1.03 | 1000 | 0.0281 |
| 0.0278 | 1.08 | 1050 | 0.0294 |
| 0.0278 | 1.13 | 1100 | 0.0256 |
| 0.0278 | 1.18 | 1150 | 0.0258 |
| 0.0278 | 1.23 | 1200 | 0.0254 |
| 0.0278 | 1.28 | 1250 | 0.0265 |
| 0.0278 | 1.33 | 1300 | 0.0252 |
| 0.0278 | 1.38 | 1350 | 0.0251 |
| 0.0278 | 1.44 | 1400 | 0.0264 |
| 0.0278 | 1.49 | 1450 | 0.0262 |
| 0.023 | 1.54 | 1500 | 0.0272 |
| 0.023 | 1.59 | 1550 | 0.0278 |
| 0.023 | 1.64 | 1600 | 0.0255 |
| 0.023 | 1.69 | 1650 | 0.0258 |
| 0.023 | 1.74 | 1700 | 0.0262 |
| 0.023 | 1.79 | 1750 | 0.0250 |
| 0.023 | 1.85 | 1800 | 0.0253 |
| 0.023 | 1.9 | 1850 | 0.0271 |
| 0.023 | 1.95 | 1900 | 0.0248 |
| 0.023 | 2.0 | 1950 | 0.0258 |
| 0.0224 | 2.05 | 2000 | 0.0252 |
| 0.0224 | 2.1 | 2050 | 0.0259 |
| 0.0224 | 2.15 | 2100 | 0.0254 |
| 0.0224 | 2.21 | 2150 | 0.0260 |
| 0.0224 | 2.26 | 2200 | 0.0254 |
| 0.0224 | 2.31 | 2250 | 0.0266 |
| 0.0224 | 2.36 | 2300 | 0.0258 |
| 0.0224 | 2.41 | 2350 | 0.0258 |
| 0.0224 | 2.46 | 2400 | 0.0256 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| 3,666 |
caush/Clickbait5 | [
"LABEL_0"
] | ---
tags:
- generated_from_trainer
model-index:
- name: Clickbait5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clickbait5
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.04 | 50 | 0.0258 |
| No log | 0.08 | 100 | 0.0269 |
| No log | 0.12 | 150 | 0.0259 |
| No log | 0.16 | 200 | 0.0260 |
| No log | 0.21 | 250 | 0.0267 |
| No log | 0.25 | 300 | 0.0276 |
| No log | 0.29 | 350 | 0.0284 |
| No log | 0.33 | 400 | 0.0270 |
| No log | 0.37 | 450 | 0.0269 |
| 0.0195 | 0.41 | 500 | 0.0260 |
| 0.0195 | 0.45 | 550 | 0.0284 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,660 |
Rbanerjee/simpsons-character-discriminator | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
shahidul034/drug_sentiment_analysis | [
"bad",
"good"
] | Entry not found | 15 |
amirbr/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
| 1,046 |
adielsa/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5387376669923544
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8256
- Matthews Correlation: 0.5387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5257 | 1.0 | 535 | 0.5286 | 0.4093 |
| 0.3447 | 2.0 | 1070 | 0.5061 | 0.4972 |
| 0.2303 | 3.0 | 1605 | 0.5878 | 0.5245 |
| 0.1761 | 4.0 | 2140 | 0.7969 | 0.5153 |
| 0.1346 | 5.0 | 2675 | 0.8256 | 0.5387 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,999 |
TehranNLP-org/electra-base-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: MNLI
type: ''
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8879266428935303
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4265
- Accuracy: 0.8879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: not_parallel
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3762 | 1.0 | 12272 | 0.3312 | 0.8794 |
| 0.2542 | 2.0 | 24544 | 0.3467 | 0.8843 |
| 0.1503 | 3.0 | 36816 | 0.4265 | 0.8879 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
| 1,771 |
TehranNLP-org/bert-large-mnli | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: MNLI
type: ''
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8572592969943963
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5092
- Accuracy: 0.8573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: not_parallel
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
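With a per-device batch size of 1, the effective batch of 32 comes from gradient accumulation. A minimal, self-contained PyTorch sketch of the idea (the linear model and random data merely stand in for BERT-large and MNLI):
```python
import torch
from torch import nn

model = nn.Linear(10, 3)                                  # stand-in for the classifier
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
loss_fn = nn.CrossEntropyLoss()
data = [(torch.randn(1, 10), torch.randint(0, 3, (1,))) for _ in range(64)]

accum_steps = 32  # total_train_batch_size = train_batch_size (1) x accum_steps
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = loss_fn(model(x), y) / accum_steps  # scale so the accumulated gradient averages 32 examples
    loss.backward()                            # gradients add up across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()                       # one parameter update per 32 examples
        optimizer.zero_grad()
```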
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4736 | 1.0 | 12271 | 0.4213 | 0.8372 |
| 0.3248 | 2.0 | 24542 | 0.4055 | 0.8538 |
| 0.1571 | 3.0 | 36813 | 0.5092 | 0.8573 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
| 1,802 |
Yanael/bert-finetuned-mrpc | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-finetuned-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.8.1+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,049 |
Yanael/dummy-model | null | # Dummy Model
Following the Hugging Face course | 48 |
crcb/emo_go_new | [
"0",
"1",
"10",
"11",
"12",
"13",
"14",
"15",
"16",
"17",
"18",
"19",
"2",
"20",
"21",
"22",
"23",
"24",
"25",
"26",
"3",
"4",
"5",
"6",
"7",
"8",
"9"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- crcb/autotrain-data-go_emo_new
co2_eq_emissions: 20.58663910106142
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 813325491
- CO2 Emissions (in grams): 20.58663910106142
## Validation Metrics
- Loss: 1.3628994226455688
- Accuracy: 0.5920355494787216
- Macro F1: 0.4844439507523978
- Micro F1: 0.5920355494787216
- Weighted F1: 0.5873137663478112
- Macro Precision: 0.5458988948121151
- Micro Precision: 0.5920355494787216
- Weighted Precision: 0.591386299522425
- Macro Recall: 0.4753100798358001
- Micro Recall: 0.5920355494787216
- Weighted Recall: 0.5920355494787216
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-go_emo_new-813325491
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-go_emo_new-813325491", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-go_emo_new-813325491", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,378 |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2555
- Precision: 1.0
- Recall: 0.0200
- F1: 0.0393
- Accuracy: 0.0486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 95 | 0.5756 | nan | 0.0 | nan | 0.715 |
| No log | 2.0 | 190 | 0.5340 | 0.6429 | 0.1579 | 0.2535 | 0.735 |
| No log | 3.0 | 285 | 0.5298 | 0.5833 | 0.3684 | 0.4516 | 0.745 |
| No log | 4.0 | 380 | 0.5325 | 0.5789 | 0.3860 | 0.4632 | 0.745 |
| No log | 5.0 | 475 | 0.5452 | 0.4815 | 0.4561 | 0.4685 | 0.705 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 2,000 |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4527
- Precision: 0.2844
- Recall: 0.9676
- F1: 0.4395
- Accuracy: 0.2991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.1044 | 0.9742 | 1.0 | 0.9869 | 0.9742 |
| No log | 2.0 | 332 | 0.1269 | 0.9742 | 1.0 | 0.9869 | 0.9742 |
| No log | 3.0 | 498 | 0.1028 | 0.9742 | 1.0 | 0.9869 | 0.9742 |
| 0.0947 | 4.0 | 664 | 0.0836 | 0.9826 | 0.9971 | 0.9898 | 0.9799 |
| 0.0947 | 5.0 | 830 | 0.0884 | 0.9854 | 0.9912 | 0.9883 | 0.9771 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,999 |
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False | null | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0557
- Precision: 0.9930
- Recall: 0.9878
- F1: 0.9904
- Accuracy: 0.9814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 479 | 0.3334 | 0.9041 | 0.9041 | 0.9041 | 0.8550 |
| 0.3756 | 2.0 | 958 | 0.3095 | 0.8991 | 0.9251 | 0.9119 | 0.8649 |
| 0.2653 | 3.0 | 1437 | 0.3603 | 0.8929 | 0.9527 | 0.9218 | 0.8779 |
| 0.1991 | 4.0 | 1916 | 0.3907 | 0.8919 | 0.9540 | 0.9219 | 0.8779 |
| 0.1586 | 5.0 | 2395 | 0.3642 | 0.9070 | 0.9356 | 0.9211 | 0.8788 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,985 |
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8119
- Precision: 0.2752
- Recall: 0.9522
- F1: 0.4270
- Accuracy: 0.2849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.0726 | 0.9827 | 1.0 | 0.9913 | 0.9828 |
| No log | 2.0 | 332 | 0.0569 | 0.9827 | 1.0 | 0.9913 | 0.9828 |
| No log | 3.0 | 498 | 0.0434 | 0.9884 | 1.0 | 0.9942 | 0.9885 |
| 0.1021 | 4.0 | 664 | 0.0505 | 0.9884 | 1.0 | 0.9942 | 0.9885 |
| 0.1021 | 5.0 | 830 | 0.0472 | 0.9884 | 1.0 | 0.9942 | 0.9885 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 2,053 |
caush/Clickbait4 | [
"LABEL_0"
] | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Clickbait1
results: []
---
This model is a fine-tuned version of microsoft/Multilingual-MiniLM-L12-H384 on the Webis-Clickbait-17 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0261
The following table lists the performances achieved by the challenge participants. The primary evaluation measure is Mean Squared Error (MSE) with respect to the mean judgments of the annotators. Our result is 0.0261 on the MSE metric; we do not compute the other metrics. To avoid an unfair advantage from data that was unknown at the time of the challenge, we do not use k-fold cross-validation techniques.
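Concretely, the MSE is computed between the model's scores and the mean annotator judgments; a tiny illustration (all numbers are made up):
```python
import torch

predictions = torch.tensor([0.80, 0.10, 0.40])     # model clickbait scores (illustrative)
mean_judgments = torch.tensor([0.90, 0.00, 0.50])  # mean of the annotators' judgments (illustrative)
mse = torch.mean((predictions - mean_judgments) ** 2)
print(mse.item())  # 0.01
```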
| team | MSE | F1 | Precision | Recall | Accuracy | Runtime |
|------|-----|----|-----------|--------|----------|---------|
|goldfish | 0.024 | 0.741 | 0.739 | 0.742 | 0.876 | 16:20:21|
|caush | 0.026 | | | | | 00:11:00|
|monkfish | 0.026 | 0.694 | 0.785 | 0.622 | 0.870 | 03:41:35|
|dartfish | 0.027 | 0.706 | 0.733 | 0.681 | 0.865 | 00:47:07|
|torpedo19 | 0.03 | 0.677 | 0.755 | 0.614 | 0.861 | 00:52:44|
|albacore | 0.031 | 0.67 | 0.731 | 0.62 | 0.855 | 00:01:10|
|blobfish | 0.032 | 0.646 | 0.738 | 0.574 | 0.85 | 00:03:22|
|zingel | 0.033 | 0.683 | 0.719 | 0.65 | 0.856 | 00:03:27|
|anchovy | 0.034 | 0.68 | 0.717 | 0.645 | 0.855 | 00:07:20|
|ray | 0.034 | 0.684 | 0.691 | 0.677 | 0.851 | 00:29:28|
|icarfish | 0.035 | 0.621 | 0.768 | 0.522 | 0.849 | 01:02:57|
|emperor | 0.036 | 0.641 | 0.714 | 0.581 | 0.845 | 00:04:03|
|carpetshark | 0.036 | 0.638 | 0.728 | 0.568 | 0.847 | 00:08:05|
|electriceel | 0.038 | 0.588 | 0.727 | 0.493 | 0.835 | 01:04:54|
|arowana | 0.039 | 0.656 | 0.659 | 0.654 | 0.837 | 00:35:24|
|pineapplefish | 0.041 | 0.631 | 0.642 | 0.621 | 0.827 | 00:54:28|
|whitebait | 0.043 | 0.565 | 0.7 | 0.474 | 0.826 | 00:04:31| | 1,917 |
DioLiu/distilbert-base-uncased-finetuned-sst2-nostop | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-nostop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-nostop
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0701
- Accuracy: 0.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.125 | 1.0 | 1116 | 0.0975 | 0.9743 |
| 0.0599 | 2.0 | 2232 | 0.0692 | 0.9840 |
| 0.0191 | 3.0 | 3348 | 0.0570 | 0.9871 |
| 0.0109 | 4.0 | 4464 | 0.0660 | 0.9882 |
| 0.0092 | 5.0 | 5580 | 0.0701 | 0.9888 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,644 |
chebmarcel/sun | null | Entry not found | 15 |
DioLiu/distilbert-base-uncased-finetuned-sst2-moreShake | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-moreShake
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-moreShake
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1864
- Accuracy: 0.9739
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1208 | 1.0 | 1957 | 0.1102 | 0.9661 |
| 0.0516 | 2.0 | 3914 | 0.1222 | 0.9704 |
| 0.0223 | 3.0 | 5871 | 0.1574 | 0.9690 |
| 0.0071 | 4.0 | 7828 | 0.1997 | 0.9706 |
| 0.0026 | 5.0 | 9785 | 0.1864 | 0.9739 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,650 |
Someshfengde/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
YeRyeongLee/bert-base-uncased-finetuned-small-0505 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-small-0505
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-small-0505
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8649
- Accuracy: 0.1818
- F1: 0.1182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 13 | 1.8337 | 0.1818 | 0.0559 |
| No log | 2.0 | 26 | 1.8559 | 0.2727 | 0.1414 |
| No log | 3.0 | 39 | 1.8488 | 0.1818 | 0.1010 |
| No log | 4.0 | 52 | 1.8649 | 0.1818 | 0.1182 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,620 |
YeRyeongLee/mental-bert-base-uncased-finetuned-0505 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mental-bert-base-uncased-finetuned-0505
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental-bert-base-uncased-finetuned-0505
This model is a fine-tuned version of [mental/mental-bert-base-uncased](https://huggingface.co/mental/mental-bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4195
- Accuracy: 0.9181
- F1: 0.9182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 1373 | 0.2846 | 0.9124 | 0.9119 |
| No log | 2.0 | 2746 | 0.3468 | 0.9132 | 0.9129 |
| No log | 3.0 | 4119 | 0.3847 | 0.9189 | 0.9192 |
| No log | 4.0 | 5492 | 0.4195 | 0.9181 | 0.9182 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,630 |
JoMart/distilbert-base-uncased-finetuned-cola | null | Entry not found | 15 |
DioLiu/distilbert-base-uncased-finetuned-sst2-with-unfamiliar-words | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-with-unfamiliar-words
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-with-unfamiliar-words
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0870
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2917 | 1.0 | 975 | 0.0703 | 0.9778 |
| 0.063 | 2.0 | 1950 | 0.0815 | 0.9821 |
| 0.0233 | 3.0 | 2925 | 0.0680 | 0.9866 |
| 0.0134 | 4.0 | 3900 | 0.0817 | 0.9866 |
| 0.0054 | 5.0 | 4875 | 0.0870 | 0.9866 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,674 |
thusken/nb-bert-base-ctr-regression | [
"LABEL_0"
] | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: nb-bert-base-ctr-regression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nb-bert-base-ctr-regression
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0073
- Mse: 0.0073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
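The linear schedule with 500 warmup steps corresponds to something like the following (a sketch with a stand-in parameter; 11030 total steps = 10 epochs x 1103 steps per epoch, per the table below):
```python
import torch
from transformers import get_linear_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for the model parameters
optimizer = torch.optim.AdamW(params, lr=3e-5)
# LR ramps from 0 to 3e-5 over the first 500 steps, then decays linearly to 0
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=11030
)
```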
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0106 | 1.0 | 1103 | 0.0069 | 0.0069 |
| 0.0073 | 2.0 | 2206 | 0.0072 | 0.0072 |
| 0.0058 | 3.0 | 3309 | 0.0063 | 0.0063 |
| 0.0038 | 4.0 | 4412 | 0.0073 | 0.0073 |
| 0.0025 | 5.0 | 5515 | 0.0064 | 0.0064 |
| 0.0019 | 6.0 | 6618 | 0.0065 | 0.0065 |
| 0.0014 | 7.0 | 7721 | 0.0066 | 0.0066 |
| 0.0011 | 8.0 | 8824 | 0.0067 | 0.0067 |
| 0.0008 | 9.0 | 9927 | 0.0066 | 0.0066 |
| 0.0007 | 10.0 | 11030 | 0.0066 | 0.0066 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
| 1,946 |
chrishistewandb/finetuning-sentiment-model-3000-samples | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-1 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-2 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-5 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-6 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-7 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-8 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-10 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-13 | null | Entry not found | 15 |