| modelId (string, 6–107 chars) | label (list) | readme (string, 0–56.2k chars) | readme_len (int64, 0–56.2k) |
|---|---|---|---|
Jeevesh8/bert_ft_cola-40 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-45 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-56 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-75 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-77 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-81 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-83 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-85 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-99 | null | Entry not found | 15 |
usmanazhar/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8766233766233766
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3346
- Accuracy: 0.8733
- F1: 0.8766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,521 |
upsalite/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9284995196221415
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2083
- Accuracy: 0.9285
- F1: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8128 | 1.0 | 250 | 0.3000 | 0.914 | 0.9109 |
| 0.2423 | 2.0 | 500 | 0.2083 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
| 1,807 |
Jeevesh8/6ep_bert_ft_cola-10 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-14 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-23 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-28 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-34 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-42 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-51 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-60 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-62 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-63 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-87 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-88 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-90 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-91 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-92 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-94 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-95 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-97 | null | Entry not found | 15 |
anuj55/distilbert-base-uncased-finetuned-mrpc | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8480392156862745
- name: F1
type: f1
value: 0.8945578231292517
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6236
- Accuracy: 0.8480
- F1: 0.8946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.4371 | 0.8137 | 0.8746 |
| No log | 2.0 | 460 | 0.4117 | 0.8431 | 0.8940 |
| 0.4509 | 3.0 | 690 | 0.3943 | 0.8431 | 0.8908 |
| 0.4509 | 4.0 | 920 | 0.5686 | 0.8382 | 0.8893 |
| 0.1915 | 5.0 | 1150 | 0.6236 | 0.8480 | 0.8946 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.8.1+cu102
- Datasets 1.18.4
- Tokenizers 0.12.1
| 2,010 |
mertyrgn/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9235106231638174
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2064
- Accuracy: 0.9235
- F1: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8272 | 1.0 | 250 | 0.2939 | 0.917 | 0.9153 |
| 0.2414 | 2.0 | 500 | 0.2064 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,811 |
anuj55/deberta-v3-base-finetuned-polifact | null | Entry not found | 15 |
Danni/distilbert-base-uncased-finetuned-dbpedia-0517 | [
"Animal",
"Biomolecule",
"ChemicalSubstance",
"Company",
"Device",
"Food",
"MeanOfTransportation",
"Plant",
"Product"
] | Entry not found | 15 |
itzo/distilbert-base-uncased-fine-tuned-on-emotion-dataset | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-fine-tuned-on-emotion-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-fine-tuned-on-emotion-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2138
- Accuracy Score: 0.9275
- F1 Score: 0.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:--------:|
| 0.8024 | 1.0 | 250 | 0.3089 | 0.906 | 0.9021 |
| 0.2448 | 2.0 | 500 | 0.2138 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,568 |
badrou1/test_rex_model | null | ---
license: other
---
| 23 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-85 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-86 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-99 | null | Entry not found | 15 |
wooglee/distilbert-imdb | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.1951 | 0.9240 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0a0+17540c5
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,246 |
connectivity/feather_berts_13 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_14 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_18 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_19 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_29 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_37 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_38 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_45 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_61 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_64 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_66 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_67 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_68 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_69 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_70 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_73 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_74 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_75 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_77 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_78 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_79 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_82 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_84 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_85 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_86 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_88 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_92 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_94 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/bert_ft_qqp-24 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-28 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-31 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-35 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-36 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-43 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-44 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-45 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-48 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-66 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-70 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-0 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-75 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-80 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-43 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-44 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-45 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-81 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-47 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-83 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-86 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-90 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-92 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-96 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-97 | null | Entry not found | 15 |
andrewzolensky/bert-emotion | [
"anger",
"joy",
"optimism",
"sadness"
] | Entry not found | 15 |
coreybrady/bert-emotion | [
"anger",
"joy",
"optimism",
"sadness"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- precision
- recall
model-index:
- name: bert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- name: Precision
type: precision
value: 0.7262254187805659
- name: Recall
type: recall
value: 0.725549671319356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1670
- Precision: 0.7262
- Recall: 0.7255
- Fscore: 0.7253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8561 | 1.0 | 815 | 0.7844 | 0.7575 | 0.6081 | 0.6253 |
| 0.5337 | 2.0 | 1630 | 0.9080 | 0.7567 | 0.7236 | 0.7325 |
| 0.2573 | 3.0 | 2445 | 1.1670 | 0.7262 | 0.7255 | 0.7253 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,923 |
uygarkurt/distilbert-base-uncased-finetuned-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.9200387095502811
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.92
- F1: 0.9200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8096 | 1.0 | 250 | 0.3081 | 0.9005 | 0.8974 |
| 0.2404 | 2.0 | 500 | 0.2156 | 0.92 | 0.9200 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,802 |
KDB/bert-base-finetuned-sts | [
"LABEL_0"
] | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model-index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.8970473420720607
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4770
- Pearsonr: 0.8970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 92 | 0.6330 | 0.8717 |
| No log | 2.0 | 184 | 0.6206 | 0.8818 |
| No log | 3.0 | 276 | 0.5010 | 0.8947 |
| No log | 4.0 | 368 | 0.4717 | 0.8956 |
| No log | 5.0 | 460 | 0.4770 | 0.8970 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,800 |
YeRyeongLee/bert-base-uncased-finetuned-removed-0530 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-removed-0530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-removed-0530
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1269
- Accuracy: 0.8745
- F1: 0.8745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.5939 | 0.8113 | 0.8113 |
| No log | 2.0 | 6360 | 0.6459 | 0.8189 | 0.8183 |
| No log | 3.0 | 9540 | 0.6523 | 0.8597 | 0.8604 |
| No log | 4.0 | 12720 | 0.8159 | 0.8522 | 0.8521 |
| No log | 5.0 | 15900 | 0.9294 | 0.8601 | 0.8599 |
| No log | 6.0 | 19080 | 1.0066 | 0.8594 | 0.8592 |
| No log | 7.0 | 22260 | 1.0268 | 0.8686 | 0.8689 |
| 0.2451 | 8.0 | 25440 | 1.0274 | 0.8758 | 0.8760 |
| 0.2451 | 9.0 | 28620 | 1.0850 | 0.8726 | 0.8727 |
| 0.2451 | 10.0 | 31800 | 1.1269 | 0.8745 | 0.8745 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
| 2,097 |
sahn/distilbert-base-uncased-finetuned-imdb-subtle | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-imdb-subtle
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9074
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-subtle
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5219
- Accuracy: 0.9074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
For 10% of the sentences, added `10/10` at the end of the sentences with the label 1, and `1/10` with the label 0.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2308 | 1.0 | 1250 | 0.3615 | 0.8866 |
| 0.1381 | 2.0 | 2500 | 0.2195 | 0.9354 |
| 0.068 | 3.0 | 3750 | 0.4582 | 0.9014 |
| 0.0395 | 4.0 | 5000 | 0.4480 | 0.9164 |
| 0.0202 | 5.0 | 6250 | 0.5219 | 0.9074 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,966 |
jkhan447/sarcasm-detection-Bert-base-uncased | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-Bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0623
- Accuracy: 0.7127
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,177 |
Fulccrum/distilbert-base-uncased-finetuned-sst2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9128440366972477
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3739
- Accuracy: 0.9128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1885 | 1.0 | 4210 | 0.3092 | 0.9083 |
| 0.1311 | 2.0 | 8420 | 0.3809 | 0.9071 |
| 0.1036 | 3.0 | 12630 | 0.3739 | 0.9128 |
| 0.0629 | 4.0 | 16840 | 0.4623 | 0.9083 |
| 0.036 | 5.0 | 21050 | 0.5198 | 0.9048 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,874 |
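All of the rows above follow the four-column schema given in the header: `modelId`, `label`, `readme`, and `readme_len`. As a minimal sketch of how the listing might be post-processed (assuming the table has been exported to a local Parquet file; the filename `model_cards.parquet` is hypothetical), the placeholder rows whose README is just "Entry not found" can be filtered out with pandas:

```python
import pandas as pd

# Hypothetical export of the table above; the filename is an assumption.
df = pd.read_parquet("model_cards.parquet")

# "Entry not found" rows (readme_len == 15) are placeholders for repos
# without a model card; keep only rows with a real README.
cards = df[df["readme"].ne("Entry not found")]

# readme_len appears to be the character count of the readme column
# ("Entry not found" is exactly 15 characters), so the two should agree.
mismatches = cards[cards["readme"].str.len() != cards["readme_len"]]

print(f"{len(cards)} of {len(df)} models have a model card "
      f"({len(mismatches)} length mismatches)")
```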