| modelId (string) | label (list) | readme (string) | readme_len (int64) |
|---|---|---|---|
DoyyingFace/bert-asian-hate-tweets-asonam-clean | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-concat-unclean | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-concat-clean | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-concat-unclean-with-clean-valid | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-self-clean-with-unclean-valid | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-4 | null | Entry not found | 15 |
vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k | [
"LABEL_0"
] | # cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k
This CrossEncoder was trained with MarginMSE loss, starting from the [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k](https://hf.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k) checkpoint. **The word embedding matrix was frozen during training.**
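For intuition, MarginMSE distills the *margin* between a positive and a negative passage score from a teacher model rather than the absolute scores. A minimal sketch of the loss (illustrative only, not the exact training code):
```python
import torch.nn.functional as F

def margin_mse_loss(student_pos, student_neg, teacher_pos, teacher_neg):
    # Regress the student's score margin (positive minus negative passage)
    # onto the teacher's margin for the same (query, passage) pairs.
    return F.mse_loss(student_pos - student_neg, teacher_pos - teacher_neg)
```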
You can load the model with [sentence-transformers](https://sbert.net):
```python
from sentence_transformers import CrossEncoder
from torch import nn

# Identity activation returns the raw relevance logits rather than sigmoid scores
model = CrossEncoder(
    "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k",
    default_activation_function=nn.Identity(),
)
```
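Once loaded, the model scores (query, passage) pairs directly. A short usage sketch (the passages and printed values are illustrative):
```python
# Higher score = more relevant; raw logits, since the activation is Identity.
scores = model.predict([
    ("how many people live in berlin", "Berlin has about 3.7 million inhabitants."),
    ("how many people live in berlin", "The capital of France is Paris."),
])
print(scores)  # e.g. [ 9.1, -8.3 ] -- illustrative values
```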
Performance on TREC Deep Learning (nDCG@10; the metric is sketched after this list):
- TREC-DL 19: 72.62
- TREC-DL 20: 73.22
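For reference, nDCG@10 discounts graded relevance by rank and normalizes by the ideal ordering. A self-contained sketch of the metric:
```python
import math

def ndcg_at_10(relevances):
    # relevances: graded relevance of the returned documents, in ranked order.
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:10]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```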
| 657 |
DoyyingFace/bert-asian-hate-tweets-self-clean-small | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-more-epoch | null | Entry not found | 15 |
ali2066/finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892
## Model description
More information needed
## Intended uses & limitations
More information needed
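Pending a fuller description, here is a minimal usage sketch, assuming the standard `transformers` pipeline API (the input text and printed output are illustrative):
```python
from transformers import pipeline

# Binary sentiment classification with the NEGATIVE/POSITIVE labels
# this checkpoint was fine-tuned for.
classifier = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26",
)
print(classifier("I really enjoyed this."))
# e.g. [{'label': 'POSITIVE', 'score': 0.98}] -- illustrative output
```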
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
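A hedged reproduction sketch of these settings with the `transformers` Trainer API (the `output_dir` path is hypothetical):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="finetuned_sentence",  # hypothetical path
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```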
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |
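The Accuracy and F1 columns above are the kind of output a Trainer `compute_metrics` hook produces; a minimal sketch, assuming scikit-learn is available:
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # predicted class ids
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),  # binary F1 on the positive class
    }
```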
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 |
| 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 |
| 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 |
| 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr7_2e-05_all_26_02_2022-04_36_45 | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
ali2066/finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,791 |
ali2066/finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,791 |
ali2066/finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,791 |
ali2066/finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,791 |
ali2066/finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,791 |
smoeller/student-subject-questions | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.7100 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.7150 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.7150 | 0.4000 |
| No log | 4.0 | 192 | 0.6009 | 0.7050 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7000 | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,806 |
ali2066/finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.7100 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.7150 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.7150 | 0.4000 |
| No log | 4.0 | 192 | 0.6009 | 0.7050 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7000 | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,806 |
ali2066/finetuned_sentence_itr4_2e-05_webDiscourse_27_02_2022-19_01_41 | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4917
- Accuracy: 0.8231
- F1: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3883 | 0.8146 | 0.8833 |
| No log | 2.0 | 390 | 0.3607 | 0.8390 | 0.8964 |
| 0.4085 | 3.0 | 585 | 0.3812 | 0.8488 | 0.9042 |
| 0.4085 | 4.0 | 780 | 0.3977 | 0.8549 | 0.9077 |
| 0.4085 | 5.0 | 975 | 0.4233 | 0.8573 | 0.9092 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3944
- Accuracy: 0.8279
- F1: 0.8901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3946 | 0.8012 | 0.8743 |
| No log | 2.0 | 390 | 0.3746 | 0.8329 | 0.8929 |
| 0.3644 | 3.0 | 585 | 0.4288 | 0.8268 | 0.8849 |
| 0.3644 | 4.0 | 780 | 0.5352 | 0.8232 | 0.8841 |
| 0.3644 | 5.0 | 975 | 0.5768 | 0.8268 | 0.8864 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5819
- Accuracy: 0.7058
- F1: 0.4267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6110 | 0.6650 | 0.0000 |
| No log | 2.0 | 96 | 0.5706 | 0.6850 | 0.2588 |
| No log | 3.0 | 144 | 0.5484 | 0.7250 | 0.5299 |
| No log | 4.0 | 192 | 0.5585 | 0.7100 | 0.4727 |
| No log | 5.0 | 240 | 0.5616 | 0.7250 | 0.5133 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,806 |
ali2066/finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5777
- Accuracy: 0.6794
- F1: 0.5010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6059 | 0.6300 | 0.4932 |
| No log | 2.0 | 96 | 0.6327 | 0.7050 | 0.5630 |
| No log | 3.0 | 144 | 0.7003 | 0.6950 | 0.5197 |
| No log | 4.0 | 192 | 0.9368 | 0.6900 | 0.4655 |
| No log | 5.0 | 240 | 1.1935 | 0.6850 | 0.4425 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,809 |
ali2066/finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6020
- Accuracy: 0.7032
- F1: 0.4851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5914 | 0.6700 | 0.0294 |
| No log | 2.0 | 96 | 0.5616 | 0.6950 | 0.2824 |
| No log | 3.0 | 144 | 0.5596 | 0.7300 | 0.5909 |
| No log | 4.0 | 192 | 0.6273 | 0.7300 | 0.5000 |
| No log | 5.0 | 240 | 0.6370 | 0.7100 | 0.5000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,806 |
ali2066/finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3455
- Accuracy: 0.8609
- F1: 0.9156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4468 | 0.8235 | 0.8929 |
| No log | 2.0 | 162 | 0.4497 | 0.8382 | 0.9000 |
| No log | 3.0 | 243 | 0.4861 | 0.8309 | 0.8940 |
| No log | 4.0 | 324 | 0.5087 | 0.8235 | 0.8879 |
| No log | 5.0 | 405 | 0.5228 | 0.8199 | 0.8858 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,794 |
ali2066/finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3358
- Accuracy: 0.8688
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4116 | 0.8382 | 0.9027 |
| No log | 2.0 | 162 | 0.4360 | 0.8382 | 0.8952 |
| No log | 3.0 | 243 | 0.5719 | 0.8382 | 0.8995 |
| No log | 4.0 | 324 | 0.7251 | 0.8493 | 0.9021 |
| No log | 5.0 | 405 | 0.8384 | 0.8456 | 0.9019 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,797 |
ali2066/finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0926
- Accuracy: 0.9772
- F1: 0.9883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0539 | 0.9885 | 0.9942 |
| No log | 2.0 | 208 | 0.0282 | 0.9885 | 0.9942 |
| No log | 3.0 | 312 | 0.0317 | 0.9914 | 0.9956 |
| No log | 4.0 | 416 | 0.0462 | 0.9885 | 0.9942 |
| 0.0409 | 5.0 | 520 | 0.0517 | 0.9885 | 0.9942 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,805 |
ali2066/finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0890
- Accuracy: 0.9750
- F1: 0.9873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0485 | 0.9885 | 0.9942 |
| No log | 2.0 | 208 | 0.0558 | 0.9857 | 0.9927 |
| No log | 3.0 | 312 | 0.0501 | 0.9828 | 0.9913 |
| No log | 4.0 | 416 | 0.0593 | 0.9828 | 0.9913 |
| 0.04 | 5.0 | 520 | 0.0653 | 0.9828 | 0.9913 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,802 |
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4638
- Accuracy: 0.8247
- F1: 0.8867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4069 | 0.7976 | 0.8750 |
| No log | 2.0 | 390 | 0.4061 | 0.8134 | 0.8838 |
| 0.4074 | 3.0 | 585 | 0.4075 | 0.8134 | 0.8798 |
| 0.4074 | 4.0 | 780 | 0.4746 | 0.8256 | 0.8885 |
| 0.4074 | 5.0 | 975 | 0.4881 | 0.8220 | 0.8845 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3825
- Accuracy: 0.8144
- F1: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3975 | 0.8122 | 0.8795 |
| No log | 2.0 | 390 | 0.4376 | 0.8085 | 0.8673 |
| 0.3169 | 3.0 | 585 | 0.5736 | 0.8171 | 0.8790 |
| 0.3169 | 4.0 | 780 | 0.8178 | 0.8098 | 0.8754 |
| 0.3169 | 5.0 | 975 | 0.9244 | 0.8073 | 0.8738 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,791 |
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6071
- Accuracy: 0.8337
- F1: 0.8922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3920 | 0.7988 | 0.8624 |
| No log | 2.0 | 390 | 0.3873 | 0.8171 | 0.8739 |
| 0.3673 | 3.0 | 585 | 0.4354 | 0.8256 | 0.8835 |
| 0.3673 | 4.0 | 780 | 0.5358 | 0.8293 | 0.8887 |
| 0.3673 | 5.0 | 975 | 0.5616 | 0.8366 | 0.8923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51 | [
"NEGATIVE",
"POSITIVE"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4563
- Accuracy: 0.8440
- F1: 0.8954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4302 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3970 | 0.8220 | 0.8875 |
| 0.3703 | 3.0 | 585 | 0.3972 | 0.8402 | 0.8934 |
| 0.3703 | 4.0 | 780 | 0.4945 | 0.8390 | 0.8935 |
| 0.3703 | 5.0 | 975 | 0.5354 | 0.8305 | 0.8898 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,752 |
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4208
- Accuracy: 0.8283
- F1: 0.8915
- Precision: 0.8487
- Recall: 0.9389
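Since this card reports all four classification metrics, a small self-contained illustration of how they relate (toy labels only, not this model's predictions; assumes scikit-learn is installed):
```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy example -- illustrates the metric definitions, not this model's outputs.
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]
print(accuracy_score(y_true, y_pred))   # fraction of correct predictions
print(precision_score(y_true, y_pred))  # of predicted positives, how many are correct
print(recall_score(y_true, y_pred))     # of actual positives, how many were found
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```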
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4443 | 0.7768 | 0.8589 | 0.8072 | 0.9176 |
| 0.4532 | 2.0 | 780 | 0.4603 | 0.8098 | 0.8791 | 0.8302 | 0.9341 |
| 0.2608 | 3.0 | 1170 | 0.5284 | 0.8061 | 0.8713 | 0.8567 | 0.8863 |
| 0.1577 | 4.0 | 1560 | 0.6398 | 0.8085 | 0.8749 | 0.8472 | 0.9044 |
| 0.1577 | 5.0 | 1950 | 0.7089 | 0.8085 | 0.8741 | 0.8516 | 0.8979 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,993 |
ali2066/finetuned_sentence_itr0_2e-05_essays_01_03_2022-13_20_40 | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
batterydata/batterybert-cased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryBERT-cased for Battery Abstract Classification
**Language model:** batterybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.29,
"Test accuracy": 96.85,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'  # a plain string, not a set literal
res = nlp(text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | 1,452 |
batterydata/batteryonlybert-uncased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryOnlyBERT-uncased for Battery Abstract Classification
**Language model:** batteryonlybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 13
base_LM_model = "batteryonlybert-uncased"
learning_rate = 3e-5
```
## Performance
```
"Validation accuracy": 97.18,
"Test accuracy": 97.08,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'  # a plain string, not a set literal
res = nlp(text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | 1,476 |
Akash7897/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.522211073949747
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0789
- Matthews Correlation: 0.5222
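Matthews correlation is the standard CoLA metric because it stays near 0 for chance-level predictions even on imbalanced data. A toy illustration (assumes scikit-learn; these are not this model's predictions):
```python
from sklearn.metrics import matthews_corrcoef

# Toy example -- MCC ranges from -1 to 1, with 0 meaning chance agreement.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]
print(matthews_corrcoef(y_true, y_pred))
```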
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1472 | 1.0 | 535 | 0.8407 | 0.4915 |
| 0.1365 | 2.0 | 1070 | 0.9236 | 0.4990 |
| 0.1194 | 3.0 | 1605 | 0.8753 | 0.4953 |
| 0.1313 | 4.0 | 2140 | 0.9684 | 0.5013 |
| 0.0895 | 5.0 | 2675 | 1.0789 | 0.5222 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| 1,999 |
clapika2010/hospital_finetuned | null | Entry not found | 15 |
Anthos23/FS-finbert-fine-tuned | [
"negative",
"neutral",
"positive"
] | Entry not found | 15 |
xinzhel/gpt2-ag-news | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
license: apache-2.0
---
| 31 |
aytugkaya/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
jkhan447/sentiment-model-sample-go-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_3",
"LABEL_4",
... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-go-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.5827886710239651
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-go-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2674
- Accuracy: 0.5828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,445 |
Manauu17/roberta_sentiments_es | [
"Negative",
"Neutral",
"Positive"
] | # roberta_sentiments_es, a Sentiment Analysis model for Spanish sentences
This is a roBERTa-base model trained on ~58M tweets and fine-tuned for sentiment analysis. The model currently supports Spanish sentences.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
import pandas as pd
from scipy.special import softmax
MODEL = 'Manauu17/roberta_sentiments_es'
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PyTorch
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
text = ['@usuario siempre es bueno la opinión de un playo',
'Bendito año el que me espera']
encoded_input = tokenizer(text, return_tensors='pt', padding=True, truncation=True)
output = model(**encoded_input)
scores = output[0].detach().numpy()
# TensorFlow
model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
text = ['La guerra no es buena para nadie.','Espero que mi jefe me de mañana libre']
encoded_input = tokenizer(text, return_tensors='tf', padding=True, truncation=True)
output = model(encoded_input)
scores = output[0].numpy()
# Results
labels_dict = {0: 'Negative', 1: 'Neutral', 2: 'Positive'}  # assumed label order; verify against the model's id2label config

def get_scores(model_output, labels_dict):
    scores = softmax(model_output, axis=1)  # softmax per row, not over the flattened array
    frame = pd.DataFrame(scores, columns=labels_dict.values())
    return frame
```
Output:
```
# PyTorch
get_scores(scores, labels_dict).style.highlight_max(axis=1, color="green")
Negative Neutral Positive
0 0.000607 0.004851 0.906596
1 0.079812 0.006650 0.001484
# TensorFlow
get_scores(scores, labels_dict).style.highlight_max(axis=1, color="green")
Negative Neutral Positive
0 0.017030 0.008920 0.000667
1 0.000260 0.001695 0.971429
```
| 1,856 |
daisyxie21/bert-base-uncased-8-50-0.01 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-8-50-0.01
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-8-50-0.01
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9219
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| No log | 1.0 | 400 | 0.9219 | 0.0 |
| 1.2047 | 2.0 | 800 | 1.8168 | 0.0 |
| 1.0707 | 3.0 | 1200 | 1.4474 | 0.0 |
| 1.0538 | 4.0 | 1600 | 1.5223 | 0.0 |
| 1.316 | 5.0 | 2000 | 0.8467 | 0.0 |
| 1.316 | 6.0 | 2400 | 1.0906 | 0.0 |
| 1.2739 | 7.0 | 2800 | 0.6851 | 0.0 |
| 1.1342 | 8.0 | 3200 | 1.3170 | 0.0 |
| 1.2572 | 9.0 | 3600 | 0.8870 | 0.0 |
| 1.0237 | 10.0 | 4000 | 1.3236 | 0.0 |
| 1.0237 | 11.0 | 4400 | 0.9025 | 0.0 |
| 0.9597 | 12.0 | 4800 | 0.7757 | 0.0 |
| 1.0946 | 13.0 | 5200 | 1.2551 | 0.0 |
| 1.0011 | 14.0 | 5600 | 1.1606 | 0.0 |
| 1.1111 | 15.0 | 6000 | 0.6040 | 0.0 |
| 1.1111 | 16.0 | 6400 | 1.4347 | 0.0 |
| 1.0098 | 17.0 | 6800 | 0.6218 | 0.0 |
| 1.0829 | 18.0 | 7200 | 0.4979 | 0.0 |
| 0.9131 | 19.0 | 7600 | 1.3040 | 0.0 |
| 0.879 | 20.0 | 8000 | 2.0309 | 0.0 |
| 0.879 | 21.0 | 8400 | 0.5150 | 0.0 |
| 0.9646 | 22.0 | 8800 | 0.4850 | 0.0 |
| 0.9625 | 23.0 | 9200 | 0.5076 | 0.0 |
| 0.9129 | 24.0 | 9600 | 1.1277 | 0.0 |
| 0.8839 | 25.0 | 10000 | 0.9403 | 0.0 |
| 0.8839 | 26.0 | 10400 | 1.6226 | 0.0 |
| 0.9264 | 27.0 | 10800 | 0.6049 | 0.0 |
| 0.7999 | 28.0 | 11200 | 0.9549 | 0.0 |
| 0.752 | 29.0 | 11600 | 0.6757 | 0.0 |
| 0.7675 | 30.0 | 12000 | 0.7320 | 0.0 |
| 0.7675 | 31.0 | 12400 | 0.8393 | 0.0 |
| 0.6887 | 32.0 | 12800 | 0.5977 | 0.0 |
| 0.7563 | 33.0 | 13200 | 0.4815 | 0.0 |
| 0.7671 | 34.0 | 13600 | 0.5457 | 0.0 |
| 0.7227 | 35.0 | 14000 | 0.7384 | 0.0 |
| 0.7227 | 36.0 | 14400 | 0.7749 | 0.0 |
| 0.7308 | 37.0 | 14800 | 0.4726 | 0.0 |
| 0.7191 | 38.0 | 15200 | 0.5069 | 0.0 |
| 0.6846 | 39.0 | 15600 | 0.4762 | 0.0 |
| 0.6151 | 40.0 | 16000 | 0.4738 | 0.0 |
| 0.6151 | 41.0 | 16400 | 0.5114 | 0.0 |
| 0.5982 | 42.0 | 16800 | 0.4866 | 0.0 |
| 0.6199 | 43.0 | 17200 | 0.4717 | 0.0 |
| 0.5737 | 44.0 | 17600 | 0.7651 | 0.0 |
| 0.5703 | 45.0 | 18000 | 0.8008 | 0.0 |
| 0.5703 | 46.0 | 18400 | 0.5391 | 0.0 |
| 0.5748 | 47.0 | 18800 | 0.5097 | 0.0 |
| 0.5297 | 48.0 | 19200 | 0.4731 | 0.0 |
| 0.4902 | 49.0 | 19600 | 0.4720 | 0.0 |
| 0.4955 | 50.0 | 20000 | 0.4748 | 0.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| 5,321 |
ScandinavianMrT/distilbert-SARC | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-SARC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-SARC
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4976
- eval_accuracy: 0.7590
- eval_runtime: 268.1875
- eval_samples_per_second: 753.782
- eval_steps_per_second: 47.113
- epoch: 1.0
- step: 50539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,234 |
vzty/bert-base-uncased-finetuned-argument-detection | null | Entry not found | 15 |
chiragme/autonlp-imdb-sentiment-analysis-623817873 | [
"neg",
"pos"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- chiragme/autonlp-data-imdb-sentiment-analysis
co2_eq_emissions: 147.38973865706626
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 623817873
- CO2 Emissions (in grams): 147.38973865706626
## Validation Metrics
- Loss: 0.2412157654762268
- Accuracy: 0.9306
- Precision: 0.9377795851972347
- Recall: 0.9224
- AUC: 0.97000504
- F1: 0.9300262149626941
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/chiragme/autonlp-imdb-sentiment-analysis-623817873
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("chiragme/autonlp-imdb-sentiment-analysis-623817873", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("chiragme/autonlp-imdb-sentiment-analysis-623817873", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,183 |
ScandinavianMrT/distilbert-SARC_withcontext | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-SARC_withcontext
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-SARC_withcontext
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4736
- Accuracy: 0.7732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4749 | 1.0 | 50539 | 0.4736 | 0.7732 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,362 |
anjandash/JavaBERT-mini | null | ---
language:
- java
license: mit
datasets:
- anjandash/java-8m-methods-v1
---
| 91 |
Splend1dchan/bert-large-uncased-slue-goldtrascription-e3-lr5e-5 | [
"Negative",
"Neutral",
"Positive"
] | Entry not found | 15 |
simonschoe/TransformationTransformer | null | ---
language:
- en
pipeline_tag: text-classification
tags:
widget:
- text: "And it was great to see how our Chinese team very much aware of that and of shifting all the resourcing to really tap into these opportunities."
example_title: "Examplary Transformation Sentence"
- text: "But we will continue to recruit even after that because we expect that the volumes are going to continue to grow."
example_title: "Examplary Non-Transformation Sentence"
- text: "So and again, we'll be disclosing the current taxes that are there in Guyana, along with that revenue adjustment."
example_title: "Examplary Non-Transformation Sentence"
---
# TransformationTransformer
**TransformationTransformer** is a fine-tuned [distilroberta](https://huggingface.co/distilroberta-base) model. It is trained and evaluated on 10,000 manually annotated sentences gleaned from the Q&A section of quarterly earnings conference calls. In particular, it was trained on sentences issued by firm executives to discriminate between sentences that allude to **business transformation** and those that discuss other topics. More details about the training procedure can be found [below](#model-training).
## Background
Context on the project.
## Usage
The model is intended to be used for sentence classification: it creates a contextual text representation from the input sentence and outputs a probability value. `LABEL_1` refers to a sentence that is predicted to contain transformation-related content (vice versa for `LABEL_0`). The query should consist of a single sentence.
## Usage (API)
```python
import json
import requests
API_TOKEN = "<TOKEN>"  # placeholder for your Hugging Face API token
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://api-inference.huggingface.co/models/simonschoe/TransformationTransformer"
def query(payload):
data = json.dumps(payload)
response = requests.request("POST", API_URL, headers=headers, data=data)
return json.loads(response.content.decode("utf-8"))
query({"inputs": "<insert-sentence-here>"})
```
## Usage (transformers)
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("simonschoe/TransformationTransformer")
model = AutoModelForSequenceClassification.from_pretrained("simonschoe/TransformationTransformer")
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer)
classifier('<insert-sentence-here>')
```
## Model Training
The model has been trained on text data stemming from earnings call transcripts. The data is restricted to a call's question-and-answer (Q&A) section and the remarks by firm executives. The data has been segmented into individual sentences using [`spacy`](https://spacy.io/).
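A minimal sketch of that segmentation step (the card does not say which spaCy pipeline was used; `en_core_web_sm` is an assumption):
```python
import spacy

# Assumes `python -m spacy download en_core_web_sm` has been run.
nlp = spacy.load("en_core_web_sm")
remark = ("We continue to shift resources to digital channels. "
          "Hiring will also continue as volumes grow.")
sentences = [sent.text for sent in nlp(remark).sents]
print(sentences)  # one string per sentence
```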
**Statistics of Training Data:**
- Labeled sentences: 10,000
- Data distribution: xxx
- Inter-coder agreement: xxx
The following code snippet presents the training pipeline:
<link to script>
| 2,951 |
clapika2010/soccer_finetuned | null | Entry not found | 15 |
cambridgeltl/sst_electra_base | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
cambridgeltl/guardian_news_electra_small | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
ScandinavianMrT/distilbert-IMDB-POS | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-IMDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-IMDB
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1905
- Accuracy: 0.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1928 | 1.0 | 2000 | 0.1905 | 0.9295 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,334 |
cambridgeltl/guardian_news_electra_base | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
acsxz/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
msamogh/autonlp-cai-out-of-scope-649919116 | [
"0",
"1"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 2.438401649319185
---
# What do the class labels mean?
0 - out of scope
1 - in scope
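A sketch of mapping a predicted class index back to these meanings (the index order is assumed to follow the list above; the logits are a stand-in for `model(**inputs).logits` from the usage example below):
```python
import torch

id2label = {0: "out of scope", 1: "in scope"}  # assumed from the list above

logits = torch.tensor([[-1.2, 0.8]])  # stand-in for model(**inputs).logits
pred = int(torch.argmax(logits, dim=-1))
print(id2label[pred])  # -> "in scope"
```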
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919116
- CO2 Emissions (in grams): 2.438401649319185
## Validation Metrics
- Loss: 0.5314930081367493
- Accuracy: 0.7526881720430108
- Precision: 0.8490566037735849
- Recall: 0.75
- AUC: 0.8515151515151514
- F1: 0.7964601769911505
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919116
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919116", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919116", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,230 |
claytonsamples/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
feiyangDu/bert-base-cased-0210-celential | null | 0 | |
doctorlan/autonlp-JD-bert-653619233 | [
"-1",
"1"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- doctorlan/autonlp-data-JD-bert
co2_eq_emissions: 5.919372931976555
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 653619233
- CO2 Emissions (in grams): 5.919372931976555
## Validation Metrics
- Loss: 0.15083155035972595
- Accuracy: 0.952650883627876
- Precision: 0.9631399317406143
- Recall: 0.9412941961307538
- AUC: 0.9828776962419389
- F1: 0.9520917678812415
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/doctorlan/autonlp-JD-bert-653619233
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("doctorlan/autonlp-JD-bert-653619233", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("doctorlan/autonlp-JD-bert-653619233", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,154 |
FuriouslyAsleep/markingMultiClass | [
"Nuclear",
"Null",
"Technical"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- FuriouslyAsleep/autotrain-data-markingClassifier
co2_eq_emissions: 0.5712537632313806
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 661319476
- CO2 Emissions (in grams): 0.5712537632313806
## Validation Metrics
- Loss: 0.859619140625
- Accuracy: 0.8
- Macro F1: 0.6
- Micro F1: 0.8000000000000002
- Weighted F1: 0.72
- Macro Precision: 0.5555555555555555
- Micro Precision: 0.8
- Weighted Precision: 0.6666666666666666
- Macro Recall: 0.6666666666666666
- Micro Recall: 0.8
- Weighted Recall: 0.8
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/FuriouslyAsleep/autonlp-markingClassifier-661319476
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("FuriouslyAsleep/autonlp-markingClassifier-661319476", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("FuriouslyAsleep/autonlp-markingClassifier-661319476", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,343 |
clisi2000/distilbert-base-uncased-distilled-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.2+cpu
- Datasets 1.18.4
- Tokenizers 0.10.3
| 1,087 |
BogdanKuloren/vi_classification_eqhub_roberta | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_101",
"LABEL_102",
"LABEL_103",
"LABEL_104",
"LABEL_105",
"LABEL_106",
"LABEL_107",
"LABEL_108",
"LABEL_109",
"LABEL_11",
"LABEL_110",
"LABEL_111",
"LABEL_112",
"LABEL_113",
"LABEL_114",
"LABEL_115",
"LABEL_116",
"LABEL_... | Entry not found | 15 |
YXHugging/autotrain-xlm-roberta-base-reviews-672119797 | [
"1",
"2",
"3",
"4",
"5"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1019.0229633198007
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119797
- CO2 Emissions (in grams): 1019.0229633198007
## Validation Metrics
- Loss: 0.9898674488067627
- Accuracy: 0.5688083333333334
- Macro F1: 0.5640966271895913
- Micro F1: 0.5688083333333334
- Weighted F1: 0.5640966271895913
- Macro Precision: 0.5673737438011194
- Micro Precision: 0.5688083333333334
- Weighted Precision: 0.5673737438011194
- Macro Recall: 0.5688083333333334
- Micro Recall: 0.5688083333333334
- Weighted Recall: 0.5688083333333334
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119797
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119797", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119797", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,457 |
YXHugging/autotrain-xlm-roberta-base-reviews-672119798 | [
"1",
"2",
"3",
"4",
"5"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1013.8825767332373
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119798
- CO2 Emissions (in grams): 1013.8825767332373
## Validation Metrics
- Loss: 0.9646632075309753
- Accuracy: 0.5789333333333333
- Macro F1: 0.5775792001871465
- Micro F1: 0.5789333333333333
- Weighted F1: 0.5775792001871465
- Macro Precision: 0.5829444191847423
- Micro Precision: 0.5789333333333333
- Weighted Precision: 0.5829444191847424
- Macro Recall: 0.5789333333333333
- Micro Recall: 0.5789333333333333
- Weighted Recall: 0.5789333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119798
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,457 |
YXHugging/autotrain-xlm-roberta-base-reviews-672119799 | [
"1",
"2",
"3",
"4",
"5"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1583.7188188958198
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119799
- CO2 Emissions (in grams): 1583.7188188958198
## Validation Metrics
- Loss: 0.9590993523597717
- Accuracy: 0.5827541666666667
- Macro F1: 0.5806748283026683
- Micro F1: 0.5827541666666667
- Weighted F1: 0.5806748283026683
- Macro Precision: 0.5834325027348383
- Micro Precision: 0.5827541666666667
- Weighted Precision: 0.5834325027348383
- Macro Recall: 0.5827541666666667
- Micro Recall: 0.5827541666666667
- Weighted Recall: 0.5827541666666667
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119799
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119799", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119799", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,457 |
YXHugging/autotrain-xlm-roberta-base-reviews-672119801 | [
"1",
"2",
"3",
"4",
"5"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 999.5670927087938
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119801
- CO2 Emissions (in grams): 999.5670927087938
## Validation Metrics
- Loss: 0.9767692685127258
- Accuracy: 0.5738333333333333
- Macro F1: 0.5698748846905103
- Micro F1: 0.5738333333333333
- Weighted F1: 0.5698748846905102
- Macro Precision: 0.5734242161804903
- Micro Precision: 0.5738333333333333
- Weighted Precision: 0.5734242161804902
- Macro Recall: 0.5738333333333333
- Micro Recall: 0.5738333333333333
- Weighted Recall: 0.5738333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119801
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119801", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119801", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,455 |
GioReg/ita1 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ita1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ita1
This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5892
- Accuracy: 0.776
- F1: 0.5912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,201 |
sophieb/electricidad-small-discriminator-finetuned-noticias-falsas-en-espaol-fakenews | null | Entry not found | 15 |
chnaaam/brokorli_sm | null | Entry not found | 15 |
cammiemw/bert-marco-hdct | [
"LABEL_0"
] | ---
license: cc-by-nc-4.0
---
| 33 |
Cheatham/xlm-roberta-large-finetuned-d1-002 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned-d12-002 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned-d12-003 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned-d12-004 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
jaygala24/distilroberta-base-finetuned-fake-news-english | [
"fake",
"real"
] | ---
license: apache-2.0
language: en
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilroberta-base-finetuned-fake-news-english
results: []
widget:
- text: "Wisconsin has not counted more votes than it has registered voters. This tweet is comparing the vote count from 2020 with the number of registered voters from 2018. When we take a look at Wisconsin’s current total of registered voters, we see that there is nothing fraudulent about the state’s count."
example_title: fake
- text: "Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, Obama was the first African-American president of the United States."
example_title: real
---
# distilroberta-base-finetuned-fake-news-english
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the [fake-and-real news](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0020
- Accuracy: 0.9997
- F1: 0.9997
- Precision: 0.9994
- Recall: 1.0
- Auc: 0.9997
## Intended uses & limitations
The model may not work well on articles longer than 512 tokens after preprocessing, as the model's context is restricted to a maximum of 512 tokens per sequence.
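A sketch of the usual workaround, truncating inputs to the model's window at tokenization time (the over-length article is a stand-in):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jaygala24/distilroberta-base-finetuned-fake-news-english")
enc = tokenizer(
    "A very long news article. " * 300,  # stand-in for an over-length input
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
print(enc["input_ids"].shape)  # at most 512 tokens are kept
```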
## Training and evaluation data
The [fake-and-real news](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) dataset contains a total of 44,898 annotated articles, 21,417 real and 23,481 fake. The dataset was split, with stratification, into train, validation, and test subsets in roughly a 40:30:30 proportion (see the table below). The model was fine-tuned on the train subset and evaluated on the validation and test subsets.
| Split | # examples |
|:----------:|:----------:|
| train | 17959 |
| validation | 13469 |
| test | 13470 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 224
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.251 | 0.36 | 200 | 0.0030 | 0.9996 | 0.9995 | 0.9995 | 0.9995 | 0.9996 |
| 0.0022 | 0.71 | 400 | 0.0012 | 0.9998 | 0.9998 | 0.9995 | 1.0 | 0.9998 |
| 0.0013 | 1.07 | 600 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0004 | 1.43 | 800 | 0.0015 | 0.9997 | 0.9997 | 0.9994 | 1.0 | 0.9997 |
| 0.0013 | 1.78 | 1000 | 0.0020 | 0.9997 | 0.9997 | 0.9994 | 1.0 | 0.9997 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
| 3,207 |
magitz/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9267965474109292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2235
- Accuracy: 0.9265
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8101 | 1.0 | 250 | 0.3177 | 0.9045 | 0.9010 |
| 0.2472 | 2.0 | 500 | 0.2235 | 0.9265 | 0.9268 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,800 |
bitsanlp/distilbert-base-uncased-distilbert-fakenews-detection | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-distilbert-fakenews-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilbert-fakenews-detection
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.0125 | 1.0 | 978 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 2.0 | 1956 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 3.0 | 2934 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,576 |
erikacardenas300/StartupClassifier | null | ---
language: en
datasets:
- Crunchbase
---
# Company Classifier
This fine-tuned DistilBERT model classifies companies from their descriptions as either finance or biotech. The demo can be found on my profile under Spaces (https://huggingface.co/erikacardenas300).
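A minimal inference sketch (assumptions: the checkpoint is publicly downloadable and exposes finance/biotech labels; the example description is illustrative):
```python
from transformers import pipeline

# Sketch only -- the model id is taken from this card.
clf = pipeline("text-classification", model="erikacardenas300/StartupClassifier")
print(clf("We develop monoclonal antibody therapeutics for rare diseases."))
```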
I hope you enjoy it! | 338 |
Cheatham/xlm-roberta-large-finetuned-d12-005 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
horychtom/czech_media_bias_classifier | null | ---
inference: false
language: "cs"
tags:
- Czech
---
## Czech Media Bias Classifier
A FERNET-C5 model fine-tuned to perform a binary classification task on Czech media bias detection. | 186 |
Chhavnish/distilbert-base-uncased-finetuned-cola | null | Entry not found | 15 |
gagan3012/fake-news-fatima-fellowship | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fake-news-fatima-fellowship
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake-news-fatima-fellowship
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.008 | 1.0 | 2514 | 0.0011 | 0.9996 | 0.9996 |
| 0.0004 | 2.0 | 5028 | 0.0000 | 1.0 | 1.0 |
| 0.0003 | 3.0 | 7542 | 0.0000 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,533 |
dapang/distilbert-base-uncased-finetuned-moral-ctx-action-conseq | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-moral-ctx-action-conseq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-moral-ctx-action-conseq
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1111
- Accuracy: 0.9676
- F1: 0.9676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.989502318502869e-05
- train_batch_size: 2000
- eval_batch_size: 2000
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 10 | 0.1569 | 0.9472 | 0.9472 |
| No log | 2.0 | 20 | 0.1171 | 0.9636 | 0.9636 |
| No log | 3.0 | 30 | 0.1164 | 0.9664 | 0.9664 |
| No log | 4.0 | 40 | 0.1117 | 0.9672 | 0.9672 |
| No log | 5.0 | 50 | 0.1111 | 0.9676 | 0.9676 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.0
| 1,768 |
GioReg/AlbertoBertnews | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: AlbertoBertnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AlbertoBertnews
This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1382
- Accuracy: 0.9640
- F1: 0.9635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,224 |
Graphcore/hubert-base-common-language | [
"Arabic",
"Basque",
"Breton",
"Catalan",
"Chinese_China",
"Chinese_Hongkong",
"Chinese_Taiwan",
"Chuvash",
"Czech",
"Dhivehi",
"Dutch",
"English",
"Esperanto",
"Estonian",
"French",
"Frisian",
"Georgian",
"German",
"Greek",
"Hakha_Chin",
"Indonesian",
"Interlingua",
"Ital... | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: hubert-base-common-language
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-common-language
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3477
- Accuracy: 0.7317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 4
- seed: 0
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 10.0
- training precision: Mixed Precision
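The batch-size bookkeeping above follows the usual rule, per-device batch size times gradient-accumulation steps times number of replicas, so the listed totals imply 4 IPU replicas (an inference from the numbers, not an explicit statement on the card):
```python
per_device = 1     # train_batch_size above
grad_accum = 32    # gradient_accumulation_steps above
total_train = 128  # total_train_batch_size above

replicas = total_train // (per_device * grad_accum)
print(replicas)  # -> 4 implied IPU replicas
```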
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,432 |