modelId stringlengths 6 107 | label list | readme stringlengths 0 56.2k | readme_len int64 0 56.2k |
|---|---|---|---|
accelotron/xlm-roberta-finetune-muserc | null | XLM-RoBERTa-base fine-tuned for the MuSeRC task. | 44 |
MonaA/glue_sst_classifier_2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier_2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier_2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
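As a quick sanity check (assuming the standard 872-example SST-2 validation split, which the card does not state explicitly), the long accuracy decimal corresponds to an exact count of correct predictions:

```python
from fractions import Fraction

# 786 correct out of 872 (assumed SST-2 dev size) reproduces the reported
# accuracy of 0.9013761467889908 in the metadata above.
acc = Fraction(786, 872)
print(float(acc))
```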
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
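A rough sketch of what the `lr_scheduler_warmup_ratio: 0.1` above works out to in steps. The example assumes SST-2's ~67,349 training examples (not stated in the card) and uses a simple floor for the warmup count, which may differ slightly from the trainer's internal rounding:

```python
import math

# With train_batch_size=128 and num_epochs=1.0, one epoch is ~527 optimizer
# steps -- consistent with the step/epoch columns in the table above
# (step 500 at epoch 0.95).
steps_per_epoch = math.ceil(67349 / 128)
total_steps = steps_per_epoch * 1

# lr_scheduler_warmup_ratio = 0.1 -> roughly the first 52 steps ramp the LR up.
warmup_steps = int(0.1 * total_steps)
print(steps_per_epoch, warmup_steps)
```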
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,997 |
manueltonneau/bert-twitter-en-job-search | null | ---
language: en
widget:
- text: "Job hunting!"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Search (1), else (0)
- country: US
- language: English
- architecture: BERT base
## Model description
This model is a version of `DeepPavlov/bert-base-cased-conversational` finetuned to recognize English tweets where a user mentions that they are currently looking for a job. It was trained on English tweets from US-based users. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user is currently looking for a job (label=1)
- the negative class referring to all other tweets (label=0)
## Resources
The dataset of English tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). | 1,089 |
Hate-speech-CNERG/urdu-codemixed-abusive-MuRIL | null | ---
language: ur-en
license: afl-3.0
---
This model is used for detecting **abusive speech** in **code-mixed Urdu**. It is fine-tuned from the MuRIL model using a code-mixed Urdu abusive speech dataset.
The model was trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive)
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
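As a small illustrative sketch (the helper name is ours, not from the released code), the raw `LABEL_*` outputs above can be mapped to readable class names before reporting predictions:

```python
# Hypothetical mapping of the model's output labels to the class names
# documented above (LABEL_0 -> Normal, LABEL_1 -> Abusive).
label_map = {"LABEL_0": "Normal", "LABEL_1": "Abusive"}

def to_class_name(label: str) -> str:
    """Translate a raw pipeline label into its human-readable class name."""
    return label_map[label]

print(to_class_name("LABEL_1"))
```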
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ | 979 |
peringe/finetuning-sentiment-model-3000-samples-pi | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples-pi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8664495114006515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-pi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3344
- Accuracy: 0.8633
- F1: 0.8664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,527 |
manueltonneau/bert-twitter-pt-job-search | null | ---
language: pt
widget:
- text: "Preciso de um emprego"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Search (1), else (0)
- country: BR
- language: Portuguese
- architecture: BERT base
## Model description
This model is a version of `neuralmind/bert-base-portuguese-cased` finetuned to recognize Portuguese tweets mentioning that the user is currently looking for a job. It was trained on Portuguese tweets from users based in Brazil. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that the user is looking for a job (label=1)
- the negative class referring to all other tweets (label=0)
## Resources
The dataset of Portuguese tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). | 1,099 |
jg/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9235933186731068
- name: Accuracy
type: accuracy
value: 0.9235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2199
- F1: 0.9236
- Accuracy: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.8072 | 1.0 | 250 | 0.3153 | 0.9023 | 0.905 |
| 0.2442 | 2.0 | 500 | 0.2199 | 0.9236 | 0.9235 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,806 |
DioLiu/distilbert-base-uncased-finetuned-sst2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8967889908256881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5963
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.247 | 1.0 | 1404 | 0.3629 | 0.8865 |
| 0.1532 | 2.0 | 2808 | 0.3945 | 0.8979 |
| 0.0981 | 3.0 | 4212 | 0.4206 | 0.9025 |
| 0.0468 | 4.0 | 5616 | 0.5358 | 0.9014 |
| 0.0313 | 5.0 | 7020 | 0.5963 | 0.8968 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,867 |
Lauler/sentiment-classifier | [
"Negative",
"Positive"
] | ## Sentiment classifier
A sentiment classifier for Swedish, trained on the ScandiSent dataset. | 88 |
ml4pubmed/bluebert-pubmed-uncased-L-12-H-768-A-12_pub_section | [
"BACKGROUND",
"CONCLUSIONS",
"METHODS",
"OBJECTIVE",
"RESULTS"
] | ---
language:
- en
datasets:
- pubmed
metrics:
- f1
pipeline_tag: text-classification
widget:
- text: "many pathogenic processes and diseases are the result of an erroneous activation of the complement cascade and a number of inhibitors of complement have thus been examined for anti-inflammatory actions."
example_title: "background example"
- text: "a total of 192 mi patients and 140 control persons were included."
example_title: "methods example"
- text: "mi patients had 18 % higher plasma levels of map44 (iqr 11-25 %) as compared to the healthy control group (p < 0. 001.)"
example_title: "results example"
- text: "the finding that a brief cb group intervention delivered by real-world providers significantly reduced mdd onset relative to both brochure control and bibliotherapy is very encouraging, although effects on continuous outcome measures were small or nonsignificant and approximately half the magnitude of those found in efficacy research, potentially because the present sample reported lower initial depression."
example_title: "conclusions example"
- text: "in order to understand and update the prevalence of myopia in taiwan, a nationwide survey was performed in 1995."
example_title: "objective example"
---
# bluebert-pubmed-uncased-L-12-H-768-A-12_pub_section
- original model file name: textclassifer_bluebert_pubmed_uncased_L-12_H-768_A-12_pubmed_20k
- This is a fine-tuned checkpoint of `bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12` for document section text classification
- possible document section classes are: BACKGROUND, CONCLUSIONS, METHODS, OBJECTIVE, RESULTS
## metadata
### training_metrics
- val_accuracy: 0.8367536067962646
- val_matthewscorrcoef: 0.779039740562439
- val_f1score: 0.834040641784668
- val_cross_entropy: 0.5102494359016418
- epoch: 18.0
- train_accuracy_step: 0.7890625
- train_matthewscorrcoef_step: 0.7113237380981445
- train_f1score_step: 0.7884777784347534
- train_cross_entropy_step: 0.5615811944007874
- train_accuracy_epoch: 0.7955580949783325
- train_matthewscorrcoef_epoch: 0.7233519554138184
- train_f1score_epoch: 0.7916122078895569
- train_cross_entropy_epoch: 0.6050205230712891
- test_accuracy: 0.8310602307319641
- test_matthewscorrcoef: 0.7718994617462158
- test_f1score: 0.8283351063728333
- test_cross_entropy: 0.5230290293693542
- date_run: Apr-22-2022_t-05
- huggingface_tag: bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12
| 2,440 |
iis2009002/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.925904463781861
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.926
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.827 | 1.0 | 250 | 0.3060 | 0.9075 | 0.9044 |
| 0.2452 | 2.0 | 500 | 0.2133 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,804 |
jgriffi/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9224581940083942
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8094 | 1.0 | 250 | 0.3034 | 0.905 | 0.9031 |
| 0.2416 | 2.0 | 500 | 0.2204 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
jg/distilbert-base-uncased-finetuned-spam | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-spam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-spam
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0325
- F1: 0.9910
- Accuracy: 0.9910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.1523 | 1.0 | 79 | 0.0369 | 0.9892 | 0.9892 |
| 0.0303 | 2.0 | 158 | 0.0325 | 0.9910 | 0.9910 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,498 |
Jeevesh8/bert_ft_cola-0 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-3 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-9 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-27 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-35 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-38 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-42 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-46 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-63 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-64 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-65 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-68 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-71 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-80 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-87 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-91 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-92 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-94 | null | Entry not found | 15 |
Jeevesh8/bert_ft_cola-96 | null | Entry not found | 15 |
moghis/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240615969601907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7828 | 1.0 | 250 | 0.2936 | 0.909 | 0.9070 |
| 0.2344 | 2.0 | 500 | 0.2141 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,804 |
sismetanin/rubert-rusentitweet | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] |
|  | precision | recall | f1-score | support |
|---|---|---|---|---|
| negative | 0.681957 | 0.675758 | 0.678843 | 660 |
| neutral | 0.707845 | 0.735019 | 0.721176 | 1068 |
| positive | 0.596591 | 0.652174 | 0.623145 | 483 |
| skip | 0.583062 | 0.485095 | 0.529586 | 369 |
| speech | 0.827160 | 0.676768 | 0.744444 | 99 |
| accuracy |  |  | 0.668906 | 2679 |
| macro avg | 0.679323 | 0.644963 | 0.659439 | 2679 |
| weighted avg | 0.668631 | 0.668906 | 0.667543 | 2679 |
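A small sketch showing that the macro-average F1 in the report above is simply the unweighted mean of the five per-class F1 scores (negative, neutral, positive, skip, speech):

```python
# Per-class F1 scores from the classification report above.
f1_scores = [0.678843, 0.721176, 0.623145, 0.529586, 0.744444]

# Macro average = unweighted mean across classes, regardless of support.
macro_f1 = sum(f1_scores) / len(f1_scores)
print(round(macro_f1, 6))
```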
3 runs (averages):
- Avg macro Precision: 0.6747772329026972
- Avg macro Recall: 0.6436866944877477
- Avg macro F1: 0.654867154097531
- Avg weighted F1: 0.6649503767906553 | 636 |
DioLiu/distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0284
- Accuracy: 0.9971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0166 | 1.0 | 7783 | 0.0135 | 0.9965 |
| 0.0091 | 2.0 | 15566 | 0.0172 | 0.9968 |
| 0.0059 | 3.0 | 23349 | 0.0223 | 0.9968 |
| 0.0 | 4.0 | 31132 | 0.0332 | 0.9962 |
| 0.0001 | 5.0 | 38915 | 0.0284 | 0.9971 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,687 |
Jeevesh8/6ep_bert_ft_cola-9 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-31 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-33 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-35 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-37 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-45 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-61 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-65 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-82 | null | Entry not found | 15 |
aliosm/sha3bor-rhyme-detector-arabertv2-base | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | ---
language: ar
license: mit
widget:
- text: "إن العيون التي في طرفها حور [شطر] قتلننا ثم لم يحيين قتلانا"
- text: "إذا ما فعلت الخير ضوعف شرهم [شطر] وكل إناء بالذي فيه ينضح"
- text: "واحر قلباه ممن قلبه شبم [شطر] ومن بجسمي وحالي عنده سقم"
---
| 245 |
IMSyPP/hate_speech_targets_nl | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
language:
- nl
license: mit
---
# Hate Speech Target Classifier for Social Media Content in Dutch
A monolingual model for hate speech target classification of social media content in Dutch. The model was trained on 20000 social media posts (YouTube, Twitter, Facebook) and tested on an independent test set of 2000 posts. It is based on the pre-trained language model [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased).
## Tokenizer
During training the text was preprocessed using the Distilbert tokenizer. We suggest the same tokenizer is used for inference.
## Model output
The model classifies each input into one of twelve distinct classes:
* 0 - HOMOPHOBIA
* 1 - OTHER
* 2 - RELIGION
* 3 - ANTISEMITISM
* 4 - IDEOLOGY
* 5 - MIGRANTS
* 6 - POLITICS
* 7 - RACISM
* 8 - MEDIA
* 9 - ISLAMOPHOBIA
* 10 - INDIVIDUAL
* 11 - SEXISM | 884 |
juliensimon/sagemaker-distilbert-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2402
- Accuracy: 0.919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9163 | 1.0 | 500 | 0.2402 | 0.919 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,666 |
Caesarcc/bertimbau-finetune-br-news | null | ---
license: mit
---
| 21 |
CEBaB/lstm.CEBaB.absa.exclusive.seed_42 | [
"0",
"1",
"2"
] | Entry not found | 15 |
vamossyd/bert-base-uncased-emotion | [
"Neutral",
"Happy",
"Sad",
"Anger",
"Disgust",
"Surprise",
"Fear"
] | ---
language:
- en
tags:
- text-classification
- emotion
- pytorch
license: mit
datasets:
- emotion
metrics:
- accuracy
- precision
- recall
- f1
---
# bert-base-uncased-emotion
## Model description
`bert-base-uncased` finetuned on the unify-emotion-datasets (https://github.com/sarnthil/unify-emotion-datasets) [~250K texts with 7 labels -- neutral, happy, sad, anger, disgust, surprise, fear], then transferred to
a small sample of 10K hand-tagged StockTwits messages. Optimized for extracting emotions from financial social media, such as StockTwits.
Sequence length 64, learning rate 2e-5, batch size 128, 8 epochs.
For more details, please visit https://github.com/dvamossy/EmTract.
## Training data
Data came from https://github.com/sarnthil/unify-emotion-datasets.
| 782 |
aakorolyova/outcome_similarity | null | <h1>Model description</h1>
This is a fine-tuned BioBERT model for text pair classification, namely for identifying pairs of clinical trial outcome mentions that refer to the same outcome (e.g. "overall survival in patients with oesophageal squamous cell carcinoma and PD-L1 combined positive score (CPS) of 10 or more" and "overall survival" can be considered to refer to the same outcome, while "overall survival" and "progression-free survival" refer to different outcomes).
This is the second version of the model; the original model development was reported in:
Anna Koroleva, Patrick Paroubek. Measuring semantic similarity of clinical trial outcomes using deep pre-trained language representations. Journal of Biomedical Informatics – X, 2019 https://www.sciencedirect.com/science/article/pii/S2590177X19300575
The original work was conducted within the scope of the Assisted authoring for avoiding inadequate claims in scientific reporting PhD project of the Methods for Research on Research (MiRoR, http://miror-ejd.eu/) program.
Model creator: Anna Koroleva
<h1>Intended uses & limitations</h1>
The model was originally intended to be used as a part of a spin (unjustified presentation of trial results) detection pipeline in articles reporting randomised controlled trials (see Anna Koroleva, Sanjay Kamath, Patrick MM Bossuyt, Patrick Paroubek. DeSpin: a prototype system for detecting spin in biomedical publications. Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing. https://aclanthology.org/2020.bionlp-1.5/). It can be used for any task requiring identification of pairs of outcome mentions referring to the same outcome.
The main limitation is that the model was trained on a fairly small sample of data annotated by a single annotator. Annotating more data or involving more annotators was not possible within the PhD project.
<h1>How to use</h1>
The model should be used with the BioBERT tokeniser. Sample code for getting model predictions is below:
```
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForSequenceClassification.from_pretrained('aakorolyova/outcome_similarity')

out1 = 'overall survival'
out2 = 'overall survival in patients with oesophageal squamous cell carcinoma and PD-L1 combined positive score (CPS) of 10 or more'

tokenized_input = tokenizer(out1, out2, padding="max_length", truncation=True, return_tensors='pt')
# The model outputs logits over the two classes; argmax gives the prediction.
logits = model(**tokenized_input)['logits']
prediction = np.argmax(logits.detach().numpy(), axis=1)
print(prediction)
```
Some more useful functions can be found in our GitHub repository: https://github.com/aakorolyova/DeSpin-2.0
<h1>Training data</h1>
Training data can be found in https://github.com/aakorolyova/DeSpin-2.0/tree/main/data/Outcome_similarity
<h1>Training procedure</h1>
The model was fine-tuned using Huggingface Trainer API. Training scripts can be found in https://github.com/aakorolyova/DeSpin-2.0
<h1>Evaluation</h1>
Precision: 86.67%
Recall: 92.86%
F1: 89.66%
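A quick check that the evaluation numbers above are internally consistent: F1 is the harmonic mean of precision and recall, and the stated values reproduce it.

```python
# Precision and recall as reported above (86.67% and 92.86%).
p, r = 0.8667, 0.9286

# F1 = harmonic mean of precision and recall.
f1 = 2 * p * r / (p + r)
print(round(f1, 4))
```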
| 3,216 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-3 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-73 | null | Entry not found | 15 |
d4riushbahrami/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
connectivity/feather_berts_20 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_25 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_28 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_32 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_40 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_43 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_63 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_87 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/bert_ft_qqp-8 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-40 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-38 | null | Entry not found | 15 |
connectivity/cola_6ep_ft-46 | null | Entry not found | 15 |
connectivity/bert_ft_qqp-91 | null | Entry not found | 15 |
pkumc/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5175
- eval_matthews_correlation: 0.4847
- eval_runtime: 31.1926
- eval_samples_per_second: 33.437
- eval_steps_per_second: 2.116
- epoch: 2.01
- step: 1073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,305 |
Yah216/Arabic_poem_meter_classification | [
"البسيط",
"الخفيف",
"الدوبيت",
"الرجز",
"الرمل",
"السريع",
"السلسلة",
"الطويل",
"الكامل",
"المتدارك",
"المتقارب",
"المجتث",
"المديد",
"المضارع",
"المقتضب",
"المنسرح",
"المواليا",
"الهزج",
"الوافر",
"شعر التفعيلة",
"شعر حر",
"عامي",
"موشح"
] | ---
language: ar
widget:
- text: "قفا نبك من ذِكرى حبيب ومنزلِ بسِقطِ اللِّوى بينَ الدَّخول فحَوْملِ"
- text: "سَلو قَلبي غَداةَ سَلا وَثابا لَعَلَّ عَلى الجَمالِ لَهُ عِتابا"
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 913229914
- CO2 Emissions (in grams): 1.8892280988467902
## Validation Metrics
- Loss: 1.0592747926712036
- Accuracy: 0.6535535147098981
- Macro F1: 0.46508274468173677
- Micro F1: 0.6535535147098981
- Weighted F1: 0.6452975497424681
- Macro Precision: 0.6288501119526966
- Micro Precision: 0.6535535147098981
- Weighted Precision: 0.6818087199275457
- Macro Recall: 0.3910156950920188
- Micro Recall: 0.6535535147098981
- Weighted Recall: 0.6535535147098981
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Yah216/autotrain-poem_meter_classification-913229914
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Yah216/autotrain-poem_meter_classification-913229914", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Yah216/autotrain-poem_meter_classification-913229914", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
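# Hedged, illustrative extra step (not in the original card): turn logits
# into a predicted class id with softmax + argmax. Toy numbers stand in for
# outputs.logits; the real id-to-meter mapping is model.config.id2label.
import math
toy_logits = [0.1, 2.3, -0.5]
exps = [math.exp(x) for x in toy_logits]
probs = [e / sum(exps) for e in exps]
pred_id = max(range(len(toy_logits)), key=probs.__getitem__)
print(pred_id)  # 1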
``` | 1,462 |
febreze/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
erfangc/test1 | null | Entry not found | 15 |
YeRyeongLee/xlm-roberta-base-finetuned-removed-0530 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-removed-0530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-removed-0530
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9944
- Accuracy: 0.8717
- F1: 0.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 3180 | 0.6390 | 0.7899 | 0.7852 |
| No log | 2.0 | 6360 | 0.5597 | 0.8223 | 0.8230 |
| No log | 3.0 | 9540 | 0.5177 | 0.8462 | 0.8471 |
| No log | 4.0 | 12720 | 0.5813 | 0.8642 | 0.8647 |
| No log | 5.0 | 15900 | 0.7324 | 0.8557 | 0.8568 |
| No log | 6.0 | 19080 | 0.7589 | 0.8626 | 0.8634 |
| No log | 7.0 | 22260 | 0.7958 | 0.8752 | 0.8751 |
| 0.3923 | 8.0 | 25440 | 0.9177 | 0.8651 | 0.8653 |
| 0.3923 | 9.0 | 28620 | 1.0188 | 0.8673 | 0.8671 |
| 0.3923 | 10.0 | 31800 | 0.9944 | 0.8717 | 0.8719 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
| 2,086 |
ddobokki/electra-small-sts-cross-encoder | [
"LABEL_0"
] | ---
language:
- ko
tags:
- sentence_transformers
- cross_encoder
---
# Example
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('ddobokki/electra-small-sts-cross-encoder')
model.predict(["그녀는 행복해서 웃었다.", "그녀는 웃겨서 눈물이 났다."])
-> 0.8206561
```
# Dataset
- KorSTS
- Train
- Test
- KLUE STS
- Train
- Test
# Performance
| Dataset | Pearson corr.|Spearman corr.|
|--|--|--|
| KorSTS(test) + KLUE STS(test) | 0.8528 | 0.8504 |
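For context, a sketch (not from the original card) of how a Pearson correlation like the one above is computed between model scores and gold STS labels; `preds` and `gold` here are toy stand-ins, not real evaluation data:
```python
import math

# Pearson correlation, as commonly used to score STS predictions.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy stand-ins for cross-encoder scores and gold similarity labels.
preds = [0.82, 0.10, 0.55, 0.91, 0.33]
gold = [0.80, 0.05, 0.60, 0.95, 0.30]
print(round(pearson(preds, gold), 4))
```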
# TODO
Using KLUE 1.1 train, dev data
| 501 |
etch/distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9059633027522935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3950
- Accuracy: 0.9060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0818 | 1.0 | 4210 | 0.3950 | 0.9060 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,715 |
NorrisPau/my-finetuned-bert | null | Entry not found | 15 |
VictorZhu/results | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1428 | 1.0 | 510 | 0.1347 |
| 0.0985 | 2.0 | 1020 | 0.1189 |
| 0.0763 | 3.0 | 1530 | 0.1172 |
| 0.0646 | 4.0 | 2040 | 0.1194 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,399 |
Jeevesh8/lecun_feather_berts-57 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
bondi/bert-semaphore-prediction-w2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- generated_from_trainer
model-index:
- name: bert-semaphore-prediction-w2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-semaphore-prediction-w2
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| 935 |
G-WOO/200mil-CodeBERTa-small-v1 | null | Entry not found | 15 |
nmcahill/mbti-classifier | [
"INFJ/ENFJ",
"INFP/ENFP",
"INTJ/ENTJ",
"INTP/ENTP",
"ISFJ/ESFJ",
"ISFP/ESFP",
"ISTJ/ESTJ",
"ISTP/ESTP"
] | ---
license: afl-3.0
---
| 25 |
HrayrMSint/distilbert-base-uncased-distilled-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9429032258064516
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3209
- Accuracy: 0.9429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.0228 | 1.0 | 318 | 2.2545 | 0.7548 |
| 1.7605 | 2.0 | 636 | 1.2040 | 0.8513 |
| 0.959 | 3.0 | 954 | 0.6910 | 0.9123 |
| 0.5707 | 4.0 | 1272 | 0.4821 | 0.9294 |
| 0.3877 | 5.0 | 1590 | 0.3890 | 0.9394 |
| 0.3025 | 6.0 | 1908 | 0.3476 | 0.9410 |
| 0.258 | 7.0 | 2226 | 0.3264 | 0.9432 |
| 0.2384 | 8.0 | 2544 | 0.3209 | 0.9429 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0
- Datasets 2.2.2
- Tokenizers 0.10.3
| 2,069 |
ghpkishore/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9285439912301902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.9285
- F1: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8381 | 1.0 | 250 | 0.3165 | 0.9075 | 0.9040 |
| 0.2524 | 2.0 | 500 | 0.2183 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,806 |
mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
This model is a fine-tuned version of [DeepPavlov/distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5908
- Accuracy: 0.8653
- F1: 0.8656
- Precision: 0.8665
- Recall: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9172 | 1.0 | 69 | 0.5124 | 0.8246 | 0.8220 | 0.8271 | 0.8246 |
| 0.4709 | 2.0 | 138 | 0.4279 | 0.8528 | 0.8505 | 0.8588 | 0.8528 |
| 0.3194 | 3.0 | 207 | 0.3770 | 0.8737 | 0.8727 | 0.8740 | 0.8737 |
| 0.2459 | 4.0 | 276 | 0.3951 | 0.8685 | 0.8682 | 0.8692 | 0.8685 |
| 0.1824 | 5.0 | 345 | 0.4005 | 0.8831 | 0.8834 | 0.8841 | 0.8831 |
| 0.1515 | 6.0 | 414 | 0.4356 | 0.8800 | 0.8797 | 0.8801 | 0.8800 |
| 0.1274 | 7.0 | 483 | 0.4642 | 0.8727 | 0.8726 | 0.8731 | 0.8727 |
| 0.0833 | 8.0 | 552 | 0.5226 | 0.8633 | 0.8627 | 0.8631 | 0.8633 |
| 0.073 | 9.0 | 621 | 0.5327 | 0.8695 | 0.8686 | 0.8692 | 0.8695 |
| 0.0575 | 10.0 | 690 | 0.5908 | 0.8653 | 0.8656 | 0.8665 | 0.8653 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 2,492 |
facebook/roberta-hate-speech-dynabench-r2-target | null | ---
language: en
---
# LFTW R2 Target
The R2 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! | 570 |
Lindeberg/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4496664370323995
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4949
- Matthews Correlation: 0.4497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5231 | 1.0 | 535 | 0.4949 | 0.4497 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,703 |
Jeevesh8/std_pnt_04_feather_berts-19 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-90 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-86 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-37 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-63 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-46 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-92 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-89 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-36 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-77 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-27 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-49 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-69 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-70 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Jeevesh8/std_pnt_04_feather_berts-32 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |