| modelId | label list | readme | readme_len |
|---|---|---|---|
Ketzu/koelectra-sts-v0.4 | [
"LABEL_0"
] | ---
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: koelectra-sts-v0.4
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Spearmanr
type: spearmanr
value: 0.9286505242442783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-sts-v0.4
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3368
- Pearson: 0.9303
- Spearmanr: 0.9287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
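For reference, these settings correspond one-to-one to a standard `transformers` `TrainingArguments` configuration. A minimal sketch, assuming a hypothetical output directory (model and dataset loading are not documented in this card):
```python
# Minimal sketch of TrainingArguments matching the hyperparameters above.
# "koelectra-sts-v0.4" is a hypothetical output path; the Adam settings
# listed above (betas=(0.9, 0.999), epsilon=1e-8) are the Trainer defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="koelectra-sts-v0.4",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```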
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.0345 | 1.0 | 730 | 0.3368 | 0.9303 | 0.9287 |
| 0.0343 | 2.0 | 1460 | 0.3368 | 0.9303 | 0.9287 |
| 0.0337 | 3.0 | 2190 | 0.3368 | 0.9303 | 0.9287 |
| 0.0345 | 4.0 | 2920 | 0.3368 | 0.9303 | 0.9287 |
| 0.0347 | 5.0 | 3650 | 0.3368 | 0.9303 | 0.9287 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,822 |
M-FAC/bert-mini-finetuned-qnli | null | # BERT-mini model finetuned with M-FAC
This model is finetuned on the QNLI dataset with the state-of-the-art second-order optimizer M-FAC.
See the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For a fair comparison against the default Adam baseline, we finetune the model in the framework described at [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer for M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on the QNLI validation set:
```bash
accuracy = 83.90
```
Mean and standard deviation over 5 runs on the QNLI validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 83.85 ± 0.10 |
| M-FAC | 83.70 ± 0.13 |
Results can be reproduced by adding the M-FAC optimizer code to [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-mini \
--task_name qnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
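Concretely, the optimizer swap amounts to constructing M-FAC instead of Adam and handing it to the `Trainer`. A minimal sketch, assuming an `MFAC` class with the constructor arguments from the script above (the `mfac` import path is hypothetical; the actual scripts patch `run_glue.py` instead):
```python
# Minimal sketch of the optimizer swap, assuming an `MFAC` optimizer class
# (a torch.optim.Optimizer) taken from https://github.com/IST-DASLab/M-FAC;
# the `mfac` import path below is hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from mfac import MFAC  # hypothetical: the M-FAC optimizer code goes here

model_name = "prajjwal1/bert-mini"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# QNLI pairs a question with a context sentence; labels are entailment / not.
qnli = load_dataset("glue", "qnli").map(
    lambda batch: tokenizer(batch["question"], batch["sentence"],
                            truncation=True, max_length=128),
    batched=True)

# Same hyperparameters as the bash script above.
optimizer = MFAC(model.parameters(), lr=1e-4, num_grads=1024, damp=1e-6)
args = TrainingArguments(output_dir="out_dir", seed=8276,
                         per_device_train_batch_size=32, num_train_epochs=5)

trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=qnli["train"], eval_dataset=qnli["validation"],
                  optimizers=(optimizer, None))  # None -> default linear LR schedule
trainer.train()
```
Apart from the optimizer, this mirrors the Adam baseline setup.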
We believe these results could be improved with modest tuning of the hyperparameters `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of a fair comparison and a robust default setup, we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={34},
year={2021}
}
```
| 2,730 |
M-FAC/bert-tiny-finetuned-stsb | [
"LABEL_0"
] | # BERT-tiny model finetuned with M-FAC
This model is finetuned on the STS-B dataset with the state-of-the-art second-order optimizer M-FAC.
See the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For a fair comparison against the default Adam baseline, we finetune the model in the framework described at [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer for M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following scores on the STS-B validation set:
```bash
pearson = 80.66
spearman = 81.13
```
Mean and standard deviation over 5 runs on the STS-B validation set:
| | Pearson | Spearman |
|:----:|:-----------:|:----------:|
| Adam | 64.39 ± 5.02 | 66.52 ± 5.67 |
| M-FAC | 80.15 ± 0.52 | 80.62 ± 0.43 |
Results can be reproduced by adding the M-FAC optimizer code to [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 7 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name stsb \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of the hyperparameters `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of a fair comparison and a robust default setup, we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
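For a quick sanity check, the released checkpoint can also be queried directly. A minimal inference sketch (STS-B is a regression task, so the head outputs a single similarity score trained against the 0-5 STS-B label scale; the example sentences are illustrative):
```python
# Minimal inference sketch for the released checkpoint. STS-B is a regression
# task, so the classification head emits one logit, interpretable as a
# similarity score on roughly the 0-5 STS-B scale.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "M-FAC/bert-tiny-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A man is playing a guitar.",
                   "A person is playing a guitar.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"similarity: {score:.2f}")  # higher = more similar sentence pair
```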
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={34},
year={2021}
}
```
| 2,797 |
M47Labs/italian_news_classification_headlines | [
"arts, culture, entertainment and media",
"conflict, war and peace",
"crime, law and justice",
"disaster, accident and emergency incident",
"economy, business and finance",
"enviroment",
"health",
"labour",
"lifestyle and leisure",
"science and technology",
"society",
"sport",
"weather"
] | Entry not found | 15 |
Maha/OGBV-gender-indicbert-ta-hasoc21_codemix | null | Entry not found | 15 |
MarshallCharles/bartlargemnli | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
MiBo/SADistilGPT2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
MiBo/SAGPT2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
MickyMike/0-GPT2SP-clover | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/0-GPT2SP-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/0-GPT2SP-talenddataquality | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/0-GPT2SP-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/0-GPT2SP-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/00-GPT2SP-mulestudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/000-GPT2SP-mule-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/000-GPT2SP-mulestudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-moodle | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/11-GPT2SP-mesos-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/11-GPT2SP-usergrid-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/111-GPT2SP-appceleratorstudio-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/111-GPT2SP-clover-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/111-GPT2SP-mulestudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-bamboo | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-clover | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-jirasoftware | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-talendesb | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/22-GPT2SP-mesos-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/22-GPT2SP-usergrid-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/222-GPT2SP-clover-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/222-GPT2SP-mulestudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/222-GPT2SP-talenddataquality-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/222-GPT2SP-talendesb-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-bamboo | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-clover | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-springxd | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-talendesb | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/66-GPT2SP-appceleratorstudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/66-GPT2SP-mesos-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/66-GPT2SP-mulestudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/666-GPT2SP-talenddataquality-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-springxd | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/77-GPT2SP-mesos-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/77-GPT2SP-mule-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/77-GPT2SP-usergrid-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/777-GPT2SP-appceleratorstudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/777-GPT2SP-appceleratorstudio-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/777-GPT2SP-clover-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/777-GPT2SP-talenddataquality-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/777-GPT2SP-talendesb-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MisbaHF/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.54109909504615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7134
- Matthews Correlation: 0.5411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5294 | 1.0 | 535 | 0.5082 | 0.4183 |
| 0.3483 | 2.0 | 1070 | 0.4969 | 0.5259 |
| 0.2355 | 3.0 | 1605 | 0.6260 | 0.5065 |
| 0.1733 | 4.0 | 2140 | 0.7134 | 0.5411 |
| 0.1238 | 5.0 | 2675 | 0.8516 | 0.5291 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,998 |
MohammadABH/twitter-roberta-base-dec2021_rbam_fine_tuned | [
"attack",
"neutral",
"support"
] | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-roberta-base-dec2021_rbam_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-dec2021_rbam_fine_tuned
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8295
- Accuracy: 0.6777
- Precision: 0.6743
- Recall: 0.6777
- F1: 0.6753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8455 | 1.0 | 3264 | 0.7663 | 0.6661 | 0.6802 | 0.6661 | 0.6693 |
| 0.6421 | 2.0 | 6528 | 0.8295 | 0.6777 | 0.6743 | 0.6777 | 0.6753 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,660 |
Proggleb/roberta-base-bne-finetuned-amazon_reviews_multi | null | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
name: Accuracy
type: accuracy
value: 0.9185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3011
- Accuracy: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2427 | 1.0 | 125 | 0.2109 | 0.919 |
| 0.0986 | 2.0 | 250 | 0.3011 | 0.9185 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 1,750 |
Qinghui/autonlp-fake-covid-news-36769078 | [
"0",
"1"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Qinghui/autonlp-data-fake-covid-news
co2_eq_emissions: 23.42719853096565
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36769078
- CO2 Emissions (in grams): 23.42719853096565
## Validation Metrics
- Loss: 0.15959647297859192
- Accuracy: 0.9817757009345794
- Precision: 0.980411361410382
- Recall: 0.9813725490196078
- AUC: 0.9982379201680672
- F1: 0.9808917197452229
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Qinghui/autonlp-fake-covid-news-36769078
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# `use_auth_token` is only needed while the model repo is private
model = AutoModelForSequenceClassification.from_pretrained("Qinghui/autonlp-fake-covid-news-36769078", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Qinghui/autonlp-fake-covid-news-36769078", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
probs = outputs.logits.softmax(dim=-1)  # class probabilities for labels "0" and "1"
```
| 1,174 |
ReynaQuita/twitter_disaster_bart | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Ritvik/nlp_model_mini | null | Entry not found | 15 |
Ruizhou/bert-base-uncased-finetuned-mrpc | null | Entry not found | 15 |
SetFit/deberta-v3-large__sst2__train-16-1 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Accuracy: 0.5497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7086 | 1.0 | 7 | 0.7176 | 0.2857 |
| 0.6897 | 2.0 | 14 | 0.7057 | 0.2857 |
| 0.6491 | 3.0 | 21 | 0.6582 | 0.8571 |
| 0.567 | 4.0 | 28 | 0.4480 | 0.8571 |
| 0.4304 | 5.0 | 35 | 0.5465 | 0.7143 |
| 0.0684 | 6.0 | 42 | 0.5408 | 0.8571 |
| 0.0339 | 7.0 | 49 | 0.6501 | 0.8571 |
| 0.0082 | 8.0 | 56 | 0.9152 | 0.8571 |
| 0.0067 | 9.0 | 63 | 2.5162 | 0.5714 |
| 0.0045 | 10.0 | 70 | 1.1136 | 0.8571 |
| 0.0012 | 11.0 | 77 | 1.1668 | 0.8571 |
| 0.0007 | 12.0 | 84 | 1.2071 | 0.8571 |
| 0.0005 | 13.0 | 91 | 1.2310 | 0.8571 |
| 0.0006 | 14.0 | 98 | 1.2476 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,216 |
SetFit/deberta-v3-large__sst2__train-16-3 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-3
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6286
- Accuracy: 0.7068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6955 | 1.0 | 7 | 0.7370 | 0.2857 |
| 0.6919 | 2.0 | 14 | 0.6855 | 0.4286 |
| 0.6347 | 3.0 | 21 | 0.5872 | 0.7143 |
| 0.4016 | 4.0 | 28 | 0.6644 | 0.7143 |
| 0.3097 | 5.0 | 35 | 0.5120 | 0.7143 |
| 0.0785 | 6.0 | 42 | 0.5845 | 0.7143 |
| 0.024 | 7.0 | 49 | 0.6951 | 0.7143 |
| 0.0132 | 8.0 | 56 | 0.8972 | 0.7143 |
| 0.0037 | 9.0 | 63 | 1.5798 | 0.7143 |
| 0.0034 | 10.0 | 70 | 1.5178 | 0.7143 |
| 0.003 | 11.0 | 77 | 1.3511 | 0.7143 |
| 0.0012 | 12.0 | 84 | 1.1346 | 0.7143 |
| 0.0007 | 13.0 | 91 | 0.9752 | 0.7143 |
| 0.0008 | 14.0 | 98 | 0.8531 | 0.7143 |
| 0.0007 | 15.0 | 105 | 0.8149 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,278 |
SetFit/deberta-v3-large__sst2__train-16-6 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-6
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6846
- Accuracy: 0.5058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6673 | 1.0 | 7 | 0.7580 | 0.2857 |
| 0.5896 | 2.0 | 14 | 0.7885 | 0.5714 |
| 0.5294 | 3.0 | 21 | 1.0040 | 0.4286 |
| 0.3163 | 4.0 | 28 | 1.1761 | 0.5714 |
| 0.1315 | 5.0 | 35 | 1.4315 | 0.4286 |
| 0.0312 | 6.0 | 42 | 2.6115 | 0.2857 |
| 0.1774 | 7.0 | 49 | 2.1631 | 0.5714 |
| 0.0052 | 8.0 | 56 | 2.3838 | 0.4286 |
| 0.0043 | 9.0 | 63 | 2.6553 | 0.4286 |
| 0.0032 | 10.0 | 70 | 2.2774 | 0.4286 |
| 0.0015 | 11.0 | 77 | 1.9467 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,030 |
SetFit/deberta-v3-large__sst2__train-32-0 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-32-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-32-0
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4849
- Accuracy: 0.7716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7059 | 1.0 | 13 | 0.6840 | 0.5385 |
| 0.6595 | 2.0 | 26 | 0.6214 | 0.6923 |
| 0.4153 | 3.0 | 39 | 0.1981 | 0.9231 |
| 0.0733 | 4.0 | 52 | 0.5068 | 0.9231 |
| 0.2092 | 5.0 | 65 | 1.3114 | 0.6923 |
| 0.003 | 6.0 | 78 | 1.1062 | 0.8462 |
| 0.0012 | 7.0 | 91 | 1.5948 | 0.7692 |
| 0.0008 | 8.0 | 104 | 1.6913 | 0.7692 |
| 0.0006 | 9.0 | 117 | 1.7191 | 0.7692 |
| 0.0005 | 10.0 | 130 | 1.6527 | 0.7692 |
| 0.0003 | 11.0 | 143 | 1.4840 | 0.7692 |
| 0.0002 | 12.0 | 156 | 1.3076 | 0.8462 |
| 0.0002 | 13.0 | 169 | 1.3130 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,154 |
SetFit/deberta-v3-large__sst2__train-8-0 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-0
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7088
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6705 | 1.0 | 3 | 0.7961 | 0.25 |
| 0.6571 | 2.0 | 6 | 0.8092 | 0.25 |
| 0.7043 | 3.0 | 9 | 0.7977 | 0.25 |
| 0.6207 | 4.0 | 12 | 0.8478 | 0.25 |
| 0.5181 | 5.0 | 15 | 0.9782 | 0.25 |
| 0.4136 | 6.0 | 18 | 1.3151 | 0.25 |
| 0.3702 | 7.0 | 21 | 1.8633 | 0.25 |
| 0.338 | 8.0 | 24 | 2.2119 | 0.25 |
| 0.2812 | 9.0 | 27 | 2.3058 | 0.25 |
| 0.2563 | 10.0 | 30 | 2.3353 | 0.25 |
| 0.2132 | 11.0 | 33 | 2.5921 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,028 |
SetFit/deberta-v3-large__sst2__train-8-3 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-3
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6421
- Accuracy: 0.6310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6696 | 1.0 | 3 | 0.7917 | 0.25 |
| 0.6436 | 2.0 | 6 | 0.8107 | 0.25 |
| 0.6923 | 3.0 | 9 | 0.8302 | 0.25 |
| 0.5051 | 4.0 | 12 | 0.9828 | 0.25 |
| 0.3688 | 5.0 | 15 | 0.7402 | 0.25 |
| 0.2671 | 6.0 | 18 | 0.5820 | 0.75 |
| 0.1935 | 7.0 | 21 | 0.8356 | 0.5 |
| 0.0815 | 8.0 | 24 | 1.0431 | 0.25 |
| 0.0591 | 9.0 | 27 | 0.9679 | 0.75 |
| 0.0276 | 10.0 | 30 | 1.0659 | 0.75 |
| 0.0175 | 11.0 | 33 | 0.9689 | 0.75 |
| 0.0152 | 12.0 | 36 | 0.8820 | 0.75 |
| 0.006 | 13.0 | 39 | 0.8337 | 0.75 |
| 0.0041 | 14.0 | 42 | 0.7650 | 0.75 |
| 0.0036 | 15.0 | 45 | 0.6960 | 0.75 |
| 0.0034 | 16.0 | 48 | 0.6548 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,338 |
SetFit/deberta-v3-large__sst2__train-8-4 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-4
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3023
- Accuracy: 0.7057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6816 | 1.0 | 3 | 0.8072 | 0.25 |
| 0.6672 | 2.0 | 6 | 0.8740 | 0.25 |
| 0.6667 | 3.0 | 9 | 0.8578 | 0.25 |
| 0.5346 | 4.0 | 12 | 1.0353 | 0.25 |
| 0.4517 | 5.0 | 15 | 1.1030 | 0.25 |
| 0.3095 | 6.0 | 18 | 0.9986 | 0.25 |
| 0.2464 | 7.0 | 21 | 0.9286 | 0.5 |
| 0.1342 | 8.0 | 24 | 0.4063 | 1.0 |
| 0.0851 | 9.0 | 27 | 0.2210 | 1.0 |
| 0.0491 | 10.0 | 30 | 0.2302 | 1.0 |
| 0.0211 | 11.0 | 33 | 0.4020 | 0.75 |
| 0.017 | 12.0 | 36 | 0.2382 | 1.0 |
| 0.0084 | 13.0 | 39 | 0.0852 | 1.0 |
| 0.0051 | 14.0 | 42 | 0.0354 | 1.0 |
| 0.0047 | 15.0 | 45 | 0.0208 | 1.0 |
| 0.0029 | 16.0 | 48 | 0.0155 | 1.0 |
| 0.0022 | 17.0 | 51 | 0.0139 | 1.0 |
| 0.0019 | 18.0 | 54 | 0.0144 | 1.0 |
| 0.0016 | 19.0 | 57 | 0.0168 | 1.0 |
| 0.0013 | 20.0 | 60 | 0.0231 | 1.0 |
| 0.0011 | 21.0 | 63 | 0.0369 | 1.0 |
| 0.0009 | 22.0 | 66 | 0.0528 | 1.0 |
| 0.001 | 23.0 | 69 | 0.0639 | 1.0 |
| 0.0009 | 24.0 | 72 | 0.0670 | 1.0 |
| 0.0009 | 25.0 | 75 | 0.0526 | 1.0 |
| 0.0008 | 26.0 | 78 | 0.0425 | 1.0 |
| 0.0011 | 27.0 | 81 | 0.0135 | 1.0 |
| 0.0007 | 28.0 | 84 | 0.0076 | 1.0 |
| 0.0007 | 29.0 | 87 | 0.0057 | 1.0 |
| 0.0007 | 30.0 | 90 | 0.0049 | 1.0 |
| 0.0008 | 31.0 | 93 | 0.0045 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0044 | 1.0 |
| 0.0008 | 33.0 | 99 | 0.0043 | 1.0 |
| 0.0005 | 34.0 | 102 | 0.0044 | 1.0 |
| 0.0006 | 35.0 | 105 | 0.0045 | 1.0 |
| 0.0006 | 36.0 | 108 | 0.0046 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0048 | 1.0 |
| 0.0006 | 38.0 | 114 | 0.0049 | 1.0 |
| 0.0005 | 39.0 | 117 | 0.0050 | 1.0 |
| 0.0005 | 40.0 | 120 | 0.0050 | 1.0 |
| 0.0004 | 41.0 | 123 | 0.0051 | 1.0 |
| 0.0005 | 42.0 | 126 | 0.0051 | 1.0 |
| 0.0004 | 43.0 | 129 | 0.0051 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,012 |
SetFit/deberta-v3-large__sst2__train-8-6 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-6
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4331
- Accuracy: 0.7106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6486 | 1.0 | 3 | 0.7901 | 0.25 |
| 0.6418 | 2.0 | 6 | 0.9259 | 0.25 |
| 0.6169 | 3.0 | 9 | 1.0574 | 0.25 |
| 0.5639 | 4.0 | 12 | 1.1372 | 0.25 |
| 0.4562 | 5.0 | 15 | 0.6090 | 0.5 |
| 0.3105 | 6.0 | 18 | 0.4435 | 1.0 |
| 0.2303 | 7.0 | 21 | 0.2804 | 1.0 |
| 0.1388 | 8.0 | 24 | 0.2205 | 1.0 |
| 0.0918 | 9.0 | 27 | 0.1282 | 1.0 |
| 0.0447 | 10.0 | 30 | 0.0643 | 1.0 |
| 0.0297 | 11.0 | 33 | 0.0361 | 1.0 |
| 0.0159 | 12.0 | 36 | 0.0211 | 1.0 |
| 0.0102 | 13.0 | 39 | 0.0155 | 1.0 |
| 0.0061 | 14.0 | 42 | 0.0158 | 1.0 |
| 0.0049 | 15.0 | 45 | 0.0189 | 1.0 |
| 0.0035 | 16.0 | 48 | 0.0254 | 1.0 |
| 0.0027 | 17.0 | 51 | 0.0305 | 1.0 |
| 0.0021 | 18.0 | 54 | 0.0287 | 1.0 |
| 0.0016 | 19.0 | 57 | 0.0215 | 1.0 |
| 0.0016 | 20.0 | 60 | 0.0163 | 1.0 |
| 0.0014 | 21.0 | 63 | 0.0138 | 1.0 |
| 0.0015 | 22.0 | 66 | 0.0131 | 1.0 |
| 0.001 | 23.0 | 69 | 0.0132 | 1.0 |
| 0.0014 | 24.0 | 72 | 0.0126 | 1.0 |
| 0.0011 | 25.0 | 75 | 0.0125 | 1.0 |
| 0.001 | 26.0 | 78 | 0.0119 | 1.0 |
| 0.0008 | 27.0 | 81 | 0.0110 | 1.0 |
| 0.0007 | 28.0 | 84 | 0.0106 | 1.0 |
| 0.0008 | 29.0 | 87 | 0.0095 | 1.0 |
| 0.0009 | 30.0 | 90 | 0.0089 | 1.0 |
| 0.0008 | 31.0 | 93 | 0.0083 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0075 | 1.0 |
| 0.0008 | 33.0 | 99 | 0.0066 | 1.0 |
| 0.0006 | 34.0 | 102 | 0.0059 | 1.0 |
| 0.0007 | 35.0 | 105 | 0.0054 | 1.0 |
| 0.0008 | 36.0 | 108 | 0.0051 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0049 | 1.0 |
| 0.0007 | 38.0 | 114 | 0.0047 | 1.0 |
| 0.0006 | 39.0 | 117 | 0.0045 | 1.0 |
| 0.0006 | 40.0 | 120 | 0.0046 | 1.0 |
| 0.0005 | 41.0 | 123 | 0.0045 | 1.0 |
| 0.0006 | 42.0 | 126 | 0.0044 | 1.0 |
| 0.0006 | 43.0 | 129 | 0.0043 | 1.0 |
| 0.0006 | 44.0 | 132 | 0.0044 | 1.0 |
| 0.0005 | 45.0 | 135 | 0.0045 | 1.0 |
| 0.0006 | 46.0 | 138 | 0.0043 | 1.0 |
| 0.0006 | 47.0 | 141 | 0.0043 | 1.0 |
| 0.0006 | 48.0 | 144 | 0.0041 | 1.0 |
| 0.0007 | 49.0 | 147 | 0.0042 | 1.0 |
| 0.0005 | 50.0 | 150 | 0.0042 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,446 |
SetFit/deberta-v3-large__sst2__train-8-9 | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-9
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6013
- Accuracy: 0.7210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6757 | 1.0 | 3 | 0.7810 | 0.25 |
| 0.6506 | 2.0 | 6 | 0.8102 | 0.25 |
| 0.6463 | 3.0 | 9 | 0.8313 | 0.25 |
| 0.5813 | 4.0 | 12 | 0.8858 | 0.25 |
| 0.4635 | 5.0 | 15 | 0.8220 | 0.25 |
| 0.3992 | 6.0 | 18 | 0.7226 | 0.5 |
| 0.3281 | 7.0 | 21 | 0.6707 | 0.75 |
| 0.2276 | 8.0 | 24 | 0.7515 | 0.75 |
| 0.1674 | 9.0 | 27 | 0.6971 | 0.75 |
| 0.0873 | 10.0 | 30 | 0.5419 | 0.75 |
| 0.0525 | 11.0 | 33 | 0.5025 | 0.75 |
| 0.0286 | 12.0 | 36 | 0.5229 | 0.75 |
| 0.0149 | 13.0 | 39 | 0.5660 | 0.75 |
| 0.0082 | 14.0 | 42 | 0.6954 | 0.75 |
| 0.006 | 15.0 | 45 | 0.8649 | 0.75 |
| 0.0043 | 16.0 | 48 | 1.0011 | 0.75 |
| 0.0035 | 17.0 | 51 | 1.0909 | 0.75 |
| 0.0021 | 18.0 | 54 | 1.1615 | 0.75 |
| 0.0017 | 19.0 | 57 | 1.2147 | 0.75 |
| 0.0013 | 20.0 | 60 | 1.2585 | 0.75 |
| 0.0016 | 21.0 | 63 | 1.2917 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,648 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-1 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0424
- Accuracy: 0.5355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0989 | 1.0 | 10 | 1.1049 | 0.1 |
| 1.0641 | 2.0 | 20 | 1.0768 | 0.3 |
| 0.9742 | 3.0 | 30 | 1.0430 | 0.4 |
| 0.8765 | 4.0 | 40 | 1.0058 | 0.4 |
| 0.6979 | 5.0 | 50 | 0.8488 | 0.7 |
| 0.563 | 6.0 | 60 | 0.7221 | 0.7 |
| 0.4135 | 7.0 | 70 | 0.6587 | 0.8 |
| 0.2509 | 8.0 | 80 | 0.5577 | 0.7 |
| 0.0943 | 9.0 | 90 | 0.5840 | 0.7 |
| 0.0541 | 10.0 | 100 | 0.6959 | 0.7 |
| 0.0362 | 11.0 | 110 | 0.6884 | 0.6 |
| 0.0254 | 12.0 | 120 | 0.9263 | 0.6 |
| 0.0184 | 13.0 | 130 | 0.7992 | 0.6 |
| 0.0172 | 14.0 | 140 | 0.7351 | 0.6 |
| 0.0131 | 15.0 | 150 | 0.7664 | 0.6 |
| 0.0117 | 16.0 | 160 | 0.8262 | 0.6 |
| 0.0101 | 17.0 | 170 | 0.8839 | 0.6 |
| 0.0089 | 18.0 | 180 | 0.9018 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
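The three class names listed for this checkpoint ("hate speech", "neither", "offensive language") should be recoverable at inference time via the `text-classification` pipeline; a minimal sketch, assuming the names are stored in the model's `id2label` config:
```python
# Minimal sketch: querying the checkpoint via the text-classification pipeline.
# The returned label string is assumed to come from the model's id2label config;
# the example output below is illustrative, not a recorded prediction.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-1",
)
print(clf("what a lovely day"))
# e.g. [{'label': 'neither', 'score': 0.87}]
```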
| 2,513 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-2 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9210
- Accuracy: 0.5635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0915 | 1.0 | 10 | 1.1051 | 0.4 |
| 1.0663 | 2.0 | 20 | 1.0794 | 0.3 |
| 1.0307 | 3.0 | 30 | 1.0664 | 0.5 |
| 0.9443 | 4.0 | 40 | 1.0729 | 0.5 |
| 0.8373 | 5.0 | 50 | 1.0175 | 0.4 |
| 0.6892 | 6.0 | 60 | 0.9624 | 0.5 |
| 0.538 | 7.0 | 70 | 0.9924 | 0.5 |
| 0.4173 | 8.0 | 80 | 1.0136 | 0.6 |
| 0.1846 | 9.0 | 90 | 1.0683 | 0.6 |
| 0.1125 | 10.0 | 100 | 1.2376 | 0.6 |
| 0.0754 | 11.0 | 110 | 1.2537 | 0.6 |
| 0.0401 | 12.0 | 120 | 1.4387 | 0.6 |
| 0.0285 | 13.0 | 130 | 1.5702 | 0.6 |
| 0.0241 | 14.0 | 140 | 1.6795 | 0.6 |
| 0.0175 | 15.0 | 150 | 1.7228 | 0.6 |
| 0.0147 | 16.0 | 160 | 1.7892 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,389 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-7 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9011
- Accuracy: 0.578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0968 | 1.0 | 10 | 1.1309 | 0.0 |
| 1.0709 | 2.0 | 20 | 1.1237 | 0.1 |
| 0.9929 | 3.0 | 30 | 1.1254 | 0.1 |
| 0.878 | 4.0 | 40 | 1.1206 | 0.5 |
| 0.7409 | 5.0 | 50 | 1.0831 | 0.1 |
| 0.5663 | 6.0 | 60 | 0.9830 | 0.6 |
| 0.4105 | 7.0 | 70 | 0.9919 | 0.5 |
| 0.2912 | 8.0 | 80 | 1.0472 | 0.6 |
| 0.1013 | 9.0 | 90 | 1.1617 | 0.4 |
| 0.0611 | 10.0 | 100 | 1.2789 | 0.6 |
| 0.039 | 11.0 | 110 | 1.4091 | 0.4 |
| 0.0272 | 12.0 | 120 | 1.4974 | 0.4 |
| 0.0189 | 13.0 | 130 | 1.4845 | 0.5 |
| 0.018 | 14.0 | 140 | 1.4924 | 0.5 |
| 0.0131 | 15.0 | 150 | 1.5206 | 0.6 |
| 0.0116 | 16.0 | 160 | 1.5858 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,388 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-9 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1121
- Accuracy: 0.16
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1038 | 1.0 | 10 | 1.1243 | 0.1 |
| 1.0859 | 2.0 | 20 | 1.1182 | 0.2 |
| 1.0234 | 3.0 | 30 | 1.1442 | 0.3 |
| 0.9493 | 4.0 | 40 | 1.2239 | 0.1 |
| 0.8114 | 5.0 | 50 | 1.2023 | 0.4 |
| 0.6464 | 6.0 | 60 | 1.2329 | 0.4 |
| 0.4731 | 7.0 | 70 | 1.2971 | 0.5 |
| 0.3355 | 8.0 | 80 | 1.3913 | 0.4 |
| 0.1268 | 9.0 | 90 | 1.4670 | 0.5 |
| 0.0747 | 10.0 | 100 | 1.7961 | 0.4 |
| 0.0449 | 11.0 | 110 | 1.8168 | 0.5 |
| 0.0307 | 12.0 | 120 | 1.9307 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,139 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-0 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7714
- Accuracy: 0.705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0871 | 1.0 | 19 | 1.0704 | 0.45 |
| 1.0019 | 2.0 | 38 | 1.0167 | 0.55 |
| 0.8412 | 3.0 | 57 | 0.9134 | 0.55 |
| 0.6047 | 4.0 | 76 | 0.8430 | 0.6 |
| 0.3746 | 5.0 | 95 | 0.8315 | 0.6 |
| 0.1885 | 6.0 | 114 | 0.8585 | 0.6 |
| 0.0772 | 7.0 | 133 | 0.9443 | 0.65 |
| 0.0312 | 8.0 | 152 | 1.1019 | 0.65 |
| 0.0161 | 9.0 | 171 | 1.1420 | 0.65 |
| 0.0102 | 10.0 | 190 | 1.2773 | 0.65 |
| 0.0077 | 11.0 | 209 | 1.2454 | 0.65 |
| 0.0064 | 12.0 | 228 | 1.2785 | 0.65 |
| 0.006 | 13.0 | 247 | 1.3834 | 0.65 |
| 0.0045 | 14.0 | 266 | 1.4139 | 0.65 |
| 0.0043 | 15.0 | 285 | 1.4056 | 0.65 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,326 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-2 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7136
- Accuracy: 0.679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1052 | 1.0 | 19 | 1.0726 | 0.45 |
| 1.0421 | 2.0 | 38 | 1.0225 | 0.5 |
| 0.9173 | 3.0 | 57 | 0.9164 | 0.6 |
| 0.6822 | 4.0 | 76 | 0.8251 | 0.7 |
| 0.4407 | 5.0 | 95 | 0.8908 | 0.5 |
| 0.2367 | 6.0 | 114 | 0.6772 | 0.75 |
| 0.1145 | 7.0 | 133 | 0.7792 | 0.65 |
| 0.0479 | 8.0 | 152 | 1.0657 | 0.6 |
| 0.0186 | 9.0 | 171 | 1.2228 | 0.65 |
| 0.0111 | 10.0 | 190 | 1.1100 | 0.6 |
| 0.0083 | 11.0 | 209 | 1.1991 | 0.65 |
| 0.0067 | 12.0 | 228 | 1.2654 | 0.65 |
| 0.0061 | 13.0 | 247 | 1.2837 | 0.65 |
| 0.0046 | 14.0 | 266 | 1.2860 | 0.6 |
| 0.0043 | 15.0 | 285 | 1.3160 | 0.65 |
| 0.0037 | 16.0 | 304 | 1.3323 | 0.65 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,388 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-4 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7384
- Accuracy: 0.724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1013 | 1.0 | 19 | 1.0733 | 0.55 |
| 1.0226 | 2.0 | 38 | 1.0064 | 0.65 |
| 0.8539 | 3.0 | 57 | 0.8758 | 0.75 |
| 0.584 | 4.0 | 76 | 0.6941 | 0.7 |
| 0.2813 | 5.0 | 95 | 0.5151 | 0.7 |
| 0.1122 | 6.0 | 114 | 0.4351 | 0.8 |
| 0.0432 | 7.0 | 133 | 0.4896 | 0.85 |
| 0.0199 | 8.0 | 152 | 0.5391 | 0.85 |
| 0.0126 | 9.0 | 171 | 0.5200 | 0.85 |
| 0.0085 | 10.0 | 190 | 0.5622 | 0.85 |
| 0.0069 | 11.0 | 209 | 0.5950 | 0.85 |
| 0.0058 | 12.0 | 228 | 0.6015 | 0.85 |
| 0.0053 | 13.0 | 247 | 0.6120 | 0.85 |
| 0.0042 | 14.0 | 266 | 0.6347 | 0.85 |
| 0.0039 | 15.0 | 285 | 0.6453 | 0.85 |
| 0.0034 | 16.0 | 304 | 0.6660 | 0.85 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,388 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-7 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8210
- Accuracy: 0.6305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0989 | 1.0 | 19 | 1.0655 | 0.4 |
| 1.0102 | 2.0 | 38 | 0.9927 | 0.6 |
| 0.8063 | 3.0 | 57 | 0.9117 | 0.5 |
| 0.5284 | 4.0 | 76 | 0.8058 | 0.55 |
| 0.2447 | 5.0 | 95 | 0.8393 | 0.45 |
| 0.098 | 6.0 | 114 | 0.8438 | 0.6 |
| 0.0388 | 7.0 | 133 | 1.1901 | 0.45 |
| 0.0188 | 8.0 | 152 | 1.4429 | 0.45 |
| 0.0121 | 9.0 | 171 | 1.3648 | 0.4 |
| 0.0082 | 10.0 | 190 | 1.4768 | 0.4 |
| 0.0066 | 11.0 | 209 | 1.4830 | 0.45 |
| 0.0057 | 12.0 | 228 | 1.4936 | 0.45 |
| 0.0053 | 13.0 | 247 | 1.5649 | 0.4 |
| 0.0041 | 14.0 | 266 | 1.6306 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,265 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-9 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7075
- Accuracy: 0.692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1054 | 1.0 | 19 | 1.0938 | 0.35 |
| 1.0338 | 2.0 | 38 | 1.0563 | 0.65 |
| 0.8622 | 3.0 | 57 | 0.9372 | 0.6 |
| 0.5919 | 4.0 | 76 | 0.8461 | 0.6 |
| 0.3357 | 5.0 | 95 | 1.0206 | 0.45 |
| 0.1621 | 6.0 | 114 | 0.9802 | 0.7 |
| 0.0637 | 7.0 | 133 | 1.2434 | 0.65 |
| 0.0261 | 8.0 | 152 | 1.3865 | 0.65 |
| 0.0156 | 9.0 | 171 | 1.4414 | 0.7 |
| 0.01 | 10.0 | 190 | 1.5502 | 0.7 |
| 0.0079 | 11.0 | 209 | 1.6102 | 0.7 |
| 0.0062 | 12.0 | 228 | 1.6525 | 0.7 |
| 0.0058 | 13.0 | 247 | 1.6884 | 0.7 |
| 0.0046 | 14.0 | 266 | 1.7479 | 0.7 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,264 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-1 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1013
- Accuracy: 0.0915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
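"Native AMP" refers to PyTorch's built-in automatic mixed precision, which the `Trainer` handles internally when `fp16` is enabled. As a sketch of what that means in a raw PyTorch step:
```python
import torch

# Sketch only: autocast runs the forward pass in mixed precision, and
# GradScaler rescales the loss so fp16 gradients do not underflow. Assumes a
# Hugging Face-style model whose batch includes labels, so outputs carry .loss.
scaler = torch.cuda.amp.GradScaler()

def training_step(model, batch, optimizer):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(**batch).loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.detach()
```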
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0866 | 1.0 | 5 | 1.1363 | 0.0 |
| 1.0439 | 2.0 | 10 | 1.1803 | 0.0 |
| 1.0227 | 3.0 | 15 | 1.2162 | 0.2 |
| 0.9111 | 4.0 | 20 | 1.2619 | 0.0 |
| 0.8243 | 5.0 | 25 | 1.2929 | 0.2 |
| 0.7488 | 6.0 | 30 | 1.3010 | 0.2 |
| 0.62 | 7.0 | 35 | 1.3011 | 0.2 |
| 0.5054 | 8.0 | 40 | 1.2931 | 0.4 |
| 0.4191 | 9.0 | 45 | 1.3274 | 0.4 |
| 0.4107 | 10.0 | 50 | 1.3259 | 0.4 |
| 0.3376 | 11.0 | 55 | 1.2800 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,077 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-2 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1019
- Accuracy: 0.139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear (see the scheduler sketch after this list)
- num_epochs: 50
- mixed_precision_training: Native AMP
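The linear scheduler decays the learning rate from its initial value to zero over the total number of training steps, with no warmup unless one is requested. An illustrative sketch (the model and step budget are stand-ins):
```python
import torch
from transformers import get_linear_schedule_with_warmup

# Illustrative only: a stand-in model and step count, with the optimizer
# settings listed above (Adam, betas=(0.9, 0.999), eps=1e-8, lr=2e-5).
model = torch.nn.Linear(10, 3)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5,
                             betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,
    num_training_steps=5 * 50,   # steps_per_epoch * num_epochs (illustrative)
)
```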
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1082 | 1.0 | 5 | 1.1432 | 0.0 |
| 1.0524 | 2.0 | 10 | 1.1613 | 0.0 |
| 1.0641 | 3.0 | 15 | 1.1547 | 0.0 |
| 0.9592 | 4.0 | 20 | 1.1680 | 0.0 |
| 0.9085 | 5.0 | 25 | 1.1762 | 0.0 |
| 0.8508 | 6.0 | 30 | 1.1809 | 0.2 |
| 0.7263 | 7.0 | 35 | 1.1912 | 0.2 |
| 0.6448 | 8.0 | 40 | 1.2100 | 0.2 |
| 0.5378 | 9.0 | 45 | 1.2037 | 0.2 |
| 0.5031 | 10.0 | 50 | 1.2096 | 0.2 |
| 0.4041 | 11.0 | 55 | 1.2203 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,076 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-3 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9681
- Accuracy: 0.549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1073 | 1.0 | 5 | 1.1393 | 0.0 |
| 1.0392 | 2.0 | 10 | 1.1729 | 0.0 |
| 1.0302 | 3.0 | 15 | 1.1694 | 0.2 |
| 0.9176 | 4.0 | 20 | 1.1846 | 0.2 |
| 0.8339 | 5.0 | 25 | 1.1663 | 0.2 |
| 0.7533 | 6.0 | 30 | 1.1513 | 0.4 |
| 0.6327 | 7.0 | 35 | 1.1474 | 0.4 |
| 0.4402 | 8.0 | 40 | 1.1385 | 0.4 |
| 0.3752 | 9.0 | 45 | 1.0965 | 0.2 |
| 0.3448 | 10.0 | 50 | 1.0357 | 0.2 |
| 0.2582 | 11.0 | 55 | 1.0438 | 0.2 |
| 0.1903 | 12.0 | 60 | 1.0561 | 0.2 |
| 0.1479 | 13.0 | 65 | 1.0569 | 0.2 |
| 0.1129 | 14.0 | 70 | 1.0455 | 0.2 |
| 0.1071 | 15.0 | 75 | 1.0416 | 0.4 |
| 0.0672 | 16.0 | 80 | 1.1164 | 0.4 |
| 0.0561 | 17.0 | 85 | 1.1846 | 0.6 |
| 0.0463 | 18.0 | 90 | 1.2040 | 0.6 |
| 0.0431 | 19.0 | 95 | 1.2078 | 0.6 |
| 0.0314 | 20.0 | 100 | 1.2368 | 0.6 |
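Training loss falls below 0.04 while validation loss bottoms out near epoch 10 and then climbs — the overfitting pattern expected with the tiny `train-8` split. Not part of the original run, but one standard mitigation is the `Trainer`'s early-stopping callback, sketched here:
```python
from transformers import EarlyStoppingCallback

# Sketch (not used in this run): passed via Trainer(callbacks=[...]) together
# with load_best_model_at_end=True and metric_for_best_model="eval_loss",
# this stops training once eval loss fails to improve for three evaluations.
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
```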
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,634 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-9 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0959
- Accuracy: 0.093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42 (see the reproducibility sketch after this list)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
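The fixed seed is what makes these few-shot runs repeatable; in `transformers` it can be set in one call, as sketched below:
```python
from transformers import set_seed

# Seeds Python's random module, NumPy, and PyTorch (including CUDA) at once,
# matching the "seed: 42" hyperparameter above.
set_seed(42)
```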
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1068 | 1.0 | 5 | 1.1545 | 0.0 |
| 1.0494 | 2.0 | 10 | 1.1971 | 0.0 |
| 1.0612 | 3.0 | 15 | 1.2164 | 0.0 |
| 0.9517 | 4.0 | 20 | 1.2545 | 0.0 |
| 0.8874 | 5.0 | 25 | 1.2699 | 0.0 |
| 0.8598 | 6.0 | 30 | 1.2835 | 0.0 |
| 0.7006 | 7.0 | 35 | 1.3139 | 0.0 |
| 0.5969 | 8.0 | 40 | 1.3116 | 0.2 |
| 0.4769 | 9.0 | 45 | 1.3124 | 0.4 |
| 0.4352 | 10.0 | 50 | 1.3541 | 0.4 |
| 0.3231 | 11.0 | 55 | 1.3919 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,076 |
SetFit/distilbert-base-uncased__sst2__train-32-7 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6736
- Accuracy: 0.5931
## Model description
More information needed
## Intended uses & limitations
More information needed
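Usage is likewise undocumented here; as an illustrative sketch — assuming the checkpoint loads under the repository id above and that output indices 0/1 follow the `negative`/`positive` label order listed for this row — a manual forward pass would be:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch: manual forward pass rather than the pipeline API. The repository id
# and the 0 -> negative / 1 -> positive ordering are assumptions taken from
# this dataset row, not stated in the card itself.
name = "SetFit/distilbert-base-uncased__sst2__train-32-7"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("a gripping, beautifully shot film", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
print({"negative": probs[0].item(), "positive": probs[1].item()})
```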
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7094 | 1.0 | 13 | 0.6887 | 0.5385 |
| 0.651 | 2.0 | 26 | 0.6682 | 0.6923 |
| 0.6084 | 3.0 | 39 | 0.6412 | 0.6923 |
| 0.4547 | 4.0 | 52 | 0.6095 | 0.6923 |
| 0.2903 | 5.0 | 65 | 0.6621 | 0.6923 |
| 0.1407 | 6.0 | 78 | 0.7130 | 0.7692 |
| 0.0444 | 7.0 | 91 | 0.9007 | 0.6923 |
| 0.0176 | 8.0 | 104 | 0.9525 | 0.7692 |
| 0.0098 | 9.0 | 117 | 1.0289 | 0.7692 |
| 0.0071 | 10.0 | 130 | 1.0876 | 0.7692 |
| 0.0052 | 11.0 | 143 | 1.1431 | 0.6923 |
| 0.0038 | 12.0 | 156 | 1.1687 | 0.7692 |
| 0.0034 | 13.0 | 169 | 1.1792 | 0.7692 |
| 0.0031 | 14.0 | 182 | 1.2033 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,231 |
SetFit/distilbert-base-uncased__sst2__train-8-3 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6914
- Accuracy: 0.5195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6931 | 1.0 | 3 | 0.7039 | 0.25 |
| 0.6615 | 2.0 | 6 | 0.7186 | 0.25 |
| 0.653 | 3.0 | 9 | 0.7334 | 0.25 |
| 0.601 | 4.0 | 12 | 0.7592 | 0.25 |
| 0.5555 | 5.0 | 15 | 0.7922 | 0.25 |
| 0.4832 | 6.0 | 18 | 0.8179 | 0.25 |
| 0.4565 | 7.0 | 21 | 0.8285 | 0.25 |
| 0.3996 | 8.0 | 24 | 0.8559 | 0.25 |
| 0.3681 | 9.0 | 27 | 0.8586 | 0.5 |
| 0.2901 | 10.0 | 30 | 0.8646 | 0.5 |
| 0.241 | 11.0 | 33 | 0.8524 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,043 |
SetFit/distilbert-base-uncased__sst2__train-8-5 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8419
- Accuracy: 0.6172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 3 | 0.6848 | 0.75 |
| 0.6681 | 2.0 | 6 | 0.6875 | 0.5 |
| 0.6591 | 3.0 | 9 | 0.6868 | 0.25 |
| 0.6052 | 4.0 | 12 | 0.6943 | 0.25 |
| 0.557 | 5.0 | 15 | 0.7078 | 0.25 |
| 0.4954 | 6.0 | 18 | 0.7168 | 0.25 |
| 0.4593 | 7.0 | 21 | 0.7185 | 0.25 |
| 0.3936 | 8.0 | 24 | 0.7212 | 0.25 |
| 0.3699 | 9.0 | 27 | 0.6971 | 0.5 |
| 0.2916 | 10.0 | 30 | 0.6827 | 0.5 |
| 0.2511 | 11.0 | 33 | 0.6464 | 0.5 |
| 0.2109 | 12.0 | 36 | 0.6344 | 0.75 |
| 0.1655 | 13.0 | 39 | 0.6377 | 0.75 |
| 0.1412 | 14.0 | 42 | 0.6398 | 0.75 |
| 0.1157 | 15.0 | 45 | 0.6315 | 0.75 |
| 0.0895 | 16.0 | 48 | 0.6210 | 0.75 |
| 0.0783 | 17.0 | 51 | 0.5918 | 0.75 |
| 0.0606 | 18.0 | 54 | 0.5543 | 0.75 |
| 0.0486 | 19.0 | 57 | 0.5167 | 0.75 |
| 0.0405 | 20.0 | 60 | 0.4862 | 0.75 |
| 0.0376 | 21.0 | 63 | 0.4644 | 0.75 |
| 0.0294 | 22.0 | 66 | 0.4497 | 0.75 |
| 0.0261 | 23.0 | 69 | 0.4428 | 0.75 |
| 0.0238 | 24.0 | 72 | 0.4408 | 0.75 |
| 0.0217 | 25.0 | 75 | 0.4392 | 0.75 |
| 0.0187 | 26.0 | 78 | 0.4373 | 0.75 |
| 0.0177 | 27.0 | 81 | 0.4360 | 0.75 |
| 0.0136 | 28.0 | 84 | 0.4372 | 0.75 |
| 0.0144 | 29.0 | 87 | 0.4368 | 0.75 |
| 0.014 | 30.0 | 90 | 0.4380 | 0.75 |
| 0.0137 | 31.0 | 93 | 0.4383 | 0.75 |
| 0.0133 | 32.0 | 96 | 0.4409 | 0.75 |
| 0.013 | 33.0 | 99 | 0.4380 | 0.75 |
| 0.0096 | 34.0 | 102 | 0.4358 | 0.75 |
| 0.012 | 35.0 | 105 | 0.4339 | 0.75 |
| 0.0122 | 36.0 | 108 | 0.4305 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.4267 | 0.75 |
| 0.0121 | 38.0 | 114 | 0.4231 | 0.75 |
| 0.0093 | 39.0 | 117 | 0.4209 | 0.75 |
| 0.0099 | 40.0 | 120 | 0.4199 | 0.75 |
| 0.0091 | 41.0 | 123 | 0.4184 | 0.75 |
| 0.0116 | 42.0 | 126 | 0.4173 | 0.75 |
| 0.01 | 43.0 | 129 | 0.4163 | 0.75 |
| 0.0098 | 44.0 | 132 | 0.4153 | 0.75 |
| 0.0101 | 45.0 | 135 | 0.4155 | 0.75 |
| 0.0088 | 46.0 | 138 | 0.4149 | 0.75 |
| 0.0087 | 47.0 | 141 | 0.4150 | 0.75 |
| 0.0093 | 48.0 | 144 | 0.4147 | 0.75 |
| 0.0081 | 49.0 | 147 | 0.4147 | 0.75 |
| 0.009 | 50.0 | 150 | 0.4150 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,461 |
SetFit/distilbert-base-uncased__subj__all-train | [
"objective",
"subjective"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3193
- Accuracy: 0.9485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1992 | 1.0 | 500 | 0.1236 | 0.963 |
| 0.084 | 2.0 | 1000 | 0.1428 | 0.963 |
| 0.0333 | 3.0 | 1500 | 0.1906 | 0.965 |
| 0.0159 | 4.0 | 2000 | 0.3193 | 0.9485 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,613 |
SetFit/distilbert-base-uncased__subj__train-8-4 | [
"objective",
"subjective"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3305
- Accuracy: 0.8565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6991 | 1.0 | 3 | 0.6772 | 0.75 |
| 0.6707 | 2.0 | 6 | 0.6704 | 0.75 |
| 0.6402 | 3.0 | 9 | 0.6608 | 1.0 |
| 0.5789 | 4.0 | 12 | 0.6547 | 0.75 |
| 0.5211 | 5.0 | 15 | 0.6434 | 0.75 |
| 0.454 | 6.0 | 18 | 0.6102 | 1.0 |
| 0.4187 | 7.0 | 21 | 0.5701 | 1.0 |
| 0.3401 | 8.0 | 24 | 0.5289 | 1.0 |
| 0.3107 | 9.0 | 27 | 0.4737 | 1.0 |
| 0.2381 | 10.0 | 30 | 0.4255 | 1.0 |
| 0.1982 | 11.0 | 33 | 0.3685 | 1.0 |
| 0.1631 | 12.0 | 36 | 0.3200 | 1.0 |
| 0.1234 | 13.0 | 39 | 0.2798 | 1.0 |
| 0.0993 | 14.0 | 42 | 0.2455 | 1.0 |
| 0.0781 | 15.0 | 45 | 0.2135 | 1.0 |
| 0.0586 | 16.0 | 48 | 0.1891 | 1.0 |
| 0.0513 | 17.0 | 51 | 0.1671 | 1.0 |
| 0.043 | 18.0 | 54 | 0.1427 | 1.0 |
| 0.0307 | 19.0 | 57 | 0.1225 | 1.0 |
| 0.0273 | 20.0 | 60 | 0.1060 | 1.0 |
| 0.0266 | 21.0 | 63 | 0.0920 | 1.0 |
| 0.0233 | 22.0 | 66 | 0.0823 | 1.0 |
| 0.0185 | 23.0 | 69 | 0.0751 | 1.0 |
| 0.0173 | 24.0 | 72 | 0.0698 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.0651 | 1.0 |
| 0.0142 | 26.0 | 78 | 0.0613 | 1.0 |
| 0.0151 | 27.0 | 81 | 0.0583 | 1.0 |
| 0.0117 | 28.0 | 84 | 0.0563 | 1.0 |
| 0.0123 | 29.0 | 87 | 0.0546 | 1.0 |
| 0.0121 | 30.0 | 90 | 0.0531 | 1.0 |
| 0.0123 | 31.0 | 93 | 0.0511 | 1.0 |
| 0.0112 | 32.0 | 96 | 0.0496 | 1.0 |
| 0.0103 | 33.0 | 99 | 0.0481 | 1.0 |
| 0.0086 | 34.0 | 102 | 0.0468 | 1.0 |
| 0.0096 | 35.0 | 105 | 0.0457 | 1.0 |
| 0.0107 | 36.0 | 108 | 0.0447 | 1.0 |
| 0.0095 | 37.0 | 111 | 0.0439 | 1.0 |
| 0.0102 | 38.0 | 114 | 0.0429 | 1.0 |
| 0.0077 | 39.0 | 117 | 0.0422 | 1.0 |
| 0.0092 | 40.0 | 120 | 0.0415 | 1.0 |
| 0.0083 | 41.0 | 123 | 0.0409 | 1.0 |
| 0.0094 | 42.0 | 126 | 0.0404 | 1.0 |
| 0.0084 | 43.0 | 129 | 0.0400 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.0396 | 1.0 |
| 0.0092 | 45.0 | 135 | 0.0392 | 1.0 |
| 0.0076 | 46.0 | 138 | 0.0389 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.0388 | 1.0 |
| 0.0085 | 48.0 | 144 | 0.0387 | 1.0 |
| 0.0071 | 49.0 | 147 | 0.0386 | 1.0 |
| 0.0079 | 50.0 | 150 | 0.0386 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 4,461 |
SetFit/distilbert-base-uncased__tweet_eval_stance__all-train | [
"against",
"favor",
"none"
] | Entry not found | 15 |
TehranNLP-org/albert-base-v2-avg-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
TehranNLP-org/bert-base-cased-avg-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
TehranNLP-org/bert-base-uncased-avg-mnli-2e-5-21 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |