modelId | label | readme | readme_len |
|---|---|---|---|
MickyMike/00-GPT2SP-titanium-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/000-GPT2SP-appceleratorstudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/000-GPT2SP-appceleratorstudio-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/000-GPT2SP-clover-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/000-GPT2SP-talenddataquality-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/000-GPT2SP-talenddataquality-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-bamboo | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-clover | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-jirasoftware | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-talenddataquality | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-talendesb | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/1-GPT2SP-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/11-GPT2SP-appceleratorstudio-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/11-GPT2SP-aptanastudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/11-GPT2SP-mule-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/11-GPT2SP-mulestudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/11-GPT2SP-titanium-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/111-GPT2SP-appceleratorstudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/111-GPT2SP-mule-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/111-GPT2SP-talenddataquality-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/111-GPT2SP-talenddataquality-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/111-GPT2SP-talendesb-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-datamanagement | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-duracloud | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-moodle | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-springxd | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/2-GPT2SP-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/22-GPT2SP-appceleratorstudio-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/22-GPT2SP-appceleratorstudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/22-GPT2SP-aptanastudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/22-GPT2SP-mule-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/22-GPT2SP-mulestudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/22-GPT2SP-titanium-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/222-GPT2SP-appceleratorstudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/222-GPT2SP-mule-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/222-GPT2SP-talenddataquality-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-duracloud | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-moodle | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/6-GPT2SP-talenddataquality | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/66-GPT2SP-appceleratorstudio-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/66-GPT2SP-aptanastudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/66-GPT2SP-mule-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/66-GPT2SP-titanium-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/66-GPT2SP-usergrid-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/666-GPT2SP-appceleratorstudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/666-GPT2SP-clover-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/666-GPT2SP-mule-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/666-GPT2SP-mulestudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/666-GPT2SP-talenddataquality-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-bamboo | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-moodle | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-mulestudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-talendesb | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/7-GPT2SP-usergrid | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/77-GPT2SP-appceleratorstudio-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/77-GPT2SP-appceleratorstudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/77-GPT2SP-aptanastudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/77-GPT2SP-mulestudio-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/77-GPT2SP-titanium-appceleratorstudio | [
"LABEL_0"
] | Entry not found | 15 |
MoaazZaki/machathonmodel | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | Entry not found | 15 |
Motahar/distilbert-sst2-mahtab | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-sst2-mahtab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-sst2-mahtab
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4982
- eval_accuracy: 0.8830
- eval_runtime: 2.3447
- eval_samples_per_second: 371.91
- eval_steps_per_second: 46.489
- epoch: 1.0
- step: 8419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,309 |
NDugar/finetuned-bert-mrpc | null | Entry not found | 15 |
Omar95farag/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | Entry not found | 15 |
Parsa/BBB_prediction_classification_IUPAC | null | A fine-tuned model based on 'gumgo91/IUPAC_BERT' for blood-brain barrier permeability prediction from IUPAC strings. BiLSTM models are also available alongside these two models; all of them, together with the code, can be found at 'https://github.com/mephisto121/BBBNLP'.
[](https://colab.research.google.com/drive/1jGYf3sq93yO4EbgVaEl3nlClrVatVaXS#scrollTo=AMEdQItmilAw) | 455 |
Sebb/german-nli-large-thesis | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
SetFit/MiniLM-L12-H384-uncased__sst2__all-train | [
"negative",
"positive"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MiniLM-L12-H384-uncased__sst2__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased__sst2__all-train
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2632
- Accuracy: 0.9055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4183 | 1.0 | 433 | 0.3456 | 0.8720 |
| 0.2714 | 2.0 | 866 | 0.2632 | 0.9055 |
| 0.2016 | 3.0 | 1299 | 0.3357 | 0.8990 |
| 0.1501 | 4.0 | 1732 | 0.4474 | 0.8863 |
| 0.1119 | 5.0 | 2165 | 0.3998 | 0.8979 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,688 |
SetFit/distilbert-base-uncased__hate_speech_offensive__all-train | [
"hate speech",
"neither",
"offensive language"
] | Entry not found | 15 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-3 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0675
- Accuracy: 0.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0951 | 1.0 | 10 | 1.1346 | 0.1 |
| 1.0424 | 2.0 | 20 | 1.1120 | 0.2 |
| 0.957 | 3.0 | 30 | 1.1002 | 0.3 |
| 0.7889 | 4.0 | 40 | 1.0838 | 0.4 |
| 0.6162 | 5.0 | 50 | 1.0935 | 0.5 |
| 0.4849 | 6.0 | 60 | 1.0867 | 0.5 |
| 0.3089 | 7.0 | 70 | 1.1145 | 0.5 |
| 0.2145 | 8.0 | 80 | 1.1278 | 0.6 |
| 0.0805 | 9.0 | 90 | 1.2801 | 0.6 |
| 0.0497 | 10.0 | 100 | 1.3296 | 0.6 |
| 0.0328 | 11.0 | 110 | 1.2913 | 0.6 |
| 0.0229 | 12.0 | 120 | 1.3692 | 0.6 |
| 0.0186 | 13.0 | 130 | 1.4642 | 0.6 |
| 0.0161 | 14.0 | 140 | 1.5568 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,263 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-4 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0903
- Accuracy: 0.4805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0974 | 1.0 | 10 | 1.1139 | 0.1 |
| 1.0637 | 2.0 | 20 | 1.0988 | 0.1 |
| 0.9758 | 3.0 | 30 | 1.1013 | 0.1 |
| 0.9012 | 4.0 | 40 | 1.0769 | 0.3 |
| 0.6993 | 5.0 | 50 | 1.0484 | 0.6 |
| 0.5676 | 6.0 | 60 | 1.0223 | 0.6 |
| 0.4069 | 7.0 | 70 | 0.9190 | 0.6 |
| 0.3192 | 8.0 | 80 | 1.1370 | 0.6 |
| 0.1112 | 9.0 | 90 | 1.1728 | 0.6 |
| 0.07 | 10.0 | 100 | 1.1998 | 0.6 |
| 0.0397 | 11.0 | 110 | 1.3700 | 0.6 |
| 0.027 | 12.0 | 120 | 1.3329 | 0.6 |
| 0.021 | 13.0 | 130 | 1.2697 | 0.6 |
| 0.0177 | 14.0 | 140 | 1.4195 | 0.6 |
| 0.0142 | 15.0 | 150 | 1.5342 | 0.6 |
| 0.0118 | 16.0 | 160 | 1.5999 | 0.6 |
| 0.0108 | 17.0 | 170 | 1.6327 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,451 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-5 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9907
- Accuracy: 0.49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 10 | 1.1287 | 0.2 |
| 1.0481 | 2.0 | 20 | 1.1136 | 0.2 |
| 0.9498 | 3.0 | 30 | 1.1200 | 0.2 |
| 0.8157 | 4.0 | 40 | 1.0771 | 0.2 |
| 0.65 | 5.0 | 50 | 0.9733 | 0.4 |
| 0.5021 | 6.0 | 60 | 1.0626 | 0.4 |
| 0.3358 | 7.0 | 70 | 1.0787 | 0.4 |
| 0.2017 | 8.0 | 80 | 1.3183 | 0.4 |
| 0.088 | 9.0 | 90 | 1.2204 | 0.5 |
| 0.0527 | 10.0 | 100 | 1.6892 | 0.4 |
| 0.0337 | 11.0 | 110 | 1.6967 | 0.5 |
| 0.0238 | 12.0 | 120 | 1.5436 | 0.5 |
| 0.0183 | 13.0 | 130 | 1.7447 | 0.4 |
| 0.0159 | 14.0 | 140 | 1.8999 | 0.4 |
| 0.014 | 15.0 | 150 | 1.9004 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,325 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-6 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-16-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8331
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0881 | 1.0 | 10 | 1.1248 | 0.1 |
| 1.0586 | 2.0 | 20 | 1.1162 | 0.2 |
| 0.9834 | 3.0 | 30 | 1.1199 | 0.3 |
| 0.9271 | 4.0 | 40 | 1.0740 | 0.3 |
| 0.7663 | 5.0 | 50 | 1.0183 | 0.5 |
| 0.6042 | 6.0 | 60 | 1.0259 | 0.5 |
| 0.4482 | 7.0 | 70 | 0.8699 | 0.7 |
| 0.3072 | 8.0 | 80 | 1.0615 | 0.5 |
| 0.1458 | 9.0 | 90 | 1.0164 | 0.5 |
| 0.0838 | 10.0 | 100 | 1.0620 | 0.5 |
| 0.055 | 11.0 | 110 | 1.1829 | 0.5 |
| 0.0347 | 12.0 | 120 | 1.2815 | 0.4 |
| 0.0244 | 13.0 | 130 | 1.2607 | 0.6 |
| 0.0213 | 14.0 | 140 | 1.3695 | 0.5 |
| 0.0169 | 15.0 | 150 | 1.4397 | 0.5 |
| 0.0141 | 16.0 | 160 | 1.4388 | 0.6 |
| 0.0122 | 17.0 | 170 | 1.4242 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,450 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-6 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0523
- Accuracy: 0.663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0957 | 1.0 | 19 | 1.0696 | 0.6 |
| 1.0107 | 2.0 | 38 | 1.0047 | 0.55 |
| 0.8257 | 3.0 | 57 | 0.8358 | 0.8 |
| 0.6006 | 4.0 | 76 | 0.7641 | 0.6 |
| 0.4172 | 5.0 | 95 | 0.5931 | 0.8 |
| 0.2639 | 6.0 | 114 | 0.5570 | 0.7 |
| 0.1314 | 7.0 | 133 | 0.5017 | 0.65 |
| 0.0503 | 8.0 | 152 | 0.3115 | 0.75 |
| 0.023 | 9.0 | 171 | 0.4353 | 0.85 |
| 0.0128 | 10.0 | 190 | 0.5461 | 0.75 |
| 0.0092 | 11.0 | 209 | 0.5045 | 0.8 |
| 0.007 | 12.0 | 228 | 0.5014 | 0.8 |
| 0.0064 | 13.0 | 247 | 0.5070 | 0.8 |
| 0.0049 | 14.0 | 266 | 0.4681 | 0.8 |
| 0.0044 | 15.0 | 285 | 0.4701 | 0.8 |
| 0.0039 | 16.0 | 304 | 0.4862 | 0.8 |
| 0.0036 | 17.0 | 323 | 0.4742 | 0.8 |
| 0.0035 | 18.0 | 342 | 0.4652 | 0.8 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,512 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-8 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9191
- Accuracy: 0.632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1008 | 1.0 | 19 | 1.0877 | 0.4 |
| 1.0354 | 2.0 | 38 | 1.0593 | 0.35 |
| 0.8765 | 3.0 | 57 | 0.9722 | 0.5 |
| 0.6365 | 4.0 | 76 | 0.9271 | 0.55 |
| 0.3944 | 5.0 | 95 | 0.7852 | 0.5 |
| 0.2219 | 6.0 | 114 | 0.9360 | 0.55 |
| 0.126 | 7.0 | 133 | 1.0610 | 0.55 |
| 0.0389 | 8.0 | 152 | 1.0884 | 0.6 |
| 0.0191 | 9.0 | 171 | 1.3483 | 0.55 |
| 0.0108 | 10.0 | 190 | 1.4226 | 0.55 |
| 0.0082 | 11.0 | 209 | 1.4270 | 0.55 |
| 0.0065 | 12.0 | 228 | 1.5074 | 0.55 |
| 0.0059 | 13.0 | 247 | 1.5577 | 0.55 |
| 0.0044 | 14.0 | 266 | 1.5798 | 0.55 |
| 0.0042 | 15.0 | 285 | 1.6196 | 0.55 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,326 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-0 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1097
- Accuracy: 0.132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1065 | 1.0 | 5 | 1.1287 | 0.0 |
| 1.0592 | 2.0 | 10 | 1.1729 | 0.0 |
| 1.0059 | 3.0 | 15 | 1.1959 | 0.0 |
| 0.9129 | 4.0 | 20 | 1.2410 | 0.0 |
| 0.8231 | 5.0 | 25 | 1.2820 | 0.0 |
| 0.7192 | 6.0 | 30 | 1.3361 | 0.0 |
| 0.6121 | 7.0 | 35 | 1.4176 | 0.0 |
| 0.5055 | 8.0 | 40 | 1.5111 | 0.0 |
| 0.4002 | 9.0 | 45 | 1.5572 | 0.0 |
| 0.3788 | 10.0 | 50 | 1.6733 | 0.0 |
| 0.2755 | 11.0 | 55 | 1.7381 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,076 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-4 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1045
- Accuracy: 0.128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1115 | 1.0 | 5 | 1.1174 | 0.0 |
| 1.0518 | 2.0 | 10 | 1.1379 | 0.0 |
| 1.0445 | 3.0 | 15 | 1.1287 | 0.0 |
| 0.9306 | 4.0 | 20 | 1.1324 | 0.2 |
| 0.8242 | 5.0 | 25 | 1.1219 | 0.2 |
| 0.7986 | 6.0 | 30 | 1.1369 | 0.4 |
| 0.7369 | 7.0 | 35 | 1.1732 | 0.2 |
| 0.534 | 8.0 | 40 | 1.1828 | 0.6 |
| 0.4285 | 9.0 | 45 | 1.1482 | 0.6 |
| 0.3691 | 10.0 | 50 | 1.1401 | 0.6 |
| 0.3215 | 11.0 | 55 | 1.1286 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,076 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-5 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7214
- Accuracy: 0.37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0995 | 1.0 | 5 | 1.1301 | 0.0 |
| 1.0227 | 2.0 | 10 | 1.1727 | 0.0 |
| 1.0337 | 3.0 | 15 | 1.1734 | 0.2 |
| 0.9137 | 4.0 | 20 | 1.1829 | 0.2 |
| 0.8065 | 5.0 | 25 | 1.1496 | 0.4 |
| 0.7038 | 6.0 | 30 | 1.1101 | 0.4 |
| 0.6246 | 7.0 | 35 | 1.0982 | 0.2 |
| 0.4481 | 8.0 | 40 | 1.0913 | 0.2 |
| 0.3696 | 9.0 | 45 | 1.0585 | 0.4 |
| 0.3137 | 10.0 | 50 | 1.0418 | 0.4 |
| 0.2482 | 11.0 | 55 | 1.0078 | 0.4 |
| 0.196 | 12.0 | 60 | 0.9887 | 0.6 |
| 0.1344 | 13.0 | 65 | 0.9719 | 0.6 |
| 0.1014 | 14.0 | 70 | 1.0053 | 0.6 |
| 0.111 | 15.0 | 75 | 0.9653 | 0.6 |
| 0.0643 | 16.0 | 80 | 0.9018 | 0.6 |
| 0.0559 | 17.0 | 85 | 0.9393 | 0.6 |
| 0.0412 | 18.0 | 90 | 1.0210 | 0.6 |
| 0.0465 | 19.0 | 95 | 0.9965 | 0.6 |
| 0.0328 | 20.0 | 100 | 0.9739 | 0.6 |
| 0.0289 | 21.0 | 105 | 0.9796 | 0.6 |
| 0.0271 | 22.0 | 110 | 0.9968 | 0.6 |
| 0.0239 | 23.0 | 115 | 1.0143 | 0.6 |
| 0.0201 | 24.0 | 120 | 1.0459 | 0.6 |
| 0.0185 | 25.0 | 125 | 1.0698 | 0.6 |
| 0.0183 | 26.0 | 130 | 1.0970 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 3,005 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-6 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1275
- Accuracy: 0.3795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.11 | 1.0 | 5 | 1.1184 | 0.0 |
| 1.0608 | 2.0 | 10 | 1.1227 | 0.0 |
| 1.0484 | 3.0 | 15 | 1.1009 | 0.2 |
| 0.9614 | 4.0 | 20 | 1.1009 | 0.2 |
| 0.8545 | 5.0 | 25 | 1.0772 | 0.2 |
| 0.8241 | 6.0 | 30 | 1.0457 | 0.2 |
| 0.708 | 7.0 | 35 | 1.0301 | 0.4 |
| 0.5045 | 8.0 | 40 | 1.0325 | 0.4 |
| 0.4175 | 9.0 | 45 | 1.0051 | 0.4 |
| 0.3446 | 10.0 | 50 | 0.9610 | 0.4 |
| 0.2851 | 11.0 | 55 | 0.9954 | 0.4 |
| 0.1808 | 12.0 | 60 | 1.0561 | 0.4 |
| 0.1435 | 13.0 | 65 | 1.0218 | 0.4 |
| 0.1019 | 14.0 | 70 | 1.0254 | 0.4 |
| 0.0908 | 15.0 | 75 | 0.9935 | 0.4 |
| 0.0591 | 16.0 | 80 | 1.0090 | 0.4 |
| 0.0512 | 17.0 | 85 | 1.0884 | 0.4 |
| 0.0397 | 18.0 | 90 | 1.2732 | 0.4 |
| 0.039 | 19.0 | 95 | 1.2979 | 0.6 |
| 0.0325 | 20.0 | 100 | 1.2705 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,635 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-7 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1206
- Accuracy: 0.0555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
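The `linear` scheduler listed above decays the learning rate from 2e-05 down to 0 across the full run. As a rough pure-Python sketch of that schedule (assuming zero warmup steps, since the card lists none):

```python
def linear_lr(step, total_steps, base_lr=2e-5, warmup_steps=0):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```

With 50 epochs of 5 steps each (250 steps total), the rate at step 125 would be half the base rate.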
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1186 | 1.0 | 5 | 1.1631 | 0.0 |
| 1.058 | 2.0 | 10 | 1.1986 | 0.0 |
| 1.081 | 3.0 | 15 | 1.2111 | 0.0 |
| 1.0118 | 4.0 | 20 | 1.2373 | 0.0 |
| 0.9404 | 5.0 | 25 | 1.2645 | 0.0 |
| 0.9146 | 6.0 | 30 | 1.3258 | 0.0 |
| 0.8285 | 7.0 | 35 | 1.3789 | 0.0 |
| 0.6422 | 8.0 | 40 | 1.3783 | 0.0 |
| 0.6156 | 9.0 | 45 | 1.3691 | 0.0 |
| 0.5321 | 10.0 | 50 | 1.3693 | 0.0 |
| 0.4504 | 11.0 | 55 | 1.4000 | 0.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,077 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-8 | [
"hate speech",
"neither",
"offensive language"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0005
- Accuracy: 0.518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1029 | 1.0 | 5 | 1.1295 | 0.0 |
| 1.0472 | 2.0 | 10 | 1.1531 | 0.0 |
| 1.054 | 3.0 | 15 | 1.1475 | 0.0 |
| 0.9366 | 4.0 | 20 | 1.1515 | 0.0 |
| 0.8698 | 5.0 | 25 | 1.1236 | 0.4 |
| 0.8148 | 6.0 | 30 | 1.0716 | 0.6 |
| 0.6884 | 7.0 | 35 | 1.0662 | 0.6 |
| 0.5641 | 8.0 | 40 | 1.0671 | 0.6 |
| 0.5 | 9.0 | 45 | 1.0282 | 0.6 |
| 0.3882 | 10.0 | 50 | 1.0500 | 0.6 |
| 0.3522 | 11.0 | 55 | 1.1381 | 0.6 |
| 0.2492 | 12.0 | 60 | 1.1278 | 0.6 |
| 0.2063 | 13.0 | 65 | 1.0731 | 0.6 |
| 0.1608 | 14.0 | 70 | 1.1339 | 0.6 |
| 0.1448 | 15.0 | 75 | 1.1892 | 0.6 |
| 0.0925 | 16.0 | 80 | 1.1840 | 0.6 |
| 0.0768 | 17.0 | 85 | 1.0608 | 0.6 |
| 0.0585 | 18.0 | 90 | 1.1073 | 0.6 |
| 0.0592 | 19.0 | 95 | 1.3134 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,572 |
SetFit/distilbert-base-uncased__sst2__all-train | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2496
- Accuracy: 0.8962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3643 | 1.0 | 433 | 0.2496 | 0.8962 |
| 0.196 | 2.0 | 866 | 0.2548 | 0.9110 |
| 0.0915 | 3.0 | 1299 | 0.4483 | 0.8957 |
| 0.0505 | 4.0 | 1732 | 0.4968 | 0.9044 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
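Each epoch in the table above covers 433 optimizer steps at `train_batch_size: 16`, which implies roughly 433 × 16 ≈ 6,928 training examples (assuming no gradient accumulation — the card lists none). A small sketch of that arithmetic:

```python
import math

def steps_per_epoch(num_examples, batch_size, drop_last=False):
    """Optimizer steps per epoch, with or without dropping the final partial batch."""
    if drop_last:
        return num_examples // batch_size
    return math.ceil(num_examples / batch_size)
```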
| 1,613 |
SetFit/distilbert-base-uncased__sst2__train-16-0 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6903
- Accuracy: 0.5091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
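The Adam settings above (betas 0.9/0.999, epsilon 1e-08) correspond to the standard bias-corrected update rule. A minimal single-parameter sketch for illustration only — not the actual PyTorch implementation, which also handles tensors, weight decay, and AMP scaling:

```python
def adam_step(param, grad, m, v, t, lr=2e-5, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter; t is the 1-based step count."""
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```

On the very first step the bias-corrected update magnitude is approximately `lr`, regardless of the gradient's scale.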
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6934 | 1.0 | 7 | 0.7142 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7379 | 0.2857 |
| 0.6282 | 3.0 | 21 | 0.7769 | 0.2857 |
| 0.5193 | 4.0 | 28 | 0.8799 | 0.2857 |
| 0.5104 | 5.0 | 35 | 0.8380 | 0.4286 |
| 0.2504 | 6.0 | 42 | 0.8622 | 0.4286 |
| 0.1794 | 7.0 | 49 | 0.9227 | 0.4286 |
| 0.1156 | 8.0 | 56 | 0.8479 | 0.4286 |
| 0.0709 | 9.0 | 63 | 1.0929 | 0.2857 |
| 0.0471 | 10.0 | 70 | 1.2189 | 0.2857 |
| 0.0288 | 11.0 | 77 | 1.2026 | 0.4286 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,045 |
SetFit/distilbert-base-uncased__sst2__train-16-2 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6748
- Accuracy: 0.6315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7054 | 0.2857 |
| 0.6711 | 2.0 | 14 | 0.7208 | 0.2857 |
| 0.6311 | 3.0 | 21 | 0.7365 | 0.2857 |
| 0.551 | 4.0 | 28 | 0.7657 | 0.5714 |
| 0.5599 | 5.0 | 35 | 0.6915 | 0.5714 |
| 0.3167 | 6.0 | 42 | 0.7134 | 0.5714 |
| 0.2489 | 7.0 | 49 | 0.7892 | 0.5714 |
| 0.1985 | 8.0 | 56 | 0.6756 | 0.7143 |
| 0.0864 | 9.0 | 63 | 0.8059 | 0.5714 |
| 0.0903 | 10.0 | 70 | 0.8165 | 0.7143 |
| 0.0429 | 11.0 | 77 | 0.7947 | 0.7143 |
| 0.0186 | 12.0 | 84 | 0.8570 | 0.7143 |
| 0.0146 | 13.0 | 91 | 0.9346 | 0.7143 |
| 0.011 | 14.0 | 98 | 0.9804 | 0.7143 |
| 0.0098 | 15.0 | 105 | 1.0136 | 0.7143 |
| 0.0086 | 16.0 | 112 | 1.0424 | 0.7143 |
| 0.0089 | 17.0 | 119 | 1.0736 | 0.7143 |
| 0.0068 | 18.0 | 126 | 1.0808 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,479 |
SetFit/distilbert-base-uncased__sst2__train-16-3 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7887
- Accuracy: 0.6458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6928 | 1.0 | 7 | 0.6973 | 0.4286 |
| 0.675 | 2.0 | 14 | 0.7001 | 0.4286 |
| 0.6513 | 3.0 | 21 | 0.6959 | 0.4286 |
| 0.5702 | 4.0 | 28 | 0.6993 | 0.4286 |
| 0.5389 | 5.0 | 35 | 0.6020 | 0.7143 |
| 0.3386 | 6.0 | 42 | 0.5326 | 0.5714 |
| 0.2596 | 7.0 | 49 | 0.4943 | 0.7143 |
| 0.1633 | 8.0 | 56 | 0.3589 | 0.8571 |
| 0.1086 | 9.0 | 63 | 0.2924 | 0.8571 |
| 0.0641 | 10.0 | 70 | 0.2687 | 0.8571 |
| 0.0409 | 11.0 | 77 | 0.2202 | 0.8571 |
| 0.0181 | 12.0 | 84 | 0.2445 | 0.8571 |
| 0.0141 | 13.0 | 91 | 0.2885 | 0.8571 |
| 0.0108 | 14.0 | 98 | 0.3069 | 0.8571 |
| 0.009 | 15.0 | 105 | 0.3006 | 0.8571 |
| 0.0084 | 16.0 | 112 | 0.2834 | 0.8571 |
| 0.0088 | 17.0 | 119 | 0.2736 | 0.8571 |
| 0.0062 | 18.0 | 126 | 0.2579 | 0.8571 |
| 0.0058 | 19.0 | 133 | 0.2609 | 0.8571 |
| 0.0057 | 20.0 | 140 | 0.2563 | 0.8571 |
| 0.0049 | 21.0 | 147 | 0.2582 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,665 |
SetFit/distilbert-base-uncased__sst2__train-16-7 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6952
- Accuracy: 0.5025
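The accuracy metric reported throughout these cards is plain classification accuracy: the fraction of evaluation examples whose predicted label matches the reference. A minimal sketch:

```python
def accuracy(preds, labels):
    """Fraction of positions where prediction equals reference label."""
    assert len(preds) == len(labels), "prediction/label length mismatch"
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)
```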
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6949 | 1.0 | 7 | 0.7252 | 0.2857 |
| 0.6678 | 2.0 | 14 | 0.7550 | 0.2857 |
| 0.6299 | 3.0 | 21 | 0.8004 | 0.2857 |
| 0.5596 | 4.0 | 28 | 0.8508 | 0.2857 |
| 0.5667 | 5.0 | 35 | 0.8464 | 0.2857 |
| 0.367 | 6.0 | 42 | 0.8515 | 0.2857 |
| 0.2706 | 7.0 | 49 | 0.9574 | 0.2857 |
| 0.2163 | 8.0 | 56 | 0.9710 | 0.4286 |
| 0.1024 | 9.0 | 63 | 1.1607 | 0.1429 |
| 0.1046 | 10.0 | 70 | 1.3779 | 0.1429 |
| 0.0483 | 11.0 | 77 | 1.4876 | 0.1429 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,045 |
SetFit/distilbert-base-uncased__sst2__train-16-8 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6895
- Accuracy: 0.5222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6899 | 1.0 | 7 | 0.7055 | 0.2857 |
| 0.6793 | 2.0 | 14 | 0.7205 | 0.2857 |
| 0.6291 | 3.0 | 21 | 0.7460 | 0.2857 |
| 0.5659 | 4.0 | 28 | 0.8041 | 0.2857 |
| 0.5607 | 5.0 | 35 | 0.7785 | 0.4286 |
| 0.3349 | 6.0 | 42 | 0.8163 | 0.4286 |
| 0.2436 | 7.0 | 49 | 0.9101 | 0.2857 |
| 0.1734 | 8.0 | 56 | 0.8632 | 0.5714 |
| 0.1122 | 9.0 | 63 | 0.9851 | 0.5714 |
| 0.0661 | 10.0 | 70 | 1.0835 | 0.5714 |
| 0.0407 | 11.0 | 77 | 1.1656 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,045 |
SetFit/distilbert-base-uncased__sst2__train-16-9 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6915
- Accuracy: 0.5157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6868 | 1.0 | 7 | 0.7121 | 0.1429 |
| 0.6755 | 2.0 | 14 | 0.7234 | 0.1429 |
| 0.6389 | 3.0 | 21 | 0.7384 | 0.2857 |
| 0.5575 | 4.0 | 28 | 0.7884 | 0.2857 |
| 0.4972 | 5.0 | 35 | 0.7767 | 0.4286 |
| 0.2821 | 6.0 | 42 | 0.8275 | 0.4286 |
| 0.1859 | 7.0 | 49 | 0.9283 | 0.2857 |
| 0.1388 | 8.0 | 56 | 0.9384 | 0.4286 |
| 0.078 | 9.0 | 63 | 1.1973 | 0.4286 |
| 0.0462 | 10.0 | 70 | 1.4016 | 0.4286 |
| 0.0319 | 11.0 | 77 | 1.4087 | 0.4286 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,045 |
SetFit/distilbert-base-uncased__sst2__train-32-2 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4805
- Accuracy: 0.7699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7124 | 1.0 | 13 | 0.6882 | 0.5385 |
| 0.6502 | 2.0 | 26 | 0.6715 | 0.5385 |
| 0.6001 | 3.0 | 39 | 0.6342 | 0.6154 |
| 0.455 | 4.0 | 52 | 0.5713 | 0.7692 |
| 0.2605 | 5.0 | 65 | 0.5562 | 0.7692 |
| 0.1258 | 6.0 | 78 | 0.6799 | 0.7692 |
| 0.0444 | 7.0 | 91 | 0.8096 | 0.7692 |
| 0.0175 | 8.0 | 104 | 0.9281 | 0.6923 |
| 0.0106 | 9.0 | 117 | 0.9826 | 0.6923 |
| 0.0077 | 10.0 | 130 | 1.0254 | 0.7692 |
| 0.0056 | 11.0 | 143 | 1.0667 | 0.7692 |
| 0.0042 | 12.0 | 156 | 1.1003 | 0.7692 |
| 0.0036 | 13.0 | 169 | 1.1299 | 0.7692 |
| 0.0034 | 14.0 | 182 | 1.1623 | 0.6923 |
| 0.003 | 15.0 | 195 | 1.1938 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,293 |
SetFit/distilbert-base-uncased__sst2__train-32-5 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6248
- Accuracy: 0.6826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7136 | 1.0 | 13 | 0.6850 | 0.5385 |
| 0.6496 | 2.0 | 26 | 0.6670 | 0.6154 |
| 0.5895 | 3.0 | 39 | 0.6464 | 0.7692 |
| 0.4271 | 4.0 | 52 | 0.6478 | 0.7692 |
| 0.2182 | 5.0 | 65 | 0.6809 | 0.6923 |
| 0.103 | 6.0 | 78 | 0.9119 | 0.6923 |
| 0.0326 | 7.0 | 91 | 1.0718 | 0.6923 |
| 0.0154 | 8.0 | 104 | 1.0721 | 0.7692 |
| 0.0087 | 9.0 | 117 | 1.1416 | 0.7692 |
| 0.0067 | 10.0 | 130 | 1.2088 | 0.7692 |
| 0.005 | 11.0 | 143 | 1.2656 | 0.7692 |
| 0.0037 | 12.0 | 156 | 1.3104 | 0.7692 |
| 0.0032 | 13.0 | 169 | 1.3428 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,169 |
SetFit/distilbert-base-uncased__sst2__train-32-6 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-32-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5072
- Accuracy: 0.7650
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7057 | 1.0 | 13 | 0.6704 | 0.6923 |
| 0.6489 | 2.0 | 26 | 0.6228 | 0.8462 |
| 0.5475 | 3.0 | 39 | 0.5079 | 0.8462 |
| 0.4014 | 4.0 | 52 | 0.4203 | 0.8462 |
| 0.1923 | 5.0 | 65 | 0.3872 | 0.8462 |
| 0.1014 | 6.0 | 78 | 0.4909 | 0.8462 |
| 0.0349 | 7.0 | 91 | 0.5460 | 0.8462 |
| 0.0173 | 8.0 | 104 | 0.4867 | 0.8462 |
| 0.0098 | 9.0 | 117 | 0.5274 | 0.8462 |
| 0.0075 | 10.0 | 130 | 0.6086 | 0.8462 |
| 0.0057 | 11.0 | 143 | 0.6604 | 0.8462 |
| 0.0041 | 12.0 | 156 | 0.6904 | 0.8462 |
| 0.0037 | 13.0 | 169 | 0.7164 | 0.8462 |
| 0.0034 | 14.0 | 182 | 0.7368 | 0.8462 |
| 0.0031 | 15.0 | 195 | 0.7565 | 0.8462 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,293 |
SetFit/distilbert-base-uncased__sst2__train-8-0 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6920
- Accuracy: 0.5189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6916 | 1.0 | 3 | 0.7035 | 0.25 |
| 0.6852 | 2.0 | 6 | 0.7139 | 0.25 |
| 0.6533 | 3.0 | 9 | 0.7192 | 0.25 |
| 0.6211 | 4.0 | 12 | 0.7322 | 0.25 |
| 0.5522 | 5.0 | 15 | 0.7561 | 0.25 |
| 0.488 | 6.0 | 18 | 0.7883 | 0.25 |
| 0.48 | 7.0 | 21 | 0.8224 | 0.25 |
| 0.3948 | 8.0 | 24 | 0.8605 | 0.25 |
| 0.3478 | 9.0 | 27 | 0.8726 | 0.25 |
| 0.2723 | 10.0 | 30 | 0.8885 | 0.25 |
| 0.2174 | 11.0 | 33 | 0.8984 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,043 |
SetFit/distilbert-base-uncased__sst2__train-8-1 | [
"negative",
"positive"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6930
- Accuracy: 0.5047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7082 | 1.0 | 3 | 0.7048 | 0.25 |
| 0.6761 | 2.0 | 6 | 0.7249 | 0.25 |
| 0.6653 | 3.0 | 9 | 0.7423 | 0.25 |
| 0.6212 | 4.0 | 12 | 0.7727 | 0.25 |
| 0.5932 | 5.0 | 15 | 0.8098 | 0.25 |
| 0.5427 | 6.0 | 18 | 0.8496 | 0.25 |
| 0.5146 | 7.0 | 21 | 0.8992 | 0.25 |
| 0.4356 | 8.0 | 24 | 0.9494 | 0.25 |
| 0.4275 | 9.0 | 27 | 0.9694 | 0.25 |
| 0.3351 | 10.0 | 30 | 0.9968 | 0.25 |
| 0.2812 | 11.0 | 33 | 1.0056 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 2,043 |