modelId stringlengths 6 107 | label list | readme stringlengths 0 56.2k | readme_len int64 0 56.2k |
|---|---|---|---|
anahitapld/electra-base-dbd | null | ---
license: apache-2.0
---
| 28 |
tbasic5/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.925022224520608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.925
- F1: 0.9250
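Not part of the original card: given the label list above, a minimal sketch of mapping the classifier's raw logits to an emotion. The label order is assumed to follow the usual `id2label` index convention, and the logit values are made up for illustration:

```python
import math

# Labels as listed for this model (index order is an assumption)
labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def softmax(logits):
    # Numerically stable softmax over a list of raw scores
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw logits from the classification head
logits = [-1.2, 4.1, 0.3, -0.8, -1.0, 0.5]
probs = softmax(logits)
pred = labels[probs.index(max(probs))]
print(pred)  # prints: joy
```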
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
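For reference, `lr_scheduler_type: linear` decays the learning rate linearly to zero over training, optionally after a warmup phase. A rough pure-Python sketch of that behavior (a sketch of the schedule itself, not the transformers implementation):

```python
def linear_lr(step, total_steps, base_lr=2e-5, warmup_steps=0):
    """Linear warmup, then linear decay to zero (behavior sketch only)."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# 2 epochs x 250 steps/epoch = 500 optimizer steps, matching this card's step counts
print(linear_lr(0, 500))    # 2e-05 at step 0
print(linear_lr(250, 500))  # 1e-05 at the halfway point
print(linear_lr(500, 500))  # 0.0 at the end
```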
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8521 | 1.0 | 250 | 0.3164 | 0.907 | 0.9038 |
| 0.2549 | 2.0 | 500 | 0.2222 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,803 |
mhaegeman/autotrain-country-recognition-1059336697 | [
"Austria",
"Belgium",
"Denmark",
"Finland",
"France",
"Germany",
"Israel",
"Italy",
"Netherlands",
"Norway",
"Poland",
"Portugal",
"Saudi Arabia",
"South Africa",
"Spain",
"Sweden",
"Switzerland",
"Turkey",
"United Arab Emirates",
"United Kingdom",
"United States"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- mhaegeman/autotrain-data-country-recognition
co2_eq_emissions: 0.02952188223491361
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1059336697
- CO2 Emissions (in grams): 0.02952188223491361
## Validation Metrics
- Loss: 0.06108148396015167
- Accuracy: 0.9879569162920872
- Macro F1: 0.9765004449554612
- Micro F1: 0.9879569162920872
- Weighted F1: 0.9879450113590053
- Macro Precision: 0.9784321161207384
- Micro Precision: 0.9879569162920872
- Weighted Precision: 0.9880404765946114
- Macro Recall: 0.9748417542427885
- Micro Recall: 0.9879569162920872
- Weighted Recall: 0.9879569162920872
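Macro F1 averages per-class F1 scores equally, while micro F1 pools all decisions — which is why the Micro F1 values above match the Accuracy: for single-label multi-class classification they coincide. A small self-contained sketch with made-up predictions (not data from this model):

```python
def f1_scores(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    per_class = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    macro = sum(per_class) / len(per_class)
    # Micro F1 equals accuracy for single-label multi-class classification
    micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return macro, micro

# Illustrative toy labels, not the model's validation set
y_true = ["France", "France", "Spain", "Spain", "Turkey", "Turkey"]
y_pred = ["France", "Spain", "Spain", "Spain", "Turkey", "France"]
macro, micro = f1_scores(y_true, y_pred)
```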
## Usage
You can use cURL to access this model:
```shell
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/mhaegeman/autotrain-country-recognition-1059336697
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mhaegeman/autotrain-country-recognition-1059336697", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mhaegeman/autotrain-country-recognition-1059336697", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,443 |
Pro0100Hy6/test_trainer | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.6375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7753 | 1.0 | 400 | 0.7773 | 0.6375 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,323 |
Vinz9899/dumy-model | null | Entry not found | 15 |
PGT/old_pretrained-transformer-20epochs | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | Entry not found | 15 |
dee4hf/autotrain-deephate2-1093539673 | [
"Geopolitical",
"Personal",
"Political",
"Religious"
] | ---
tags: autotrain
language: bn
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dee4hf/autotrain-data-deephate2
co2_eq_emissions: 7.663051290039914
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1093539673
- CO2 Emissions (in grams): 7.663051290039914
## Validation Metrics
- Loss: 0.34404119849205017
- Accuracy: 0.8843120070113936
- Macro F1: 0.8771237753798016
- Micro F1: 0.8843120070113936
- Weighted F1: 0.8843498914288083
- Macro Precision: 0.8745249813256932
- Micro Precision: 0.8843120070113936
- Weighted Precision: 0.8854719661321065
- Macro Recall: 0.8812563739901838
- Micro Recall: 0.8843120070113936
- Weighted Recall: 0.8843120070113936
## Usage
You can use cURL to access this model:
```shell
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dee4hf/autotrain-deephate2-1093539673
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dee4hf/autotrain-deephate2-1093539673", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dee4hf/autotrain-deephate2-1093539673", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,387 |
ltrctelugu/bigram | null | hello
| 6 |
Dror/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8721311475409836
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2979
- Accuracy: 0.87
- F1: 0.8721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,505 |
juliensimon/distilbert-imdb-mlflow | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-imdb-mlflow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb-mlflow
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the imdb dataset.
MLflow logs are included. To visualize them, clone the repo and run:
```shell
mlflow ui
```
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,150 |
rajpurkarlab/biobert-finetuned-change-classification | null | Entry not found | 15 |
leokai/distilbert-base-uncased-finetuned-wikiandmark_epoch20 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-wikiandmark_epoch20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-wikiandmark_epoch20
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0561
- Accuracy: 0.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0224 | 1.0 | 1859 | 0.0277 | 0.9919 |
| 0.0103 | 2.0 | 3718 | 0.0298 | 0.9925 |
| 0.0047 | 3.0 | 5577 | 0.0429 | 0.9924 |
| 0.0038 | 4.0 | 7436 | 0.0569 | 0.9922 |
| 0.0019 | 5.0 | 9295 | 0.0554 | 0.9936 |
| 0.0028 | 6.0 | 11154 | 0.0575 | 0.9928 |
| 0.002 | 7.0 | 13013 | 0.0544 | 0.9926 |
| 0.0017 | 8.0 | 14872 | 0.0553 | 0.9935 |
| 0.001 | 9.0 | 16731 | 0.0498 | 0.9924 |
| 0.0001 | 10.0 | 18590 | 0.0398 | 0.9934 |
| 0.0 | 11.0 | 20449 | 0.0617 | 0.9935 |
| 0.0002 | 12.0 | 22308 | 0.0561 | 0.9944 |
| 0.0002 | 13.0 | 24167 | 0.0755 | 0.9934 |
| 0.0 | 14.0 | 26026 | 0.0592 | 0.9941 |
| 0.0 | 15.0 | 27885 | 0.0572 | 0.9939 |
| 0.0 | 16.0 | 29744 | 0.0563 | 0.9941 |
| 0.0 | 17.0 | 31603 | 0.0587 | 0.9936 |
| 0.0005 | 18.0 | 33462 | 0.0673 | 0.9937 |
| 0.0 | 19.0 | 35321 | 0.0651 | 0.9933 |
| 0.0 | 20.0 | 37180 | 0.0683 | 0.9936 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 2,613 |
James-kc-min/L_Roberta3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: L_Roberta3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# L_Roberta3
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2095
- Accuracy: 0.9555
- F1: 0.9555
- Precision: 0.9555
- Recall: 0.9555
- C Report: precision recall f1-score support
0 0.97 0.95 0.96 876
1 0.94 0.97 0.95 696
accuracy 0.96 1572
macro avg 0.95 0.96 0.96 1572
weighted avg 0.96 0.96 0.96 1572
- C Matrix: None
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | C Report | C Matrix |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------:|
| 0.2674 | 1.0 | 329 | 0.2436 | 0.9389 | 0.9389 | 0.9389 | 0.9389 | precision recall f1-score support
0 0.94 0.95 0.95 876
1 0.94 0.92 0.93 696
accuracy 0.94 1572
macro avg 0.94 0.94 0.94 1572
weighted avg 0.94 0.94 0.94 1572
| None |
| 0.1377 | 2.0 | 658 | 0.1506 | 0.9408 | 0.9408 | 0.9408 | 0.9408 | precision recall f1-score support
0 0.97 0.92 0.95 876
1 0.91 0.96 0.94 696
accuracy 0.94 1572
macro avg 0.94 0.94 0.94 1572
weighted avg 0.94 0.94 0.94 1572
| None |
| 0.0898 | 3.0 | 987 | 0.1491 | 0.9548 | 0.9548 | 0.9548 | 0.9548 | precision recall f1-score support
0 0.96 0.96 0.96 876
1 0.95 0.95 0.95 696
accuracy 0.95 1572
macro avg 0.95 0.95 0.95 1572
weighted avg 0.95 0.95 0.95 1572
| None |
| 0.0543 | 4.0 | 1316 | 0.1831 | 0.9561 | 0.9561 | 0.9561 | 0.9561 | precision recall f1-score support
0 0.97 0.95 0.96 876
1 0.94 0.96 0.95 696
accuracy 0.96 1572
macro avg 0.95 0.96 0.96 1572
weighted avg 0.96 0.96 0.96 1572
| None |
| 0.0394 | 5.0 | 1645 | 0.2095 | 0.9555 | 0.9555 | 0.9555 | 0.9555 | precision recall f1-score support
0 0.97 0.95 0.96 876
1 0.94 0.97 0.95 696
accuracy 0.96 1572
macro avg 0.95 0.96 0.96 1572
weighted avg 0.96 0.96 0.96 1572
| None |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
| 4,623 |
anneke/finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1289
- Accuracy: 0.977
- F1: 0.9878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,327 |
johnheo1128/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5477951635989807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8081
- Matthews Correlation: 0.5478
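Matthews correlation is computed from the full 2x2 confusion matrix and stays informative under the class imbalance typical of CoLA. A minimal sketch of the formula with illustrative counts (not the actual evaluation-set counts):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Toy confusion-matrix counts chosen for illustration
mcc = matthews_corrcoef(tp=40, tn=35, fp=10, fn=15)  # roughly 0.50 for these counts
```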
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5222 | 1.0 | 535 | 0.5270 | 0.4182 |
| 0.3451 | 2.0 | 1070 | 0.5017 | 0.4810 |
| 0.2309 | 3.0 | 1605 | 0.5983 | 0.5314 |
| 0.179 | 4.0 | 2140 | 0.7488 | 0.5291 |
| 0.1328 | 5.0 | 2675 | 0.8081 | 0.5478 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,999 |
CShorten/ArXiv-Cross-Encoder-Title-Abstracts | null | Entry not found | 15 |
tattle-admin/july22-xlmtwtroberta-da-multi | null | Entry not found | 15 |
SIMAS-UN/blaming_locals | null | Entry not found | 15 |
Yuetian/bert-base-uncased-finetuned-plutchik-emotion | [
"anger",
"anticipation",
"disgust",
"fear",
"joy",
"sadness",
"surprise",
"trust"
] | ---
license: mit
---
| 21 |
09panesara/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5406394412669151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7580
- Matthews Correlation: 0.5406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5307 | 1.0 | 535 | 0.5094 | 0.4152 |
| 0.3545 | 2.0 | 1070 | 0.5230 | 0.4940 |
| 0.2371 | 3.0 | 1605 | 0.6412 | 0.5087 |
| 0.1777 | 4.0 | 2140 | 0.7580 | 0.5406 |
| 0.1288 | 5.0 | 2675 | 0.8494 | 0.5396 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 2,000 |
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.9.0
- Pytorch 1.7.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,130 |
Ahren09/distilbert-base-uncased-finetuned-cola | null | Entry not found | 15 |
Alireza1044/albert-base-v2-qqp | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model_index:
- name: qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metric:
name: F1
type: f1
value: 0.8722569490623753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qqp
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3695
- Accuracy: 0.9050
- F1: 0.8723
- Combined Score: 0.8886
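The `Combined Score` is, presumably, the unweighted mean of the task's two metrics; a one-line sketch using the values reported above:

```python
accuracy, f1 = 0.9050, 0.8723
combined_score = (accuracy + f1) / 2
print(combined_score)  # close to the 0.8886 reported above
```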
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| 1,397 |
Alireza1044/bert_classification_lm | null | A simple model trained on dialogues of characters in the NBC series `The Office`. The model performs binary classification between `Michael Scott`'s and `Dwight Schrute`'s dialogues.
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-c3ow" colspan="2">Label Definitions</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow">Label 0</td>
<td class="tg-c3ow">Michael</td>
</tr>
<tr>
<td class="tg-c3ow">Label 1</td>
<td class="tg-c3ow">Dwight</td>
</tr>
</tbody>
</table> | 990 |
Amalq/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5335074704896392
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7570
- Matthews Correlation: 0.5335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5315 | 1.0 | 535 | 0.5214 | 0.4009 |
| 0.354 | 2.0 | 1070 | 0.5275 | 0.4857 |
| 0.2396 | 3.0 | 1605 | 0.6610 | 0.4901 |
| 0.1825 | 4.0 | 2140 | 0.7570 | 0.5335 |
| 0.1271 | 5.0 | 2675 | 0.8923 | 0.5074 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 2,000 |
Anamika/autonlp-fa-473312409 | [
"Claim",
"Concluding Statement",
"Counterclaim",
"Evidence",
"Lead",
"Position",
"Rebuttal"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Anamika/autonlp-data-fa
co2_eq_emissions: 25.128735714898614
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 473312409
- CO2 Emissions (in grams): 25.128735714898614
## Validation Metrics
- Loss: 0.6010786890983582
- Accuracy: 0.7990650945370823
- Macro F1: 0.7429662929144928
- Micro F1: 0.7990650945370823
- Weighted F1: 0.7977660363770382
- Macro Precision: 0.7744390888231261
- Micro Precision: 0.7990650945370823
- Weighted Precision: 0.800444194278352
- Macro Recall: 0.7198278524814119
- Micro Recall: 0.7990650945370823
- Weighted Recall: 0.7990650945370823
## Usage
You can use cURL to access this model:
```shell
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-fa-473312409
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,341 |
AnonARR/qqp-bert | [
"duplicate",
"not_duplicate"
] | Entry not found | 15 |
AnonymousSub/cline-s10-AR | null | Entry not found | 15 |
AnonymousSub/cline_wikiqa | null | Entry not found | 15 |
AnonymousSub/consert-s10-SR | null | Entry not found | 15 |
AnonymousSub/declutr-emanuals-s10-AR | null | Entry not found | 15 |
AnonymousSub/declutr-emanuals-s10-SR | null | Entry not found | 15 |
AnonymousSub/declutr-model_wikiqa | null | Entry not found | 15 |
AnonymousSub/declutr-s10-AR | null | Entry not found | 15 |
AnonymousSub/declutr-s10-SR | null | Entry not found | 15 |
AnonymousSub/dummy_1 | null | Entry not found | 15 |
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_wikiqa | null | Entry not found | 15 |
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_wikiqa | null | Entry not found | 15 |
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_wikiqa | null | Entry not found | 15 |
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1_wikiqa | null | Entry not found | 15 |
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1_wikiqa | null | Entry not found | 15 |
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa | null | Entry not found | 15 |
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1_wikiqa | null | Entry not found | 15 |
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1_wikiqa | null | Entry not found | 15 |
Ateeb/FullEmotionDetector | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_3",
"LABEL_4",
"LABEL_5",
... | Entry not found | 15 |
CenIA/albert-base-spanish-finetuned-mldoc | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
CenIA/albert-large-spanish-finetuned-xnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
CenIA/albert-xlarge-spanish-finetuned-mldoc | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
CenIA/albert-xlarge-spanish-finetuned-pawsx | null | Entry not found | 15 |
CenIA/albert-xlarge-spanish-finetuned-xnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
CenIA/albert-xxlarge-spanish-finetuned-mldoc | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
CenIA/bert-base-spanish-wwm-cased-finetuned-pawsx | null | Entry not found | 15 |
CenIA/distillbert-base-spanish-uncased-finetuned-pawsx | null | Entry not found | 15 |
Cheatham/xlm-roberta-base-finetuned | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned-d1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned-d12 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned3 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
CleveGreen/FieldClassifier_v2_gpt | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"... | Entry not found | 15 |
CodeNinja1126/test-model | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_23",
"LABEL_24",
"LABEL_25",
"LABEL_26",
"LABEL_27",
"LABEL_28",
"LABEL_29",... | Entry not found | 15 |
DSI/TweetBasedSA | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
EhsanAghazadeh/bert-based-uncased-sst2-e1 | [
"negative",
"positive"
] | Entry not found | 15 |
EhsanAghazadeh/bert-based-uncased-sst2-e6 | [
"negative",
"positive"
] | Entry not found | 15 |
EhsanAghazadeh/electra-base-avg-2e-5-lcc | null | Entry not found | 15 |
EhsanAghazadeh/electra-large-lcc-2e-5-42 | null | Entry not found | 15 |
Eugenia/roberta-base-bne-finetuned-amazon_reviews_multi | null | Entry not found | 15 |
HackMIT/double-agent | null | Entry not found | 15 |
Hate-speech-CNERG/deoffxlmr-mono-tamil | [
"Not_offensive",
"Not_in_intended_language",
"Off_target_other",
"Off_target_group",
"Profanity",
"Off_target_ind"
] | ---
language: ta
license: apache-2.0
---
This model is used to detect **Offensive Content** in **Tamil Code-Mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Tamil (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base, further pretrained with Masked Language Modelling on the target dataset, and then fine-tuned using Cross-Entropy Loss.
This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. A genetic-algorithm-based ensemble of test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.76, ensemble - 0.78)
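A minimal usage sketch (not part of the original card): the label names are taken from this entry's label column, the example logits are invented, and the heavy pipeline call is kept inside a function so nothing is downloaded at import time.

```python
import math
from typing import List

# Label set as listed for this checkpoint.
LABELS = [
    "Not_offensive", "Not_in_intended_language", "Off_target_other",
    "Off_target_group", "Profanity", "Off_target_ind",
]

def pick_label(logits: List[float]) -> str:
    """Softmax over raw logits, then return the highest-probability label."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return LABELS[probs.index(max(probs))]

def classify(text: str) -> str:
    """Full pipeline call; needs `transformers` and a network connection."""
    from transformers import pipeline
    clf = pipeline("text-classification",
                   model="Hate-speech-CNERG/deoffxlmr-mono-tamil")
    return clf(text)[0]["label"]
```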
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ | 2,503 |
Hormigo/roberta-base-bne-finetuned-amazon_reviews_multi | null | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model_index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metric:
name: Accuracy
type: accuracy
value: 0.9335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2275
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1909 | 1.0 | 1250 | 0.1717 | 0.9333 |
| 0.0932 | 2.0 | 2500 | 0.2275 | 0.9335 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 1,750 |
Huffon/qnli | null | Entry not found | 15 |
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog | [
"chitchat_ask_bye",
"chitchat_ask_hi",
"chitchat_ask_hi_de",
"chitchat_ask_hi_en",
"chitchat_ask_hi_fr",
"chitchat_ask_hoe_gaat_het",
"chitchat_ask_name",
"chitchat_ask_thanks",
"faq_ask_aantal_gevaccineerd",
"faq_ask_aantal_gevaccineerd_wereldwijd",
"faq_ask_afspraak_afzeggen",
"faq_ask_afspr... | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog
This model is a fine-tuned version of [outputDA/checkpoint-7710](https://huggingface.co/outputDA/checkpoint-7710) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5025
- Accuracy: 0.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.9925 | 1.0 | 1320 | 3.0954 | 0.4223 |
| 2.5041 | 2.0 | 2640 | 1.9762 | 0.6563 |
| 1.8061 | 3.0 | 3960 | 1.3196 | 0.7952 |
| 1.0694 | 4.0 | 5280 | 0.9304 | 0.8510 |
| 0.6479 | 5.0 | 6600 | 0.6875 | 0.8821 |
| 0.4408 | 6.0 | 7920 | 0.5692 | 0.8976 |
| 0.2542 | 7.0 | 9240 | 0.5291 | 0.8949 |
| 0.1709 | 8.0 | 10560 | 0.5038 | 0.9059 |
| 0.1181 | 9.0 | 11880 | 0.4885 | 0.9049 |
| 0.0878 | 10.0 | 13200 | 0.4900 | 0.9049 |
| 0.0702 | 11.0 | 14520 | 0.4930 | 0.9086 |
| 0.0528 | 12.0 | 15840 | 0.4987 | 0.9113 |
| 0.0406 | 13.0 | 17160 | 0.5009 | 0.9113 |
| 0.0321 | 14.0 | 18480 | 0.5017 | 0.9104 |
| 0.0308 | 15.0 | 19800 | 0.5025 | 0.9077 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 2,282 |
Katsiaryna/distilbert-base-uncased-finetuned | [
"LABEL_0"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8229
- Accuracy: 0.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.7709 | 0.74 |
| No log | 2.0 | 14 | 0.7048 | 0.72 |
| No log | 3.0 | 21 | 0.8728 | 0.46 |
| No log | 4.0 | 28 | 0.7849 | 0.64 |
| No log | 5.0 | 35 | 0.8229 | 0.54 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,617 |
Katsiaryna/qnli-electra-base-finetuned_9th_auc_ce | [
"LABEL_0"
] | Entry not found | 15 |
Katsiaryna/qnli-electra-base-finetuned_9th_auc_ce_diff | [
"LABEL_0"
] | Entry not found | 15 |
Katsiaryna/qnli-electra-base-finetuned_auc | [
"LABEL_0"
] | Entry not found | 15 |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc | [
"LABEL_0"
] | Entry not found | 15 |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3 | [
"LABEL_0"
] | Entry not found | 15 |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_161221-top3 | [
"LABEL_0"
] | Entry not found | 15 |
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_40000-top3-BCE | [
"LABEL_0"
] | Entry not found | 15 |
Kien/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5232819075279987
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5327
- Matthews Correlation: 0.5233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5314 | 1.0 | 535 | 0.4955 | 0.4270 |
| 0.3545 | 2.0 | 1070 | 0.5327 | 0.5233 |
| 0.2418 | 3.0 | 1605 | 0.6180 | 0.5132 |
| 0.1722 | 4.0 | 2140 | 0.7344 | 0.5158 |
| 0.1243 | 5.0 | 2675 | 0.8581 | 0.5196 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 2,000 |
Kumicho/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5258663312307151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7758
- Matthews Correlation: 0.5259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1926 | 1.0 | 535 | 0.7758 | 0.5259 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,704 |
LilaBoualili/bert-pre-doc | null | Entry not found | 15 |
LilaBoualili/electra-pre-doc | null | Entry not found | 15 |
LilaBoualili/electra-pre-pair | null | Entry not found | 15 |
LilaBoualili/electra-sim-doc | null | Entry not found | 15 |
LilaBoualili/electra-vanilla | null | At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task. It can be loaded with the TF/AutoModelForSequenceClassification classes, but it uses the same classification head defined for BERT, similar to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation.
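A short re-ranking sketch under assumptions not stated in the card (that the checkpoint id above is the Hub name, and that class index 1 is the "relevant" class); the model call is lazy, so only the pair-building helper runs without `transformers` installed.

```python
from typing import List, Tuple

def make_pairs(query: str, passages: List[str]) -> List[Tuple[str, str]]:
    """Cross-encoder input: one (query, passage) pair per candidate."""
    return [(query, p) for p in passages]

def score(query: str, passages: List[str]) -> List[float]:
    """Relevance scores for ad hoc ranking; downloads the checkpoint."""
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch
    name = "LilaBoualili/electra-vanilla"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    queries, docs = zip(*make_pairs(query, passages))
    enc = tok(list(queries), list(docs), padding=True, truncation=True,
              return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    # Assumption: index 1 is the relevant class.
    return logits.softmax(-1)[:, 1].tolist()
```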
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. | 476 |
Lumos/imdb3_hga | null | Entry not found | 15 |
Lumos/yahoo1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
M-FAC/bert-mini-finetuned-qqp | null | # BERT-mini model finetuned with M-FAC
This model is finetuned on QQP dataset with state-of-the-art second-order optimizer M-FAC.
Check the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
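The "swap Adam with M-FAC" step can be sketched as below. This is a hypothetical illustration: the actual `MFAC` class name, import path, and constructor arguments come from the IST-DASLab/M-FAC repository linked further down and may differ.

```python
import torch

def build_optimizer(model: torch.nn.Module, use_mfac: bool = False):
    """Return the fine-tuning optimizer; the M-FAC branch is hypothetical."""
    params = [p for p in model.parameters() if p.requires_grad]
    if use_mfac:
        # Hypothetical import path; see github.com/IST-DASLab/M-FAC for the
        # real optimizer and its step-by-step integration tutorial.
        from mfac.optim import MFAC
        return MFAC(params, lr=1e-4, num_grads=1024, damp=1e-6)
    # Default Adam baseline used for the comparison in the table below.
    return torch.optim.Adam(params, lr=1e-4)
```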
## Results
We share the best model out of 5 runs with the following score on QQP validation set:
```bash
f1 = 82.98
accuracy = 87.03
```
Mean and standard deviation for 5 runs on QQP validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 82.43 ± 0.10 | 86.45 ± 0.12 |
| M-FAC | 82.67 ± 0.23 | 86.75 ± 0.20 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 10723 \
--model_name_or_path prajjwal1/bert-mini \
--task_name qqp \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| 2,785 |
M-FAC/bert-tiny-finetuned-qnli | null | # BERT-tiny model finetuned with M-FAC
This model is finetuned on QNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QNLI validation set:
```bash
accuracy = 81.54
```
Mean and standard deviation for 5 runs on QNLI validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 77.85 ± 0.15 |
| M-FAC | 81.17 ± 0.43 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name qnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| 2,729 |
Maelstrom77/roblclass | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
Maha/OGBV-gender-indicbert-ta-fire20_fin | null | Entry not found | 15 |
Maha/hin-trac2 | null | Entry not found | 15 |
Maunish/kgrouping-roberta-large | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
MickyMike/0-GPT2SP-duracloud | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/0-GPT2SP-mesos | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/0-GPT2SP-mule | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/00-GPT2SP-appceleratorstudio-aptanastudio | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/00-GPT2SP-appceleratorstudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |
MickyMike/00-GPT2SP-aptanastudio-titanium | [
"LABEL_0"
] | Entry not found | 15 |