| modelId | label | readme | readme_len |
|---|---|---|---|
diegozs97/finetuned-chemprot-seed-0-60k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-0-700k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-1-0k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-1-1500k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-1-1800k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-1-200k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-1-20k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-1-400k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-1-60k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-1-700k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-0k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-1000k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-100k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-1500k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-1800k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-2000k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-200k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-20k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-400k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-60k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-2-700k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-3-1000k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-3-1500k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-3-1800k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-3-200k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-3-20k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-3-400k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-3-60k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-3-700k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-0k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-1000k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-100k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-1500k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-1800k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-2000k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-20k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-60k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-chemprot-seed-4-700k | [
"CPR:3",
"CPR:4",
"CPR:5",
"CPR:6",
"CPR:9",
"false"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-0-0k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-0-1500k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-0-1800k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-0-200k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-0-700k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-1-0k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-1-1500k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-1-2000k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-1-200k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-1-20k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-2-0k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-2-100k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-2-1500k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-2-1800k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-2-200k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-2-20k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-2-400k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-2-700k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-3-1500k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-3-2000k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-3-200k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-3-20k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-3-60k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
diegozs97/finetuned-sciie-seed-3-700k | [
"COMPARE",
"CONJUNCTION",
"EVALUATE-FOR",
"FEATURE-OF",
"HYPONYM-OF",
"PART-OF",
"USED-FOR"
] | Entry not found | 15 |
ds198799/autonlp-predict_ROI_1-29797730 | [
"1.0",
"2.0",
"3.0"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- ds198799/autonlp-data-predict_ROI_1
co2_eq_emissions: 2.2439127664461718
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797730
- CO2 Emissions (in grams): 2.2439127664461718
## Validation Metrics
- Loss: 0.6314184069633484
- Accuracy: 0.7596774193548387
- Macro F1: 0.4740565300039588
- Micro F1: 0.7596774193548386
- Weighted F1: 0.7371623804622154
- Macro Precision: 0.6747804619412134
- Micro Precision: 0.7596774193548387
- Weighted Precision: 0.7496542175358931
- Macro Recall: 0.47743727441146655
- Micro Recall: 0.7596774193548387
- Weighted Recall: 0.7596774193548387
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ds198799/autonlp-predict_ROI_1-29797730
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ds198799/autonlp-predict_ROI_1-29797730", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ds198799/autonlp-predict_ROI_1-29797730", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,387 |
edwardgowsmith/pt-finegrained-zero-shot | null | Entry not found | 15 |
edwardgowsmith/xlnet-base-cased-train-from-dev-best | null | Entry not found | 15 |
emfa/l-lectra-danish-finetuned-hatespeech | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: l-lectra-danish-finetuned-hatespeech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# l-lectra-danish-finetuned-hatespeech
This model is for a university project and is uploaded for sharing between students. It was trained on a Danish hate-speech-labeled training set. Feel free to use it, but as of now, we don't promise any good results ;-)
This model is a fine-tuned version of [Maltehb/-l-ctra-danish-electra-small-uncased](https://huggingface.co/Maltehb/-l-ctra-danish-electra-small-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 315 | 0.2561 |
| 0.291 | 2.0 | 630 | 0.2491 |
| 0.291 | 3.0 | 945 | 0.2434 |
| 0.2089 | 4.0 | 1260 | 0.2608 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,715 |
evandrodiniz/autonlp-api-boamente-417310788 | [
"negative",
"positive"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- evandrodiniz/autonlp-data-api-boamente
co2_eq_emissions: 6.826886567147602
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 417310788
- CO2 Emissions (in grams): 6.826886567147602
## Validation Metrics
- Loss: 0.20949310064315796
- Accuracy: 0.9578392621870883
- Precision: 0.9476190476190476
- Recall: 0.9045454545454545
- AUC: 0.9714032720526227
- F1: 0.9255813953488372
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/evandrodiniz/autonlp-api-boamente-417310788
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("evandrodiniz/autonlp-api-boamente-417310788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("evandrodiniz/autonlp-api-boamente-417310788", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,187 |
evandrodiniz/autonlp-api-boamente-417310793 | [
"negative",
"positive"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- evandrodiniz/autonlp-data-api-boamente
co2_eq_emissions: 9.446754273734577
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 417310793
- CO2 Emissions (in grams): 9.446754273734577
## Validation Metrics
- Loss: 0.25755178928375244
- Accuracy: 0.9407114624505929
- Precision: 0.8600823045267489
- Recall: 0.95
- AUC: 0.9732501264968797
- F1: 0.9028077753779697
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/evandrodiniz/autonlp-api-boamente-417310793
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("evandrodiniz/autonlp-api-boamente-417310793", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("evandrodiniz/autonlp-api-boamente-417310793", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,173 |
fadhilarkan/distilbert-base-uncased-finetuned-cola-3 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Matthews Correlation: 1.0
Label 0 : "AIMX"
Label 1 : "OWNX"
Label 2 : "CONT"
Label 3 : "BASE"
Label 4 : "MISC"
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 192 | 0.0060 | 1.0 |
| No log | 2.0 | 384 | 0.0019 | 1.0 |
| 0.0826 | 3.0 | 576 | 0.0010 | 1.0 |
| 0.0826 | 4.0 | 768 | 0.0006 | 1.0 |
| 0.0826 | 5.0 | 960 | 0.0005 | 1.0 |
| 0.001 | 6.0 | 1152 | 0.0004 | 1.0 |
| 0.001 | 7.0 | 1344 | 0.0003 | 1.0 |
| 0.0005 | 8.0 | 1536 | 0.0003 | 1.0 |
| 0.0005 | 9.0 | 1728 | 0.0002 | 1.0 |
| 0.0005 | 10.0 | 1920 | 0.0002 | 1.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| 2,198 |
harish/EN-AStitchTask1A-XLNet-TrueFalse-0-FewShot-0-BEST | null | Entry not found | 15 |
harish/PT-UP-xlmR-FewShot-FalseTrue-0_0_BEST | null | Entry not found | 15 |
iyaja/codebert-llvm-ic-v0 | [
"LABEL_0"
] | Entry not found | 15 |
ji-xin/roberta_base-MRPC-two_stage | null | Entry not found | 15 |
jwuthri/autonlp-shipping_status_2-27366103 | [
"0",
"1"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- jwuthri/autonlp-data-shipping_status_2
co2_eq_emissions: 32.912881644048
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 27366103
- CO2 Emissions (in grams): 32.912881644048
## Validation Metrics
- Loss: 0.18175844848155975
- Accuracy: 0.9437683592110785
- Precision: 0.9416809605488851
- Recall: 0.8459167950693375
- AUC: 0.9815242330050846
- F1: 0.8912337662337663
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/jwuthri/autonlp-shipping_status_2-27366103
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jwuthri/autonlp-shipping_status_2-27366103", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,179 |
k-partha/decision_bert_bio | [
"Feeling",
"Thinking"
Rates Twitter biographies on decision-making preference: Thinking or Feeling. Roughly corresponds to [agreeableness.](https://en.wikipedia.org/wiki/Agreeableness)
Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and run it!
Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Remember that models employ purely statistical reasoning (and may consequently make no sense sometimes).
Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402). | 699 |
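The card above recommends reading the output as a continuous score rather than a discrete label. As a minimal sketch (the logit values here are made up for illustration; real ones would come from the model's classification head), that score is typically the softmax probability of one class:

```python
import math

def softmax(logits):
    """Convert raw logits into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the two labels ("Feeling", "Thinking")
logits = [1.2, -0.3]
probs = softmax(logits)

# The continuous score is the probability mass on one class,
# not the argmax label.
feeling_score = probs[0]
```

A score near 0.5 means the model is uncertain, which matters more here than which side of the boundary the input lands on.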
lewtun/bert-base-japanese-char-v2-finetuned-amazon-jap | null | Entry not found | 15 |
lewtun/results | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251012149383893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.925
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8221 | 1.0 | 250 | 0.3106 | 0.9125 | 0.9102 |
| 0.2537 | 2.0 | 500 | 0.2147 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
| 1,736 |
maximedb/paws-x-all | [
"0",
"1"
] | Entry not found | 15 |
michaelrglass/albert-base-rci-wtq-row | null | Entry not found | 15 |
milyiyo/electra-small-finetuned-amazon-review | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-small-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.5504
- name: F1
type: f1
value: 0.5457527808330634
- name: Precision
type: precision
value: 0.5428695841337288
- name: Recall
type: recall
value: 0.5504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-finetuned-amazon-review
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0560
- Accuracy: 0.5504
- F1: 0.5458
- Precision: 0.5429
- Recall: 0.5504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2172 | 1.0 | 1000 | 1.1014 | 0.5216 | 0.4902 | 0.4954 | 0.5216 |
| 1.0027 | 2.0 | 2000 | 1.0388 | 0.549 | 0.5471 | 0.5494 | 0.549 |
| 0.9035 | 3.0 | 3000 | 1.0560 | 0.5504 | 0.5458 | 0.5429 | 0.5504 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 2,275 |
mofawzy/bert-arsentd-lev | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language:
- ar
datasets:
- ArSentD-LEV
tags:
- ArSentD-LEV
widget:
- text: "يهدي الله من يشاء"
- text: "الاسلوب قذر وقمامه"
---
# bert-arsentd-lev
Arabic version bert model fine tuned on ArSentD-LEV dataset
## Data
The model was fine-tuned on ~4,000 sentences from Twitter across multiple dialects, with five classes; we used 3 out of the 5 in the experiment.
## Results
| class | precision | recall | f1-score | Support |
|----------|-----------|--------|----------|---------|
| 0 | 0.8211 | 0.8080 | 0.8145 | 125 |
| 1 | 0.7174 | 0.7857 | 0.7500 | 84 |
| 2 | 0.6867 | 0.6404 | 0.6628 | 89 |
| Accuracy | | | 0.7517 | 298 |
## How to use
You can use these models by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name="mofawzy/bert-arsentd-lev"
model = AutoModelForSequenceClassification.from_pretrained(model_name,num_labels=3)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
| 1,159 |
mrm8488/deberta-v3-small-finetuned-qnli | [
"entailment",
"not_entailment"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-small
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9150649826102873
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-v3-small fine-tuned on QNLI
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2143
- Accuracy: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2823 | 1.0 | 6547 | 0.2143 | 0.9151 |
| 0.1996 | 2.0 | 13094 | 0.2760 | 0.9103 |
| 0.1327 | 3.0 | 19641 | 0.3293 | 0.9169 |
| 0.0811 | 4.0 | 26188 | 0.4278 | 0.9193 |
| 0.05 | 5.0 | 32735 | 0.5110 | 0.9176 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,880 |
mvonwyl/roberta-twitter-spam-classifier | null | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-twitter-spam-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-twitter-spam-classifier
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
- Micro-avg-precision: 0.8723
- Micro-avg-recall: 0.8490
- Micro-avg-f1-score: 0.8594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro-avg-precision | Micro-avg-recall | Micro-avg-f1-score |
|:-------------:|:-----:|:-----:|:---------------:|:-------------------:|:----------------:|:------------------:|
| 0.4923 | 1.0 | 2762 | 0.5676 | 0.8231 | 0.6494 | 0.6676 |
| 0.535 | 2.0 | 5524 | 0.4460 | 0.8065 | 0.8215 | 0.8132 |
| 0.5492 | 3.0 | 8286 | 0.6005 | 0.6635 | 0.5843 | 0.3906 |
| 0.5947 | 4.0 | 11048 | 0.5710 | 0.7875 | 0.7799 | 0.7835 |
| 0.4976 | 5.0 | 13810 | 0.5194 | 0.8375 | 0.7544 | 0.7800 |
| 0.5263 | 6.0 | 16572 | 0.5491 | 0.8739 | 0.7159 | 0.7475 |
| 0.4701 | 7.0 | 19334 | 0.4609 | 0.8681 | 0.7786 | 0.8069 |
| 0.4566 | 8.0 | 22096 | 0.4100 | 0.8637 | 0.8281 | 0.8430 |
| 0.4339 | 9.0 | 24858 | 0.4395 | 0.8642 | 0.8454 | 0.8540 |
| 0.3906 | 10.0 | 27620 | 0.3856 | 0.8723 | 0.8490 | 0.8594 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| 2,594 |
ntrnghia/stsb_vn | [
"LABEL_0"
] | Entry not found | 15 |
olastor/mcn-en-smm4h | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_100",
"LABEL_101",
"LABEL_102",
"LABEL_103",
"LABEL_104",
"LABEL_105",
"LABEL_106",
"LABEL_107",
"LABEL_108",
"LABEL_109",
"LABEL_11",
"LABEL_110",
"LABEL_111",
"LABEL_112",
"LABEL_113",
"LABEL_114",
"LABEL_115",
"LABEL_116",
"LABEL_... | # BERT MCN-Model using SMM4H 2017 (subtask 3) data
The model was trained using [clagator/biobert_v1.1_pubmed_nli_sts](https://huggingface.co/clagator/biobert_v1.1_pubmed_nli_sts) as a base and the smm4h dataset from 2017 from subtask 3.
## Dataset
See [here](https://github.com/olastor/medical-concept-normalization/tree/main/data/smm4h) for the scripts and datasets.
**Attribution**
Sarker, Abeed (2018), “Data and systems for medication-related text classification and concept normalization from Twitter: Insights from the Social Media Mining for Health (SMM4H)-2017 shared task”, Mendeley Data, V2, doi: 10.17632/rxwfb3tysd.2
### Test Results
- Acc: 89.44
- Acc@2: 91.84
- Acc@3: 93.20
- Acc@5: 94.32
- Acc@10: 95.04
Acc@N denotes the accuracy taking the top N predictions of the model into account, not just the first one. | 838 |
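Acc@N as defined above can be computed from ranked predictions in a few lines of plain Python. This is a generic sketch (the prediction lists are illustrative, not the model's actual outputs):

```python
def acc_at_n(ranked_preds, gold, n):
    """Fraction of examples whose gold label appears in the top-n predictions.

    ranked_preds: list of lists, each sorted from most to least likely label.
    gold: list of gold labels, aligned with ranked_preds.
    """
    hits = sum(1 for preds, g in zip(ranked_preds, gold) if g in preds[:n])
    return hits / len(gold)

# Illustrative data: 3 examples, each with the model's top-3 label ranking.
ranked = [["C01", "C07", "C03"], ["C02", "C05", "C09"], ["C04", "C02", "C08"]]
gold = ["C01", "C05", "C08"]

acc1 = acc_at_n(ranked, gold, 1)  # only the first example is a top-1 hit
acc3 = acc_at_n(ranked, gold, 3)  # all three gold labels appear in the top 3
```

By construction Acc@1 is ordinary accuracy, and Acc@N is non-decreasing in N, which matches the monotonically increasing test results reported above.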
para-zhou/cunlp-bert-case-uncased | null | Entry not found | 15 |
philschmid/BERT-tweet-eval-emotion | [
"0",
"1",
"2",
"3"
] | ---
tags: autonlp
language: en
widget:
- text: "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry"
datasets:
- tweet_eval
model-index:
- name: BERT-tweet-eval-emotion
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "tweeteval"
type: tweet-eval
metrics:
- name: Accuracy
type: accuracy
value: 81.00
- name: Macro F1
type: macro-f1
value: 77.37
- name: Weighted F1
type: weighted-f1
value: 80.63
---
# `BERT-tweet-eval-emotion` trained using autoNLP
- Problem type: Multi-class Classification
## Validation Metrics
- Loss: 0.5408923625946045
- Accuracy: 0.8099929627023223
- Macro F1: 0.7737195387641751
- Micro F1: 0.8099929627023222
- Weighted F1: 0.8063100677512649
- Macro Precision: 0.8083955817268176
- Micro Precision: 0.8099929627023223
- Weighted Precision: 0.8104009668394634
- Macro Recall: 0.7529197049888299
- Micro Recall: 0.8099929627023223
- Weighted Recall: 0.8099929627023223
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry"}' https://api-inference.huggingface.co/models/philschmid/BERT-tweet-eval-emotion
```
Or Python API:
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = 'philschmid/BERT-tweet-eval-emotion'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier("Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry")
``` | 1,924 |
pierreant-p/autonlp-jcvd-or-linkedin-3471039 | [
"JCVD",
"LinkedIn"
] | ---
tags: autonlp
language: fr
widget:
- text: "I love AutoNLP 🤗"
datasets:
- pierreant-p/autonlp-data-jcvd-or-linkedin
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 3471039
## Validation Metrics
- Loss: 0.6704344749450684
- Accuracy: 0.59375
- Macro F1: 0.37254901960784315
- Micro F1: 0.59375
- Weighted F1: 0.4424019607843137
- Macro Precision: 0.296875
- Micro Precision: 0.59375
- Weighted Precision: 0.3525390625
- Macro Recall: 0.5
- Micro Recall: 0.59375
- Weighted Recall: 0.59375
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/pierreant-p/autonlp-jcvd-or-linkedin-3471039
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pierreant-p/autonlp-jcvd-or-linkedin-3471039", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pierreant-p/autonlp-jcvd-or-linkedin-3471039", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,237 |
savasy/bert-turkish-uncased-qnli | null |
# Turkish QNLI Model
I fine-tuned the Turkish BERT model below for the question-answering problem with TQuAD, the Turkish version of SQuAD:
https://huggingface.co/dbmdz/bert-base-turkish-uncased
# Data: TQuAD
I used the following TQuAD dataset:
https://github.com/TQuad/turkish-nlp-qa-dataset
I converted the dataset from SQuAD format into the Transformers GLUE data format for QNLI with the following script:
```
import json

# choose the split to convert: "dev-v0.1.json" or "train-v0.1.json"
ff = "train-v0.1.json"
dataset = json.load(open(ff))

i = 0
for article in dataset['data']:
    title = article['title']
    for p in article['paragraphs']:
        context = p['context']
        for qa in p['qas']:
            answer = qa['answers'][0]['text']
            # every other gold answer in the same paragraph serves as a negative pair
            all_other_answers = list(set(e['answers'][0]['text'] for e in p['qas']))
            all_other_answers.remove(answer)
            i += 1
            print(i, qa['question'].replace(";", ":"), answer.replace(";", ":"), "entailment", sep="\t")
            for other in all_other_answers:
                i += 1
                print(i, qa['question'].replace(";", ":"), other.replace(";", ":"), "not_entailment", sep="\t")
```
Under the QNLI folder there are dev and test sets.
The training data looks like this:
> 613 II.Friedrich’in bilginler arasındaki en önemli şahsiyet olarak belirttiği kişi kimdir? filozof, kimyacı, astrolog ve çevirmen not_entailment
> 614 II.Friedrich’in bilginler arasındaki en önemli şahsiyet olarak belirttiği kişi kimdir? kişisel eğilimi ve özel temaslar nedeniyle not_entailment
> 615 Michael Scotus’un mesleği nedir? filozof, kimyacı, astrolog ve çevirmen entailment
> 616 Michael Scotus’un mesleği nedir? Palermo’ya not_entailment
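To make the conversion concrete, here is a minimal, self-contained sketch of the same pair-generation logic applied to a single paragraph. The toy data below is purely illustrative:

```python
def squad_paragraph_to_qnli(paragraph):
    """Mirror the conversion script above for one SQuAD-style paragraph:
    the gold answer of each question becomes an 'entailment' pair, and
    every other answer from the same paragraph becomes 'not_entailment'."""
    rows = []
    answers = [qa["answers"][0]["text"] for qa in paragraph["qas"]]
    for qa in paragraph["qas"]:
        gold = qa["answers"][0]["text"]
        rows.append((qa["question"], gold, "entailment"))
        for other in sorted(set(answers) - {gold}):
            rows.append((qa["question"], other, "not_entailment"))
    return rows

# toy paragraph with two questions
paragraph = {
    "qas": [
        {"question": "Q1?", "answers": [{"text": "A1"}]},
        {"question": "Q2?", "answers": [{"text": "A2"}]},
    ]
}
print(squad_paragraph_to_qnli(paragraph))
```

Each question thus yields one positive pair and one negative pair per alternative answer in its paragraph.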
# Training
I trained the model in the following environment:
```
export GLUE_DIR=./glue/glue_dataTR/QNLI
export TASK_NAME=QNLI
```
```
python3 run_glue.py \
--model_type bert \
--model_name_or_path dbmdz/bert-base-turkish-uncased\
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR \
--max_seq_length 128 \
--per_gpu_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
# Evaluation Results
| Metric | Value |
|--------|---------------------|
| acc    | 0.9124060613527165  |
| loss   | 0.21582801340189717 |
> See all my models at
> https://huggingface.co/savasy
| 2,285 |
seongju/klue-tc-bert-base-multilingual-cased | [
"IT과학",
"경제",
"사회",
"생활문화",
"세계",
"스포츠",
"정치"
] | ### Model information
* language : Korean
* fine tuning data : [klue-tc (a.k.a. YNAT) ](https://klue-benchmark.com/tasks/66/overview/description)
* License : CC-BY-SA 4.0
* Base model : [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
* input : news headline
* output : topic
----
### Train information
* train_runtime: 1477.3876
* train_steps_per_second: 2.416
* train_loss: 0.3722160959110207
* epoch: 5.0
----
### How to use
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained(
    "seongju/klue-tc-bert-base-multilingual-cased"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "seongju/klue-tc-bert-base-multilingual-cased"
)
mapping = {0: 'IT과학', 1: '경제', 2: '사회',
3: '생활문화', 4: '세계', 5: '스포츠', 6: '정치'}
inputs = tokenizer(
"백신 회피 가능성? 남미에서 새로운 변이 바이러스 급속 확산 ",
padding=True, truncation=True, max_length=128, return_tensors="pt"
)
outputs = model(**inputs)
probs = outputs[0].softmax(1)
output = mapping[probs.argmax().item()]
``` | 1,079 |
srosy/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.939
- name: F1
type: f1
value: 0.9391566069722169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.939
- F1: 0.9392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4977 | 1.0 | 1000 | 0.1919 | 0.9255 | 0.9253 |
| 0.1545 | 2.0 | 2000 | 0.1582 | 0.939 | 0.9392 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
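Because this checkpoint exposes generic `LABEL_0`–`LABEL_5` names, a small helper can decode pipeline outputs into readable class names. The class order below (sadness, joy, love, anger, fear, surprise) is an assumption based on the standard `emotion` dataset on the Hub; verify it against this checkpoint's config before relying on it:

```python
# Assumed class order of the `emotion` dataset; check the model config.
ID2LABEL = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def decode_label(pipeline_label: str) -> str:
    """Map a generic 'LABEL_k' string from the pipeline to an emotion name."""
    idx = int(pipeline_label.split("_")[-1])
    return ID2LABEL[idx]

print(decode_label("LABEL_1"))  # joy (under the assumed mapping above)
```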
| 1,804 |
staceythompson/autonlp-myclassification-fortext-16332728 | [
"Negative",
"Positive",
"Price",
"WhoIsThis"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- staceythompson/autonlp-data-myclassification-fortext
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 16332728
## Validation Metrics
- Loss: 0.08077391237020493
- Accuracy: 0.9846153846153847
- Macro F1: 0.9900793650793651
- Micro F1: 0.9846153846153847
- Weighted F1: 0.9846153846153847
- Macro Precision: 0.9900793650793651
- Micro Precision: 0.9846153846153847
- Weighted Precision: 0.9846153846153847
- Macro Recall: 0.9900793650793651
- Micro Recall: 0.9846153846153847
- Weighted Recall: 0.9846153846153847
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/staceythompson/autonlp-myclassification-fortext-16332728
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("staceythompson/autonlp-myclassification-fortext-16332728", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("staceythompson/autonlp-myclassification-fortext-16332728", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,372 |
victen/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236951195245434
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2265
- Accuracy: 0.9235
- F1: 0.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8243 | 1.0 | 250 | 0.3199 | 0.906 | 0.9025 |
| 0.2484 | 2.0 | 500 | 0.2265 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
vidhur2k/mBERT-Hindi-Mono | null | Entry not found | 15 |
vladenisov/sports-antihate | null | Entry not found | 15 |
w11wo/indonesian-roberta-base-indonli | [
"contradiction",
"entailment",
"neutral"
] | ---
language: id
tags:
- indonesian-roberta-base-indonli
license: mit
datasets:
- indonli
widget:
- text: "Andi tersenyum karena mendapat hasil baik. </s></s> Andi sedih."
---
## Indonesian RoBERTa Base IndoNLI
Indonesian RoBERTa Base IndoNLI is a natural language inference (NLI) model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`IndoNLI`](https://github.com/ir-nlp-csui/indonli)'s dataset consisting of Indonesian Wikipedia, news, and Web articles [1].
After training, the model achieved an evaluation/dev accuracy of 77.06%. On the benchmark `test_lay` subset, the model achieved an accuracy of 74.24% and on the benchmark `test_expert` subset, the model achieved an accuracy of 61.66%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| --------------------------------- | ------- | ------------ | ------------------------------- |
| `indonesian-roberta-base-indonli` | 124M | RoBERTa Base | `IndoNLI` |
## Evaluation Results
The model was trained for 5 epochs, with a batch size of 16, a learning rate of 2e-5, a weight decay of 0.1, and a warmup ratio of 0.2, with linear annealing to 0. The best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy |
| ----- | ------------- | --------------- | -------- |
| 1 | 0.989200 | 0.691663 | 0.731452 |
| 2 | 0.673000 | 0.621913 | 0.766045 |
| 3 | 0.449900 | 0.662543 | 0.770596 |
| 4 | 0.293600 | 0.777059 | 0.768320 |
| 5 | 0.194200 | 0.948068 | 0.764224 |
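The hyperparameters above roughly correspond to a `TrainingArguments` configuration like the following. This is an illustrative sketch, not the exact script used to train the model:

```python
from transformers import TrainingArguments

# Sketch only: field values taken from the hyperparameters reported above.
training_args = TrainingArguments(
    output_dir="indonesian-roberta-base-indonli",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    weight_decay=0.1,
    warmup_ratio=0.2,                # warmup, then linear annealing to 0
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,     # "best model was loaded at the end"
)
```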
## How to Use
### As NLI Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/indonesian-roberta-base-indonli"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Andi tersenyum karena mendapat hasil baik. </s></s> Andi sedih.")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `IndoNLI` dataset that may be carried over into the results of this model.
## References
[1] Mahendra, R., Aji, A. F., Louvan, S., Rahman, F., & Vania, C. (2021, November). [IndoNLI: A Natural Language Inference Dataset for Indonesian](https://arxiv.org/abs/2110.14566). _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics.
## Author
Indonesian RoBERTa Base IndoNLI was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
| 3,081 |
yoshitomo-matsubara/bert-base-uncased-mrpc | null | ---
language: en
tags:
- bert
- mrpc
- glue
- torchdistill
license: apache-2.0
datasets:
- mrpc
metrics:
- f1
- accuracy
---
`bert-base-uncased` fine-tuned on MRPC dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
| 831 |
yoshitomo-matsubara/bert-base-uncased-stsb | [
"LABEL_0"
] | ---
language: en
tags:
- bert
- stsb
- glue
- torchdistill
license: apache-2.0
datasets:
- stsb
metrics:
- pearson correlation
- spearman correlation
---
`bert-base-uncased` fine-tuned on STS-B dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/stsb/mse/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
| 862 |
zhuqing/roberta-base-uncased-AutoModelWithLMHeadnetmums-classification | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-self-unclean | null | Entry not found | 15 |