| modelId | label list | readme | readme_len |
|---|---|---|---|
edwardgowsmith/xlnet-base-cased-train-from-dev-and-test-short-best | null | Entry not found | 15 |
edwardgowsmith/xlnet-base-cased-train-from-dev-short-best | null | Entry not found | 15 |
ehddnr301/bert-base-ehddnr-ynat | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6"
] | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model_index:
- name: bert-base-ehddnr-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metric:
name: F1
type: f1
value: 0.8720568553403009
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-ehddnr-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3587
- F1: 0.8721
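For a quick smoke test, the checkpoint can be loaded with a standard `text-classification` pipeline. This is a minimal sketch (not from the original card); the checkpoint only exposes the generic `LABEL_0`–`LABEL_6` ids:
```python
from transformers import pipeline

# Minimal usage sketch; labels come back as generic LABEL_0..LABEL_6 ids.
classifier = pipeline("text-classification", model="ehddnr301/bert-base-ehddnr-ynat")
print(classifier("오늘 코스피가 사상 최고치를 경신했다"))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```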
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4398 | 0.8548 |
| No log | 2.0 | 358 | 0.3587 | 0.8721 |
| 0.3859 | 3.0 | 537 | 0.3639 | 0.8707 |
| 0.3859 | 4.0 | 716 | 0.3592 | 0.8692 |
| 0.3859 | 5.0 | 895 | 0.3646 | 0.8717 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 1,757 |
ehdwns1516/klue-roberta-base-kornli | [
"entailment",
"neutral",
"contradiction"
] | # klue-roberta-base-kornli
* This model was trained on a Korean dataset.
* Input a premise sentence and a hypothesis sentence.
* You can use English, but don't expect high accuracy.
* If the input is longer than 1,200 characters, it may be truncated in the middle and the result may be unreliable.
klue-roberta-base-kornli DEMO: [Ainize DEMO](https://main-klue-roberta-base-kornli-ehdwns1516.endpoint.ainize.ai/)
klue-roberta-base-kornli API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/klue-roberta-base_kornli)
## Overview
Language model: [klue/roberta-base](https://huggingface.co/klue/roberta-base)
Language: Korean
Training data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)
Eval data: [kakaobrain KorNLI](https://github.com/kakaobrain/KorNLUDatasets/tree/master/KorNLI)
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/klue-roberta-base_finetunning_ex)
## Usage
## In Transformers
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/klue-roberta-base-kornli")
classifier = pipeline(
"text-classification",
model="ehdwns1516/klue-roberta-base-kornli",
return_all_scores=True,
)
premise = "your premise"
hypothesis = "your hypothesis"
# Scores for the entailment / neutral / contradiction labels
result = classifier(premise + tokenizer.sep_token + hypothesis)[0]
```
| 1,464 |
eliza-dukim/para-kqc-sim | null | Entry not found | 15 |
emekaboris/autonlp-txc-17923124 | [
"1.0",
"10.0",
"11.0",
"12.0",
"13.0",
"14.0",
"15.0",
"16.0",
"17.0",
"18.0",
"19.0",
"2.0",
"20.0",
"21.0",
"22.0",
"23.0",
"24.0",
"3.0",
"4.0",
"5.0",
"6.0",
"7.0",
"8.0",
"9.0"
] | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- emekaboris/autonlp-data-txc
co2_eq_emissions: 133.57087522185148
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 17923124
- CO2 Emissions (in grams): 133.57087522185148
## Validation Metrics
- Loss: 0.2080804407596588
- Accuracy: 0.9325402190077058
- Macro F1: 0.7283811287183823
- Micro F1: 0.9325402190077058
- Weighted F1: 0.9315711955594153
- Macro Precision: 0.8106599661500661
- Micro Precision: 0.9325402190077058
- Weighted Precision: 0.9324644116921059
- Macro Recall: 0.7020515544343829
- Micro Recall: 0.9325402190077058
- Weighted Recall: 0.9325402190077058
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/emekaboris/autonlp-txc-17923124
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emekaboris/autonlp-txc-17923124", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emekaboris/autonlp-txc-17923124", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,354 |
enelpol/poleval2021-task2 | [
"LABEL_0"
] | Entry not found | 15 |
erica/krm_sa2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
fabriceyhc/bert-base-uncased-yelp_polarity | null | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- yelp_polarity
metrics:
- accuracy
model-index:
- name: bert-base-uncased-yelp_polarity
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_polarity
type: yelp_polarity
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9516052631578947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-yelp_polarity
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the yelp_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3222
- Accuracy: 0.9516
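As a usage illustration, here is a minimal inference sketch (assuming the checkpoint loads with the standard pipeline API; it may return generic `LABEL_0`/`LABEL_1` ids):
```python
from transformers import pipeline

# Minimal usage sketch for the fine-tuned Yelp polarity classifier.
clf = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-yelp_polarity")
print(clf("The food was amazing and the staff could not have been friendlier."))
```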
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 277200
- training_steps: 2772000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8067 | 0.0 | 2000 | 0.8241 | 0.4975 |
| 0.5482 | 0.01 | 4000 | 0.3507 | 0.8591 |
| 0.3427 | 0.01 | 6000 | 0.3750 | 0.9139 |
| 0.4133 | 0.01 | 8000 | 0.5520 | 0.9016 |
| 0.4301 | 0.02 | 10000 | 0.3803 | 0.9304 |
| 0.3716 | 0.02 | 12000 | 0.4168 | 0.9337 |
| 0.4076 | 0.03 | 14000 | 0.5042 | 0.9170 |
| 0.3674 | 0.03 | 16000 | 0.4806 | 0.9268 |
| 0.3813 | 0.03 | 18000 | 0.4227 | 0.9261 |
| 0.3723 | 0.04 | 20000 | 0.3360 | 0.9418 |
| 0.3876 | 0.04 | 22000 | 0.3255 | 0.9407 |
| 0.3351 | 0.04 | 24000 | 0.3283 | 0.9404 |
| 0.34 | 0.05 | 26000 | 0.3489 | 0.9430 |
| 0.3006 | 0.05 | 28000 | 0.3302 | 0.9464 |
| 0.349 | 0.05 | 30000 | 0.3853 | 0.9375 |
| 0.3696 | 0.06 | 32000 | 0.2992 | 0.9454 |
| 0.3301 | 0.06 | 34000 | 0.3484 | 0.9464 |
| 0.3151 | 0.06 | 36000 | 0.3529 | 0.9455 |
| 0.3682 | 0.07 | 38000 | 0.3052 | 0.9420 |
| 0.3184 | 0.07 | 40000 | 0.3323 | 0.9466 |
| 0.3207 | 0.08 | 42000 | 0.3133 | 0.9532 |
| 0.3346 | 0.08 | 44000 | 0.3826 | 0.9414 |
| 0.3008 | 0.08 | 46000 | 0.3059 | 0.9484 |
| 0.3306 | 0.09 | 48000 | 0.3089 | 0.9475 |
| 0.342 | 0.09 | 50000 | 0.3611 | 0.9486 |
| 0.3424 | 0.09 | 52000 | 0.3227 | 0.9445 |
| 0.3044 | 0.1 | 54000 | 0.3130 | 0.9489 |
| 0.3278 | 0.1 | 56000 | 0.3827 | 0.9368 |
| 0.288 | 0.1 | 58000 | 0.3080 | 0.9504 |
| 0.3342 | 0.11 | 60000 | 0.3252 | 0.9471 |
| 0.3737 | 0.11 | 62000 | 0.4250 | 0.9343 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
| 3,573 |
fgaim/tiroberta-sentiment | [
"NEGATIVE",
"POSITIVE"
] | ---
language: ti
widget:
- text: "ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር"
datasets:
- TLMD
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: tiroberta-sentiment
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.828
- name: F1
type: f1
value: 0.8476527900797165
- name: Precision
type: precision
value: 0.760731319554849
- name: Recall
type: recall
value: 0.957
---
# Sentiment Analysis for Tigrinya with TiRoBERTa
This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/roberta-base-tigrinya) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020).
## Basic usage
```python
from transformers import pipeline
ti_sent = pipeline("sentiment-analysis", model="fgaim/tiroberta-sentiment")
ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Results
It achieves the following results on the evaluation set:
- F1: 0.8477
- Precision: 0.7607
- Recall: 0.957
- Accuracy: 0.828
- Loss: 0.6796
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.1
## Citation
If you use this model in your product or research, please cite as follows:
```bibtex
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tela, A., Woubie, A. and Hautamäki, V. 2020.
Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya.
ArXiv, abs/2006.07698.
```
| 1,981 |
google/tapas-tiny-finetuned-tabfact | null | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS tiny model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_tiny_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_tiny`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then jointly training this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
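As a rough sketch of what such a code example might look like (the table contents below are invented, and the class index/label mapping should be checked against the TAPAS documentation):
```python
import pandas as pd
from transformers import TapasTokenizer, TapasForSequenceClassification

model_name = "google/tapas-tiny-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# TAPAS expects every cell value as a string.
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2.1M", "3.6M"]})
inputs = tokenizer(table=table, queries=["Berlin has more inhabitants than Paris"],
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.argmax(-1))  # index of the predicted class (refuted vs. supported)
```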
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` | 4,867 |
harish/PT-UP-xlmR-ContextIncluded_IdiomExcluded-4_BEST | null | Entry not found | 15 |
jpabbuehl/sagemaker-distilbert-emotion | [
"anger",
"fear",
"joy",
"love",
"sadness",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.929
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1446
- Accuracy: 0.929
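A minimal inference sketch (assuming standard pipeline loading; the label names listed for this model are anger, fear, joy, love, sadness, and surprise):
```python
from transformers import pipeline

# Minimal usage sketch for the fine-tuned emotion classifier.
clf = pipeline("text-classification", model="jpabbuehl/sagemaker-distilbert-emotion")
print(clf("I can't stop smiling today!"))  # expected to score highest on 'joy'
```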
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9345 | 1.0 | 500 | 0.2509 | 0.918 |
| 0.1855 | 2.0 | 1000 | 0.1626 | 0.928 |
| 0.1036 | 3.0 | 1500 | 0.1446 | 0.929 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,790 |
k-partha/curiosity_bert_bio | [
"Sensing",
"Intuitive"
] | Labels Twitter biographies on [Openness](https://en.wikipedia.org/wiki/Openness_to_experience), strongly related to intellectual curiosity.
Intuitive: Associated with higher intellectual curiosity
Sensing: Associated with lower intellectual curiosity
Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit compute!
Trained on self-described personality labels. Interpret the output as a continuous score, not as a discrete label. Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
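If you would rather run it locally than through the widget, here is a minimal pipeline sketch (assuming the checkpoint works with the standard `text-classification` pipeline; the biography below is invented):
```python
from transformers import pipeline

# Minimal local-inference sketch; returns scores for both labels.
clf = pipeline("text-classification", model="k-partha/curiosity_bert_bio", return_all_scores=True)
bio = "Astrophysics PhD student. I read everything from philosophy to pulp fiction."
print(clf(bio))  # continuous scores for the Sensing and Intuitive labels
```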
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402). | 691 |
maxidl/iML-distilbert-base-uncased-predict | null | Entry not found | 15 |
maximedb/paws-x-all-x-en | [
"0",
"1"
] | Entry not found | 15 |
milyiyo/electra-base-gen-finetuned-amazon-review | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-base-gen-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5024
- name: F1
type: f1
value: 0.5063190059782597
- name: Precision
type: precision
value: 0.5121183330982292
- name: Recall
type: recall
value: 0.5024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-gen-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-base-generator](https://huggingface.co/mrm8488/electricidad-base-generator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8030
- Accuracy: 0.5024
- F1: 0.5063
- Precision: 0.5121
- Recall: 0.5024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5135        | 1.0   | 1000 | 1.6580          | 0.4886   | 0.4929 | 0.5077    | 0.4886 |
| 0.4138        | 2.0   | 2000 | 1.7951          | 0.5044   | 0.5093 | 0.5183    | 0.5044 |
| 0.4244        | 3.0   | 3000 | 1.8108          | 0.5022   | 0.5068 | 0.5141    | 0.5022 |
| 0.4231        | 6.0   | 6000 | 1.7636          | 0.4972   | 0.5018 | 0.5092    | 0.4972 |
| 0.3574        | 7.0   | 7000 | 1.8030          | 0.5024   | 0.5063 | 0.5121    | 0.5024 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 2,408 |
mishig/my-awesome-model | null | # Sentiment Classification by pretraining bert-base-cased
A test repo exploring HF Model Hub by following https://huggingface.co/transformers/model_sharing.html | 161 |
mnaylor/base-bert-finetuned-mtsamples | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | # BERT Base Fine-tuned on MTSamples
This model is [BERT-base](https://huggingface.co/bert-base-uncased) fine-tuned on the MTSamples dataset, with a classification task defined in [this repo](https://github.com/socd06/medical-nlp).
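A possible loading sketch (not from the original card; the checkpoint exposes generic `LABEL_0`–`LABEL_3` ids, whose class meanings are defined in the linked repo):
```python
from transformers import pipeline

# Minimal usage sketch; map the LABEL_* ids using the linked medical-nlp repo.
clf = pipeline("text-classification", model="mnaylor/base-bert-finetuned-mtsamples")
print(clf("Patient presents with chest pain radiating to the left arm."))
```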
| 231 |
mrm8488/distilroberta-finetuned-rotten_tomatoes-sentiment-analysis | null | Entry not found | 15 |
navsad/navid_test_bert | [
"acceptable",
"unacceptable"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: navid_test_bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5834463254140851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# navid_test_bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8149
- Matthews Correlation: 0.5834
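A quick acceptability-checking sketch (assuming standard pipeline loading; the labels per this model's config are `acceptable` and `unacceptable`):
```python
from transformers import pipeline

# Minimal usage sketch for the CoLA-style acceptability classifier.
cola = pipeline("text-classification", model="navsad/navid_test_bert")
print(cola("The boys is playing football."))  # should lean towards 'unacceptable'
```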
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4598 | 1.0 | 1069 | 0.4919 | 0.5314 |
| 0.3228 | 2.0 | 2138 | 0.6362 | 0.5701 |
| 0.17 | 3.0 | 3207 | 0.8149 | 0.5834 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| 1,788 |
neuropark/sahajBERT-NCC | [
"kolkata",
"state",
"national",
"sports",
"entertainment",
"international"
] | ---
language: bn
tags:
- collaborative
- bengali
- SequenceClassification
license: apache-2.0
datasets: IndicGlue
metrics:
- Loss
- Accuracy
- Precision
- Recall
widget:
- text: "এশিয়ায় প্রথম দৃষ্টিহীন ব্যক্তির মাউন্ট এভারেস্ট জয়|"
---
# sahajBERT News Article Classification
## Model description
[sahajBERT](https://huggingface.co/neuropark/sahajBERT) fine-tuned for news article classification using the `sna.bn` split of [IndicGlue](https://huggingface.co/datasets/indic_glue).
The model is trained for classifying articles into 6 different classes:
| Label id | Label |
|:--------:|:----:|
|0 | kolkata|
|1 | state|
|2 | national|
|3 | sports|
|4 | entertainment|
|5 | international|
## Intended uses & limitations
#### How to use
You can use this model directly with a pipeline for Sequence Classification:
```python
from transformers import AlbertForSequenceClassification, TextClassificationPipeline, PreTrainedTokenizerFast
# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NCC")
# Initialize model
model = AlbertForSequenceClassification.from_pretrained("neuropark/sahajBERT-NCC")
# Initialize pipeline
pipeline = TextClassificationPipeline(tokenizer=tokenizer, model=model)
raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```
#### Limitations and bias
<!-- Provide examples of latent issues and potential remediations. -->
WIP
## Training data
The model was initialized with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT) at step 19519 and trained on the `sna.bn` split of [IndicGlue](https://huggingface.co/datasets/indic_glue).
## Training procedure
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
## Eval results
- Loss: 0.2477145493030548
- Accuracy: 0.926293408929837
- Macro F1: 0.9079785326650756
- Recall: 0.926293408929837
- Weighted F1: 0.9266428029354202
- Macro Precision: 0.9109938492260489
- Micro Precision: 0.926293408929837
- Weighted Precision: 0.9288535478995414
- Macro Recall: 0.9069095007692186
- Micro Recall: 0.926293408929837
- Weighted Recall: 0.926293408929837
### BibTeX entry and citation info
Coming soon!
<!-- ```bibtex
@inproceedings{...,
year={2020}
}
``` -->
| 2,274 |
o2poi/sst2-eda-bert | null | Entry not found | 15 |
pdroberts/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,089 |
philschmid/pt-tblard-tf-allocine | [
"NEGATIVE",
"POSITIVE"
] | ---
language: fr
---
# Pytorch Fork of [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine)
A french sentiment analysis model, based on [CamemBERT](https://camembert-model.fr/), and finetuned on a large-scale dataset scraped from [Allociné.fr](http://www.allocine.fr/) user reviews.
## Results
| Validation Accuracy | Validation F1-Score | Test Accuracy | Test F1-Score |
|--------------------:| -------------------:| -------------:|--------------:|
| 97.39 | 97.36 | 97.44 | 97.34 |
The dataset and the evaluation code are available on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline

# Load the PyTorch weights from this fork (the original tblard/tf-allocine is TensorFlow)
tokenizer = AutoTokenizer.from_pretrained("philschmid/pt-tblard-tf-allocine")
model = AutoModelForSequenceClassification.from_pretrained("philschmid/pt-tblard-tf-allocine")
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
print(nlp("Alad'2 est clairement le meilleur film de l'année 2018.")) # POSITIVE
print(nlp("Juste whoaaahouuu !")) # POSITIVE
print(nlp("NUL...A...CHIER ! FIN DE TRANSMISSION.")) # NEGATIVE
print(nlp("Je m'attendais à mieux de la part de Franck Dubosc !")) # NEGATIVE
```
## Author
Théophile Blard – :email: theophile.blard@gmail.com
If you use this work (code, model or dataset), please cite as:
> Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <https://github.com/TheophileBlard/french-sentiment-analysis-with-bert>
| 1,577 |
pysentimiento/robertuito-irony | [
"ironic",
"not ironic"
] | # Irony detection in Spanish
## robertuito-irony
Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/pysentimiento/pysentimiento/)
Model trained on the IroSvA 2019 dataset for irony detection. The base model is [RoBERTuito](https://github.com/pysentimiento/robertuito), a RoBERTa model trained on Spanish tweets.
The positive class marks ironic tweets; the negative class marks non-ironic ones.
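A direct Transformers sketch (note: RoBERTuito expects the tweet preprocessing from the pysentimiento repo, which this minimal example skips, so scores may differ from the reported results):
```python
from transformers import pipeline

# Minimal usage sketch without the recommended tweet preprocessing.
irony = pipeline("text-classification", model="pysentimiento/robertuito-irony")
print(irony("qué lindo, otro lunes de lluvia"))  # 'ironic' or 'not ironic'
```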
## Results
Results for the four tasks evaluated in `pysentimiento`. Results are expressed as Macro F1 scores
| model | emotion | hate_speech | irony | sentiment |
|:--------------|:--------------|:--------------|:--------------|:--------------|
| robertuito | 0.560 ± 0.010 | 0.759 ± 0.007 | 0.739 ± 0.005 | 0.705 ± 0.003 |
| roberta | 0.527 ± 0.015 | 0.741 ± 0.012 | 0.721 ± 0.008 | 0.670 ± 0.006 |
| bertin | 0.524 ± 0.007 | 0.738 ± 0.007 | 0.713 ± 0.012 | 0.666 ± 0.005 |
| beto_uncased | 0.532 ± 0.012 | 0.727 ± 0.016 | 0.701 ± 0.007 | 0.651 ± 0.006 |
| beto_cased | 0.516 ± 0.012 | 0.724 ± 0.012 | 0.705 ± 0.009 | 0.662 ± 0.005 |
| mbert_uncased | 0.493 ± 0.010 | 0.718 ± 0.011 | 0.681 ± 0.010 | 0.617 ± 0.003 |
| biGRU | 0.264 ± 0.007 | 0.592 ± 0.018 | 0.631 ± 0.011 | 0.585 ± 0.011 |
Note that for Hate Speech, these are the results for Semeval 2019, Task 5 Subtask B (HS+TR+AG detection)
## Citation
If you use this model in your research, please cite pysentimiento and RoBERTuito papers:
```bibtex
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{perez2021robertuito,
title={RoBERTuito: a pre-trained language model for social media text in Spanish},
author={Juan Manuel Pérez and Damián A. Furman and Laura Alonso Alemany and Franco Luque},
year={2021},
eprint={2111.09453},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 2,095 |
rayschwartz/text-classification | [
"Negative",
"Positive",
"Price",
"WhoIsThis"
] | Entry not found | 15 |
robkayinto/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9195
- name: F1
type: f1
value: 0.919829815254287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2192
- Accuracy: 0.9195
- F1: 0.9198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8338 | 1.0 | 250 | 0.3114 | 0.9065 | 0.9044 |
| 0.2467 | 2.0 | 500 | 0.2192 | 0.9195 | 0.9198 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,806 |
ruiqi-zhong/roberta-large-meta-tuning-test | null | Entry not found | 15 |
sgugger/glue-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: glue-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8553921568627451
- name: F1
type: f1
value: 0.897391304347826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6566
- Accuracy: 0.8554
- F1: 0.8974
- Combined Score: 0.8764
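Since MRPC is a sentence-pair task, inference takes two sentences; a minimal sketch (assuming the standard pipeline's support for paired inputs):
```python
from transformers import pipeline

# Minimal sentence-pair usage sketch for the MRPC paraphrase classifier.
clf = pipeline("text-classification", model="sgugger/glue-mrpc")
pair = {"text": "The company posted record profits.",
        "text_pair": "Profits at the firm hit an all-time high."}
print(clf(pair))  # 'equivalent' or 'not_equivalent'
```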
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
| 1,501 |
tasosk/distilbert-base-uncased-airlines | [
"NO",
"YES"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-airlines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-airlines
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tasosk/airlines dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3174
- Accuracy: 0.9288
- F1: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 203 | 0.2281 | 0.9164 | 0.9164 |
| No log | 2.0 | 406 | 0.2676 | 0.9164 | 0.9164 |
| 0.2314 | 3.0 | 609 | 0.3117 | 0.9217 | 0.9217 |
| 0.2314 | 4.0 | 812 | 0.3175 | 0.9270 | 0.9271 |
| 0.08 | 5.0 | 1015 | 0.3174 | 0.9288 | 0.9289 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,709 |
tiesan/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9284093135758671
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- Accuracy: 0.928
- F1: 0.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2188 | 1.0 | 250 | 0.1809 | 0.925 | 0.9246 |
| 0.1383 | 2.0 | 500 | 0.1658 | 0.928 | 0.9284 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,805 |
xysmalobia/sequence_classification | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: sequence_classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8529411764705882
- name: F1
type: f1
value: 0.8943661971830987
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sequence_classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7738
- Accuracy: 0.8529
- F1: 0.8944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3519 | 0.8627 | 0.9 |
| 0.4872 | 2.0 | 918 | 0.6387 | 0.8333 | 0.8893 |
| 0.2488 | 3.0 | 1377 | 0.7738 | 0.8529 | 0.8944 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
| 1,832 |
yabramuvdi/bert-sector | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_16",
"LABEL_17",
"LABEL_18",
"LABEL_19",
"LABEL_2",
"LABEL_20",
"LABEL_21",
"LABEL_22",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
debjyoti007/new_doc_classifier | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | This model has been trained to classify text from different domains. It is currently trained on relatively little data and covers 3 domains: "sports", "healthcare" and "financial". Label_0 represents "financial", Label_1 represents "Healthcare" and Label_2 represents "Sports". I plan to train it on more domains and more data soon, which should further improve its accuracy. | 496 |
DoyyingFace/bert-tweets-semeval-unclean | null | Entry not found | 15 |
DoyyingFace/bert-tweets-semeval-clean | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-self-unlean-with-clean-valid | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-with-clean-valid | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-4 | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-8 | null | Entry not found | 15 |
ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3516
- Accuracy: 0.414
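A quick sketch for scoring a Chinese review (assuming standard pipeline loading; the checkpoint exposes generic `LABEL_0`–`LABEL_5` ids):
```python
from transformers import pipeline

# Minimal usage sketch; labels come back as generic LABEL_0..LABEL_5 ids.
clf = pipeline("text-classification",
               model="ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000")
print(clf("这个产品质量很好，物流也很快。"))
```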
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4343 | 1.0 | 1250 | 1.3516 | 0.414 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
| 1,397 |
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-8 | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-12 | null | Entry not found | 15 |
Brendan/cse244b-hw2-roberta | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-concat-unclean-discriminate | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50 | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25 | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-75 | null | Entry not found | 15 |
ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4345
- Accuracy: 0.8321
- F1: 0.8904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3922 | 0.8061 | 0.8747 |
| No log | 2.0 | 390 | 0.3764 | 0.8171 | 0.8837 |
| 0.4074 | 3.0 | 585 | 0.3873 | 0.8220 | 0.8843 |
| 0.4074 | 4.0 | 780 | 0.4361 | 0.8232 | 0.8854 |
| 0.4074 | 5.0 | 975 | 0.4555 | 0.8159 | 0.8793 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch5 | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch6 | null | Entry not found | 15 |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-100 | null | Entry not found | 15 |
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5002
- Accuracy: 0.8103
- F1: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4178 | 0.7963 | 0.8630 |
| No log | 2.0 | 390 | 0.3935 | 0.8061 | 0.8770 |
| 0.4116 | 3.0 | 585 | 0.4037 | 0.8085 | 0.8735 |
| 0.4116 | 4.0 | 780 | 0.4696 | 0.8146 | 0.8796 |
| 0.4116 | 5.0 | 975 | 0.4849 | 0.8207 | 0.8823 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,788 |
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,806 |
ali2066/finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,806 |
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.8289
- F1: 0.8901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 |
| No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 |
| 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 |
| 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 |
| 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,791 |
ali2066/finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0914
- Accuracy: 0.9746
- F1: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0501 | 0.9828 | 0.9913 |
| No log | 2.0 | 208 | 0.0435 | 0.9828 | 0.9913 |
| No log | 3.0 | 312 | 0.0414 | 0.9828 | 0.9913 |
| No log | 4.0 | 416 | 0.0424 | 0.9799 | 0.9898 |
| 0.0547 | 5.0 | 520 | 0.0482 | 0.9828 | 0.9913 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,802 |
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Accuracy: 0.8286
- F1: 0.8887
- Precision: 0.8628
- Recall: 0.9162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3890 | 0.8110 | 0.8749 | 0.8631 | 0.8871 |
| 0.4535 | 2.0 | 780 | 0.3921 | 0.8439 | 0.8984 | 0.8721 | 0.9264 |
| 0.266 | 3.0 | 1170 | 0.4454 | 0.8415 | 0.8947 | 0.8860 | 0.9034 |
| 0.16 | 4.0 | 1560 | 0.5610 | 0.8427 | 0.8957 | 0.8850 | 0.9067 |
| 0.16 | 5.0 | 1950 | 0.6180 | 0.8488 | 0.9010 | 0.8799 | 0.9231 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,993 |
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7224
- Accuracy: 0.6979
- F1: 0.4736
- Precision: 0.5074
- Recall: 0.4440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 95 | 0.6009 | 0.65 | 0.2222 | 0.625 | 0.1351 |
| No log | 2.0 | 190 | 0.6140 | 0.675 | 0.3689 | 0.6552 | 0.2568 |
| No log | 3.0 | 285 | 0.6580 | 0.67 | 0.4590 | 0.5833 | 0.3784 |
| No log | 4.0 | 380 | 0.7560 | 0.665 | 0.4806 | 0.5636 | 0.4189 |
| No log | 5.0 | 475 | 0.8226 | 0.665 | 0.464 | 0.5686 | 0.3919 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 2,011 |
ali2066/finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32 | [
"NEGATIVE",
"POSITIVE"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4787
- Accuracy: 0.8138
- F1: 0.8785
- Precision: 0.8489
- Recall: 0.9101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4335 | 0.7732 | 0.8533 | 0.8209 | 0.8883 |
| 0.5141 | 2.0 | 780 | 0.4196 | 0.8037 | 0.8721 | 0.8446 | 0.9015 |
| 0.3368 | 3.0 | 1170 | 0.4519 | 0.8098 | 0.8779 | 0.8386 | 0.9212 |
| 0.2677 | 4.0 | 1560 | 0.4787 | 0.8122 | 0.8785 | 0.8452 | 0.9146 |
| 0.2677 | 5.0 | 1950 | 0.4912 | 0.8146 | 0.8794 | 0.8510 | 0.9097 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,993 |
ali2066/twitter-roberta-base_sentence_itr0_1e-05_all_01_03_2022-13_38_07 | null | Entry not found | 15 |
ali2066/bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7632
- Accuracy: 0.8263
- F1: 0.8871
- Precision: 0.8551
- Recall: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3986 | 0.8305 | 0.8903 | 0.8868 | 0.8938 |
| 0.4561 | 2.0 | 780 | 0.4018 | 0.8439 | 0.9009 | 0.8805 | 0.9223 |
| 0.3111 | 3.0 | 1170 | 0.4306 | 0.8354 | 0.8924 | 0.8974 | 0.8875 |
| 0.1739 | 4.0 | 1560 | 0.5499 | 0.8378 | 0.9002 | 0.8547 | 0.9509 |
| 0.1739 | 5.0 | 1950 | 0.6223 | 0.85 | 0.9052 | 0.8814 | 0.9303 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,934 |
batterydata/batteryscibert-cased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatterySciBERT-cased for Battery Abstract Classification
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batteryscibert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.06,
"Test accuracy": 97.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | 1,464 |
batterydata/batteryonlybert-cased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryOnlyBERT-cased for Battery Abstract Classification
**Language model:** batteryonlybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryonlybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.33,
"Test accuracy": 97.34,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | 1,468 |
batterydata/bert-base-cased-abstract | [
"battery",
"non-battery"
] | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BERT-base-cased for Battery Abstract Classification
**Language model:** bert-base-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 15
base_LM_model = "bert-base-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.84,
"Test accuracy": 96.83,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | 1,444 |
Cheatham/xlm-roberta-large-finetuned-r01 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
evs/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
Cheatham/xlm-roberta-large-finetuned-d1r01 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
emekaboris/autonlp-new_tx-607517182 | [
"1.0",
"2.0",
"3.0",
"4.0",
"5.0",
"6.0",
"7.0",
"8.0"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- emekaboris/autonlp-data-new_tx
co2_eq_emissions: 3.842950628218143
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 607517182
- CO2 Emissions (in grams): 3.842950628218143
## Validation Metrics
- Loss: 0.4033123552799225
- Accuracy: 0.8679706601466992
- Macro F1: 0.719846919916469
- Micro F1: 0.8679706601466993
- Weighted F1: 0.8622411469250695
- Macro Precision: 0.725309168791155
- Micro Precision: 0.8679706601466992
- Weighted Precision: 0.8604370906049568
- Macro Recall: 0.7216672806300003
- Micro Recall: 0.8679706601466992
- Weighted Recall: 0.8679706601466992
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/emekaboris/autonlp-new_tx-607517182
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("emekaboris/autonlp-new_tx-607517182", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("emekaboris/autonlp-new_tx-607517182", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,367 |
Akash7897/distilbert-base-uncased-finetuned-sst2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9036697247706422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3010
- Accuracy: 0.9037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1793 | 1.0 | 4210 | 0.3010 | 0.9037 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
| 1,620 |
pjheslin/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254862165828515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2227
- Accuracy: 0.9255
- F1: 0.9255
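As a usage sketch (assuming the checkpoint resolves on the Hub under the model id above; `LABEL_0` to `LABEL_5` follow the emotion dataset's class order of sadness, joy, love, anger, fear, surprise):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pjheslin/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,
)
# One score per emotion class.
print(classifier("I can't wait to see you again!"))
```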
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8417 | 1.0 | 250 | 0.3260 | 0.9045 | 0.9006 |
| 0.2569 | 2.0 | 500 | 0.2227 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
DoyyingFace/bert-asian-hate-tweets-self-unclean-large | null | Entry not found | 15 |
danielmaxwell/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
daisyxie21/bert-base-uncased-8-10-0.01 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-8-10-0.01
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-8-10-0.01
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8324
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 400 | 0.8324 | 0.0 |
| 1.0904 | 2.0 | 800 | 1.3157 | 0.0 |
| 0.9461 | 3.0 | 1200 | 0.4407 | 0.0 |
| 0.9565 | 4.0 | 1600 | 2.1082 | 0.0 |
| 1.024 | 5.0 | 2000 | 0.7220 | 0.0 |
| 1.024 | 6.0 | 2400 | 0.7414 | 0.0 |
| 0.8362 | 7.0 | 2800 | 0.4442 | 0.0 |
| 0.6765 | 8.0 | 3200 | 0.5481 | 0.0 |
| 0.5902 | 9.0 | 3600 | 0.5642 | 0.0 |
| 0.5476 | 10.0 | 4000 | 0.4449 | 0.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| 2,309 |
DrishtiSharma/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |
anjandash/finetuned-bert-java-cmpx-v1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
language:
- java
license: mit
datasets:
- giganticode/java-cmpx-v1
--- | 77 |
Anthos23/FS-finbert-fine-tuned-f1 | [
"negative",
"neutral",
"positive"
] | Entry not found | 15 |
mrm8488/spanish-TinyBERT-betito-finetuned-mnli | null | Entry not found | 15 |
aaraki/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.40967417350821667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5026
- Matthews Correlation: 0.4097
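The Matthews correlation can be recomputed from saved predictions with scikit-learn; a minimal sketch with illustrative arrays (hypothetical, not the actual CoLA outputs):
```python
from sklearn.metrics import matthews_corrcoef

# Illustrative labels/predictions; in practice these come from running
# the model over the GLUE CoLA validation split.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))
```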
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5335 | 1.0 | 535 | 0.5026 | 0.4097 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,705 |
Sarahliu186/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.548847644400088
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7415
- Matthews Correlation: 0.5488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5273 | 1.0 | 535 | 0.5063 | 0.4092 |
| 0.3491 | 2.0 | 1070 | 0.4956 | 0.5259 |
| 0.2352 | 3.0 | 1605 | 0.6045 | 0.5301 |
| 0.1737 | 4.0 | 2140 | 0.7415 | 0.5488 |
| 0.1264 | 5.0 | 2675 | 0.8459 | 0.5466 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,999 |
markt23917/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8825396825396825
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3351
- Accuracy: 0.8767
- F1: 0.8825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,522 |
cambridgeltl/guardian_news_distilbert-base-uncased | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
cambridgeltl/sst_electra_small | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
bitsanlp/Homophobia-Transphobia-v2-mBERT-EDA | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Homophobia-Transphobia-v2-mBERT-EDA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Homophobia-Transphobia-v2-mBERT-EDA
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5401
- Accuracy: 0.9317
- F1: 0.4498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1699 | 1.0 | 189 | 0.4125 | 0.9229 | 0.4634 |
| 0.0387 | 2.0 | 378 | 0.4658 | 0.9229 | 0.3689 |
| 0.0148 | 3.0 | 567 | 0.5250 | 0.9355 | 0.4376 |
| 0.0005 | 4.0 | 756 | 0.5336 | 0.9317 | 0.4531 |
| 0.0016 | 5.0 | 945 | 0.5401 | 0.9317 | 0.4498 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
| 1,714 |
csclarke/MARS-Encoder | [
"LABEL_0"
] | ---
license: cc
---
# MARS Encoder for Multi-agent Response Selection
This model was trained with the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class and is the model used in the paper [One Agent To Rule Them All: Towards Multi-agent Conversational AI](https://csclarke.com/assets/pdf/ACL_2022.pdf).
## Training Data
This model was trained on the [BBAI dataset](https://github.com/ChrisIsKing/black-box-multi-agent-integation/tree/main/data). Given a user question and a candidate response from a conversational agent, it predicts a score between 0 and 1 ranking the correctness of that response.
## Usage and Performance
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('csclarke/MARS-Encoder')
scores = model.predict([('question 1', 'response 1'), ('question 1', 'response 2')])
```
The model will predict scores for the pairs `('question 1', 'response 1')` and `('question 1', 'response 2')`.
You can also use this model without sentence_transformers, loading it directly through the Transformers ``AutoModelForSequenceClassification`` class.
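A minimal sketch of that route (assuming a single-logit classification head, as the lone `LABEL_0` suggests, with a sigmoid mapping the logit to a 0-1 score):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("csclarke/MARS-Encoder")
tokenizer = AutoTokenizer.from_pretrained("csclarke/MARS-Encoder")

features = tokenizer(
    ["question 1", "question 1"],
    ["response 1", "response 2"],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    # One logit per (question, response) pair.
    scores = torch.sigmoid(model(**features).logits.squeeze(-1))
print(scores)
```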
| 1,168 |
clapika2010/rayyan_predictions | null | Entry not found | 15 |
ScandinavianMrT/distilbert-SARC_withcontext_3.0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-SARC_withcontext_3.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-SARC_withcontext_3.0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,045 |
ScandinavianMrT/distilbert-IMDB-NEG | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-IMDB-NEG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-IMDB-NEG
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1871
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1865 | 1.0 | 2000 | 0.1871 | 0.9346 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,342 |
Rustem/roberta-base-best | [
"LABEL_0"
] | Entry not found | 15 |
ronykroy/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9222310284051585
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8454 | 1.0 | 250 | 0.3308 | 0.8975 | 0.8937 |
| 0.2561 | 2.0 | 500 | 0.2334 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,805 |
madhav16/finetuning-sentiment-model-3000-samples | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8662420382165605
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3404
- Accuracy: 0.86
- F1: 0.8662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,505 |
Ketzu/koelectra-sts-v0.6 | [
"LABEL_0"
] | ---
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: koelectra-sts-v0.6
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8698381401893762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-sts-v0.6
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0059
- Pearson: 0.9988
- Spearmanr: 0.8698
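As a usage sketch (assuming the weights resolve under the model id above and a single regression logit, as the lone `LABEL_0` suggests; the Korean sentence pair is illustrative):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Ketzu/koelectra-sts-v0.6")
tokenizer = AutoTokenizer.from_pretrained("Ketzu/koelectra-sts-v0.6")

inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")
with torch.no_grad():
    # A single logit interpreted as the STS similarity score.
    score = model(**inputs).logits.squeeze().item()
print(score)
```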
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|
| 0.0036 | 1.0 | 6250 | 0.0082 | 0.9983 | 0.8698 |
| 0.0038 | 2.0 | 12500 | 0.0065 | 0.9986 | 0.8697 |
| 0.0105 | 3.0 | 18750 | 0.0071 | 0.9985 | 0.8698 |
| 0.0008 | 4.0 | 25000 | 0.0059 | 0.9988 | 0.8698 |
| 0.0008 | 5.0 | 31250 | 0.0059 | 0.9988 | 0.8698 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,829 |
doctorlan/autonlp-ctrip-653519223 | [
"0",
"1"
] | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- doctorlan/autonlp-data-ctrip
co2_eq_emissions: 24.879856894708393
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 653519223
- CO2 Emissions (in grams): 24.879856894708393
## Validation Metrics
- Loss: 0.14671853184700012
- Accuracy: 0.9676666666666667
- Precision: 0.9794159885112494
- Recall: 0.9742857142857143
- AUC: 0.9901396825396825
- F1: 0.9768441155407017
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/doctorlan/autonlp-ctrip-653519223
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("doctorlan/autonlp-ctrip-653519223", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("doctorlan/autonlp-ctrip-653519223", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | 1,149 |
Messerschmitt/is712 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: afl-3.0
---
| 28 |
leonistor/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.917
- name: F1
type: f1
value: 0.9169673644092439
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2371
- Accuracy: 0.917
- F1: 0.9170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8733 | 1.0 | 250 | 0.3537 | 0.8955 | 0.8911 |
| 0.273 | 2.0 | 500 | 0.2371 | 0.917 | 0.9170 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,799 |
leixu/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9213209184585894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.921
- F1: 0.9213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8242 | 1.0 | 250 | 0.3108 | 0.9125 | 0.9107 |
| 0.2478 | 2.0 | 500 | 0.2183 | 0.921 | 0.9213 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,798 |
ScandinavianMrT/distilbert_ONION_3epoch | null | Entry not found | 15 |
ademarcarneiro/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | Entry not found | 15 |