| modelId | label | readme | readme_len |
|---|---|---|---|
EhsanAghazadeh/bert-large-uncased-CoLA_A | null | Entry not found | 15 |
EhsanAghazadeh/bert-large-uncased-CoLA_B | null | Entry not found | 15 |
Elron/bleurt-large-128 | [
"LABEL_0"
] | ## BLEURT
PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The model conversion code originates from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([ 0.0020, -0.6647])
```
| 1,003 |
ItcastAI/bert_cn_finetunning | null | Entry not found | 15 |
NDugar/1epochv3 | [
"contradiction",
"entailment",
"neutral"
] | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XXLarge model with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and was trained on 160GB of raw data.
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed**, as it is faster and saves memory.
Run with `deepspeed`:
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 256 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 3e-6 \
  --num_train_epochs 3 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 10 \
  --logging_dir $output_dir \
  --deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
  --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
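Since this repository is tagged `zero-shot-classification` and exposes the three NLI labels listed above, a minimal inference sketch for this checkpoint could look like the following. This is a hedged assumption based on the pipeline tag, not an example provided by the model author; the input text and candidate labels are illustrative only.
```python
from transformers import pipeline

# Hypothetical usage sketch: load this repository as a zero-shot classifier
# built on its contradiction/entailment/neutral head.
classifier = pipeline("zero-shot-classification", model="NDugar/1epochv3")

result = classifier(
    "The new phone has an excellent camera and battery life.",
    candidate_labels=["technology", "politics", "sports"],
)
print(result["labels"][0], result["scores"][0])
```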
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
```latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` | 4,788 |
RameshArvind/roberta_long_answer_nq | [
"LABEL_0"
] | Entry not found | 15 |
Roberta55/deberta-base-mnli-finetuned-cola | [
"CONTRADICTION",
"ENTAILMENT",
"NEUTRAL"
] | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: deberta-base-mnli-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6281691768918801
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-mnli-finetuned-cola
This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggingface.co/microsoft/deberta-base-mnli) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8205
- Matthews Correlation: 0.6282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4713 | 1.0 | 535 | 0.5110 | 0.5797 |
| 0.2678 | 2.0 | 1070 | 0.6648 | 0.5154 |
| 0.1811 | 3.0 | 1605 | 0.6681 | 0.6121 |
| 0.113 | 4.0 | 2140 | 0.8205 | 0.6282 |
| 0.0831 | 5.0 | 2675 | 1.0413 | 0.6057 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| 1,988 |
andi611/distilbert-base-uncased-ner-agnews | [
"Business",
"Sci/Tech",
"Sports",
"World"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-agnews
results:
- dataset:
name: ag_news
type: ag_news
args: default
metric:
name: Accuracy
type: accuracy
value: 0.9473684210526315
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-agnews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.9474
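The card does not include an inference snippet; a minimal sketch (assuming the checkpoint loads with the standard text-classification pipeline and maps its outputs to the four AG News labels listed above) might be:
```python
from transformers import pipeline

# Hypothetical usage sketch for the AG News topic classifier.
classifier = pipeline(
    "text-classification",
    model="andi611/distilbert-base-uncased-ner-agnews",
)

print(classifier("NASA launches a new probe to study the outer planets."))
# Expected: a label such as "Sci/Tech" with a confidence score.
```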
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1916 | 1.0 | 3375 | 0.1741 | 0.9412 |
| 0.123 | 2.0 | 6750 | 0.1631 | 0.9483 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
| 1,653 |
appleternity/scibert-uncased-finetuned-coda19 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
boychaboy/MNLI_bert-base-cased_4 | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
boychaboy/MNLI_roberta-large | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
boychaboy/kobias_v2_klue-roberta-base | [
"biased",
"none"
] | Entry not found | 15 |
celtics1863/env-bert-cls-chinese | [
"环境影响评价与管理",
"碳排放控制",
"水污染与控制",
"大气污染与控制",
"土壤污染与控制",
"环境生态",
"固体废物",
"环境毒理与健康",
"环境微生物",
"环境政策与经济"
] | ---
language:
- zh
tags:
- bert
- pytorch
- environment
- multi-class
- classification
---
A Chinese environmental-domain text classification model, fine-tuned from env-bert-chinese on a dataset of 1.6M examples.
It covers 10 classes: environmental impact assessment and control, carbon emission control, water pollution control, air pollution control, soil pollution control, environmental ecology, solid waste, environmental toxicology and health, environmental microbiology, and environmental policy and economics.
The project is ongoing, and related content will be updated over time.
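A minimal inference sketch (an assumption based on the model type and labels above, not an official example from the research group; the input sentence is illustrative) could be:
```python
from transformers import pipeline

# Hypothetical usage sketch for the Chinese environmental-domain classifier.
classifier = pipeline("text-classification", model="celtics1863/env-bert-cls-chinese")

# "Industrial wastewater must be strictly treated before it meets discharge standards."
print(classifier("工业废水排放需要经过严格处理才能达到排放标准。"))
# Expected: one of the 10 environmental categories with a confidence score.
```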
Research group, School of Environment, Tsinghua University.
For related needs or suggestions, contact bi.huaibin@foxmail.com | 293 |
emrecan/convbert-base-turkish-mc4-cased-multinli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
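The card itself only defines widget examples; a minimal zero-shot usage sketch (reusing the first widget example above, with assumptions limited to the standard pipeline API) could be:
```python
from transformers import pipeline

# Hypothetical usage sketch mirroring the widget configuration above.
classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/convbert-base-turkish-mc4-cased-multinli_tr",
)

result = classifier(
    "Dolar yükselmeye devam ediyor.",  # "The dollar keeps rising."
    candidate_labels=["ekonomi", "siyaset", "spor"],
)
print(result["labels"][0])  # expected: "ekonomi"
```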
| 332 |
michaelrglass/albert-base-rci-tabmcq-row | null | Entry not found | 15 |
monologg/koelectra-v3-klue-sts | [
"LABEL_0"
] | Entry not found | 15 |
moussaKam/frugalscore_tiny_bert-base_mover-score | [
"LABEL_0"
] | # FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
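The card lists the checkpoints but no inference snippet; a minimal sketch (an assumption patterned after the BLEURT-style usage shown earlier in this document, treating the model as a single-logit sequence regressor over (reference, candidate) pairs) could be:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Hypothetical usage sketch: score candidate sentences against references.
name = "moussaKam/frugalscore_tiny_bert-base_mover-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
    batch = tokenizer(references, candidates, return_tensors="pt", padding=True)
    scores = model(**batch).logits.squeeze(-1)
print(scores)  # one approximate metric score per (reference, candidate) pair
```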
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | | 2,592 |
sagittariusA/gender_classifier_cs | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
shahrukhx01/bert-multitask-query-classifiers | null | # A Multi-task learning model with two prediction heads
* One prediction head classifies keyword queries vs. statements/questions
* The other prediction head classifies statements vs. questions
## Scores
##### Spaadia SQuaD Test acc: **0.9891**
##### Quora Keyword Pairs Test acc: **0.98048**
## Datasets:
Quora Keyword Pairs: https://www.kaggle.com/stefanondisponibile/quora-question-keyword-pairs
Spaadia SQuaD pairs: https://www.kaggle.com/shahrukhkhan/questions-vs-statementsclassificationdataset
## Article
[Medium article](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)
## Demo Notebook
[Colab Notebook Multi-task Query classifiers](https://colab.research.google.com/drive/1R7WcLHxDsVvZXPhr5HBgIWa3BlSZKY6p?usp=sharing)
## Clone the model repo
```bash
git clone https://huggingface.co/shahrukhx01/bert-multitask-query-classifiers
```
```python
%cd bert-multitask-query-classifiers/
```
## Load model
```python
from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch
model = BertForSequenceClassification.from_pretrained(
"shahrukhx01/bert-multitask-query-classifiers",
task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")
```
## Run inference on both Tasks
```python
from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch
model = BertForSequenceClassification.from_pretrained(
"shahrukhx01/bert-multitask-query-classifiers",
task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")
## Keyword vs Statement/Question Classifier
input = ["keyword query", "is this a keyword query?"]
task_name="quora_keyword_pairs"
sequence = tokenizer(input, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), axis=1)
for input, prediction in zip(input, predictions):
print(f"task: {task_name}, input: {input} \n prediction=> {prediction}")
print()
## Statement vs Question Classifier
input = ["where is berlin?", "is this a keyword query?", "Berlin is in Germany."]
task_name="spaadia_squad_pairs"
sequence = tokenizer(input, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), axis=1)
for input, prediction in zip(input, predictions):
print(f"task: {task_name}, input: {input} \n prediction=> {prediction}")
print()
``` | 2,813 |
mrm8488/electricidad-small-finetuned-amazon-review-classification | [
"⭐",
"⭐⭐",
"⭐⭐⭐",
"⭐⭐⭐⭐",
"⭐⭐⭐⭐⭐"
] | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
widget:
- text: "me parece muy mal , se salía el producto por la caja y venían vacios , lo devolvere"
- text: "Correa de buena calidad, con un interior oscuro. Cumple perfectamente su función y se intercambia fácilmente. Una buena opción para cambiar el aspecto del reloj"
- text: "cumple su cometido sin nada que merezca la pena destacar"
metrics:
- accuracy
model-index:
- name: electricidad-small-finetuned-amazon-review-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5832
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-finetuned-amazon-review-classification
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9506
- Accuracy: 0.5832
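A minimal inference sketch (assuming the standard text-classification pipeline; the review is the first widget example above) could be:
```python
from transformers import pipeline

# Hypothetical usage sketch: predict a 1-5 star rating for a Spanish review.
classifier = pipeline(
    "text-classification",
    model="mrm8488/electricidad-small-finetuned-amazon-review-classification",
)

review = "me parece muy mal , se salía el producto por la caja y venían vacios , lo devolvere"
print(classifier(review))  # expected: a low star rating such as "⭐"
```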
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0258 | 1.0 | 6250 | 1.0209 | 0.5502 |
| 0.9668 | 2.0 | 12500 | 0.9960 | 0.565 |
| 0.953 | 3.0 | 18750 | 0.9802 | 0.5704 |
| 0.9201 | 4.0 | 25000 | 0.9831 | 0.567 |
| 0.902 | 5.0 | 31250 | 0.9814 | 0.5672 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6 | 2,317 |
RuudVelo/dutch_news_clf_bert_finetuned | [
"Binnenland",
"Buitenland",
"Cultuur & Media",
"Economie",
"Koningshuis",
"Opmerkelijk",
"Politiek",
"Regionaal nieuws",
"Tech"
] | Entry not found | 15 |
Stremie/bert-base-uncased-clickbait-keywords | null | This model classifies whether a tweet is clickbait or not. It has been trained using [Webis-Clickbait-17](https://webis.de/data/webis-clickbait-17.html) dataset. Input is composed of 'postText' + '[SEP]' + 'targetKeywords'. Achieved ~0.7 F1-score on test data. | 261 |
V3RX2000/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9247142990809298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.9245
- F1: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8812 | 1.0 | 250 | 0.3301 | 0.906 | 0.9035 |
| 0.2547 | 2.0 | 500 | 0.2285 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
clapika2010/soccer_predictions | null | Entry not found | 15 |
PaulTran/vietnamese_essay_identify | [
"Nghị luận",
"Biểu cảm",
"Miêu tả",
"Tự sự",
"Thuyết minh"
] | ---
language:
- vi
- Vietnamese
tags:
- essay category
- text-classification
widget:
- text: "Cái đồng hồ của em cao hơn 30 cm. Đế của nó được làm bằng i-nốc sáng loáng hình bầu dục. Chỗ dài nhất của đế vừa bằng gang tay của em. Chỗ rộng nhất bằng hơn nửa gang tay."
example_title: "Descriptive - Miêu tả"
- text: "Hiện nay, đại dịch Covid-19 diễn biến ngày một phức tạp, nó khiến nền kinh tế trì trệ, cuộc sống con người hoàn toàn xáo trộn và luôn ở trạng thái lo ngại... và cùng với đó chính là việc học sinh - sinh viên không thể tới trường. Một trong những điều đáng lo ngại nhất khi tình hình dịch bệnh không biết bao giờ mới ổn định."
example_title: "Argumentative - Nghị luận"
- text: "Cấu tạo của chiếc kính gồm hai bộ phận chính là gọng kính và mắt kính. Gọng kính được làm bằng nhựa cao cấp hoặc kim loại quý. Gọng kính chia làm hai phần: phần khung để lắp mắt kính và phần gọng để đeo vào tai, nối với nhau bởi các ốc vít nhỏ, có thể mở ra, gập lại dễ dàng. Chất liệu để làm mắt kính là nhựa hoặc thủy tinh trong suốt. Gọng kính và mắt kính có nhiều hình dáng, màu sắc khác nhau."
example_title: "Expository - Thuyết minh"
- text: "Em yêu quý đào vì nó là loài cây đặc trưng của miền Bắc vào Tết đến xuân sang. Đào bình dị nhưng gắn liền với tuổi thơ em nồng nàn. Tuổi thơ đã từng khao khát nhà có một cây đào mộc mạc để háo hức vui tươi trong ngày Tết."
example_title: "Expressive - Biểu cảm"
- text: "Hắn vừa đi vừa chửi. Bao giờ cũng thế, cứ rượu xong là hắn chửi. Bắt đầu chửi trời, có hề gì? Trời có của riêng nhà nào? Rồi hắn chửi đời. Thế cũng chẳng sao: Đời là tất cả nhưng cũng chẳng là ai."
example_title: "Narrative - Tự sự"
---
This is a fine-tuned PhoBERT model for essay category classification.
- At primary levels of education in Vietnam, students are introduced to 5 categories of essays:
- Argumentative - Nghị luận
- Expressive - Biểu cảm
- Descriptive - Miêu tả
- Narrative - Tự sự
- Expository - Thuyết minh
- This model will classify sentences into these 5 categories
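A minimal classification sketch (an assumption based on the `text-classification` tag; the input is taken from the descriptive widget example above, and note that PhoBERT-based models typically expect word-segmented Vietnamese input, which is omitted here for brevity) could be:
```python
from transformers import pipeline

# Hypothetical usage sketch for the Vietnamese essay-category classifier.
classifier = pipeline("text-classification", model="PaulTran/vietnamese_essay_identify")

sentence = "Cái đồng hồ của em cao hơn 30 cm."  # descriptive sentence from the widget
print(classifier(sentence))  # expected: "Miêu tả" (Descriptive)
```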
The general architecture and experimental results of PhoBERT can be found in EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744):
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
} | 2,407 |
UT/BRTW | null | Entry not found | 15 |
HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg | [
"contradiction",
"entailment",
"neutral"
] | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information please, take a look to the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Stanford Natural Language Inference (SNLI) dataset.
- `M`: Multi-Genre Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
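A minimal sketch of using this checkpoint with the standard `zero-shot-classification` pipeline follows. The premise, candidate labels, hypothesis template, and the use of `[[...]]` trigger marking for this WikiEvents-arg checkpoint are all illustrative assumptions, not examples from the authors; see the Ask2Transformers repository for the intended usage.
```python
from transformers import pipeline

# Hypothetical usage sketch via the zero-shot classification pipeline mentioned above.
classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_WikiEvents-arg",
)

premise = "The factory was [[attacked]] by a group of armed men on Tuesday."
result = classifier(
    premise,
    candidate_labels=["attacker", "target", "instrument"],
    hypothesis_template="The {} of the event is mentioned in the text.",
)
print(result["labels"][0], result["scores"][0])
```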
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` | 3,612 |
LACAI/roberta-large-adapted-PFG-progression | [
"LABEL_0"
] | ---
license: mit
---
Base model: [lacai/roberta-large-dialog-narrative](https://huggingface.co/lacai/roberta-large-dialog-narrative)
Fine-tuned as a progression model (to predict the acceptability of a dialogue) on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019):
Given a complete dialogue from (or in the style of) Persuasion For Good, the task is to predict a numeric score, typically in the range (-3, 3), where a higher score means a more acceptable dialogue in the context of the donation solicitation task.
This model inherits a special dialogue token `<d>` from its base model, which indicates the start of a dialogue utterance.
**Example input**: `<d>How are you?</s><d>Good! how about yourself?</s><d>Great. Would you like to donate today to help the children?</s>`
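A minimal scoring sketch based on the example input above (an assumption: the single-logit classification head is read directly as the progression score) could be:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Hypothetical usage sketch: score a dialogue for acceptability/progression.
name = "LACAI/roberta-large-adapted-PFG-progression"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

dialogue = ("<d>How are you?</s><d>Good! how about yourself?</s>"
            "<d>Great. Would you like to donate today to help the children?</s>")
with torch.no_grad():
    inputs = tokenizer(dialogue, return_tensors="pt")
    score = model(**inputs).logits.squeeze().item()
print(score)  # higher means a more acceptable dialogue
```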
For more context and usage information see [https://github.rpi.edu/LACAI/dialogue-progression](https://github.rpi.edu/LACAI/dialogue-progression). | 975 |
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_42 | [
"0",
"1",
"2"
] | Entry not found | 15 |
Bryan0123/bert-hashtag-to-hashtag-20 | [
"#art",
"#beautiful",
"#bhfyp",
"#cute",
"#fashion",
"#fitness",
"#follow",
"#happy",
"#instagood",
"#instagram",
"#love",
"#music",
"#nature",
"#photo",
"#photography",
"#photooftheday",
"#picoftheday",
"#style",
"#travel",
"#travelphotography"
] | Entry not found | 15 |
connectivity/bert_ft_qqp-14 | null | Entry not found | 15 |
Rebreak/autotrain-News-916530070 | [
"0",
"1"
] | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Rebreak/autotrain-data-News
co2_eq_emissions: 62.61326668998836
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 916530070
- CO2 Emissions (in grams): 62.61326668998836
## Validation Metrics
- Loss: 0.0855042040348053
- Accuracy: 0.9773220921733938
- Precision: 0.673469387755102
- Recall: 0.014864864864864866
- AUC: 0.8605107881181646
- F1: 0.029087703834288235
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Rebreak/autotrain-News-916530070
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Rebreak/autotrain-News-916530070", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Rebreak/autotrain-News-916530070", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,154 |
YuryK/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.933
- name: F1
type: f1
value: 0.9332773351360893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1669
- Accuracy: 0.933
- F1: 0.9333
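A minimal inference sketch (assuming the standard text-classification pipeline; the input sentence is illustrative) could be:
```python
from transformers import pipeline

# Hypothetical usage sketch for the emotion classifier.
classifier = pipeline(
    "text-classification",
    model="YuryK/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe I finally got the job, this is amazing!"))
# Expected: a label such as "joy" with a confidence score.
```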
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8058 | 1.0 | 250 | 0.2778 | 0.917 | 0.9158 |
| 0.2124 | 2.0 | 500 | 0.1907 | 0.926 | 0.9262 |
| 0.1473 | 3.0 | 750 | 0.1669 | 0.933 | 0.9333 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,870 |
chkla/parlbert-topic-german | [
"Macroeconomics",
"Civil",
"Law",
"Social",
"Housing",
"Domestic",
"Defense",
"Technology",
"Foreign",
"International",
"Government",
"Public",
"Health",
"Culture",
"Agriculture",
"Labor",
"Education",
"Environment",
"Energy",
"Immigration",
"Transportation"
] | ---
language: german
---
### Welcome to ParlBERT-Topic-German!
🏷 **Model description**
This model was trained on ~10k manually annotated interpellations (📚 [Breunig/ Schnatterer 2019](https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198835332.001.0001/oso-9780198835332)) with topics from the [Comparative Agendas Project](https://www.comparativeagendas.net/datasets_codebooks) to classify text into one of twenty labels (annotation codebook).
_Note: "Interpellation is a formal request of a parliament to the respective government."([Wikipedia](https://en.wikipedia.org/wiki/Interpellation_(politics)))_
🗃 **Dataset**
| party | speeches | tokens |
|----|----|----|
| CDU/CSU | 7,635 | 4,862,654 |
| SPD | 5,321 | 3,158,315 |
| AfD | 3,465 | 1,844,707 |
| FDP | 3,067 | 1,593,108 |
| The Greens | 2,866 | 1,522,305 |
| The Left | 2,671 | 1,394,089 |
| cross-bencher | 200 | 86,170 |
🏃🏼♂️**Model training**
**ParlBERT-Topic-German** was fine-tuned on a domain adapted model (GermanBERT fine-tuned on [DeuParl](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2889?show=full)) for topic modeling with an interpellations dataset (📚 [Breunig/ Schnatterer 2019](https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198835332.001.0001/oso-9780198835332)) from the [Comparative Agendas Project](https://www.comparativeagendas.net/datasets_codebooks).
🤖 **Use**
```python
from transformers import pipeline
pipeline_classification_topics = pipeline("text-classification", model="chkla/parlbert-topics-german", tokenizer="bert-base-german-cased", return_all_scores=False)
text = "Sachgebiet Ausschließliche Gesetzgebungskompetenz des Bundes über die Zusammenarbeit des Bundes und der Länder zum Schutze der freiheitlichen demokratischen Grundordnung, des Bestandes und der Sicherheit des Bundes oder eines Landes Wir fragen die Bundesregierung"
pipeline_classification_topics(text) # Government
```
📊 **Evaluation**
The model was evaluated on an evaluation set (20%):
| Label | F1 | support |
|----|----|----|
| International | 80.0 | 1,126 |
| Defense | 85.0 | 1,099 |
| Government | 71.3 | 989 |
| Civil Rights | 76.5 | 978 |
| Environment | 76.6 | 845 |
| Transportation | 86.0 | 800 |
| Law & Crime | 67.1 | 492 |
| Energy | 78.6 | 424 |
| Health | 78.2 | 418 |
| Domestic Com. | 64.4 | 382 |
| Immigration | 81.0 | 376 |
| Labor | 69.1 | 344 |
| Macroeconom. | 62.8 | 339 |
| Agriculture | 76.3 | 292 |
| Social Welfare | 49.2 | 253 |
| Technology | 63.0 | 252 |
| Education | 71.6 | 183 |
| Housing | 79.6 | 178 |
| Foreign Trade | 61.5 | 139 |
| Culture | 54.6 | 69 |
| Public Lands | 45.4 | 55 |
⚠️ **Limitations**
Models are often highly topic dependent. Therefore, the model may perform less well on different topics and text types not included in the training set.
👥 **Cite**
```
@article{klamm2022frameast,
title={FrameASt: A Framework for Second-level Agenda Setting in Parliamentary Debates through the Lense of Comparative Agenda Topics},
author={Klamm, Christopher and Rehbein, Ines and Ponzetto, Simone},
journal={ParlaCLARIN III at LREC2022},
year={2022}
}
```
🐦 Twitter: [@chklamm](http://twitter.com/chklamm) | 3,190 |
davidcechak/DNADeberta_fine | null | Entry not found | 15 |
davidcechak/DNADeberta_fine_0.6394267984578837 | null | Entry not found | 15 |
davidcechak/DNADeberta_finedemo_coding_vs_intergenomic_seqs | null | Entry not found | 15 |
epomponio/finetuned-bert-model | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4"
] | Entry not found | 15 |
Sayan01/tiny-bert-qqp-distilled | [
"duplicate",
"not_duplicate"
] | Entry not found | 15 |
annahaz/xlm-roberta-base-finetuned-misogyny | [
"0",
"1"
] | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-finetuned-misogyny
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-misogyny
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7913
- Accuracy: 0.8925
- F1: 0.8280
- Precision: 0.8240
- Recall: 0.8320
- Mae: 0.1075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.328 | 1.0 | 828 | 0.3477 | 0.8732 | 0.7831 | 0.8366 | 0.7359 | 0.1268 |
| 0.273 | 2.0 | 1656 | 0.2921 | 0.8910 | 0.8269 | 0.8171 | 0.8369 | 0.1090 |
| 0.2342 | 3.0 | 2484 | 0.3222 | 0.8834 | 0.8176 | 0.7965 | 0.8398 | 0.1166 |
| 0.2132 | 4.0 | 3312 | 0.3801 | 0.8852 | 0.8223 | 0.7933 | 0.8534 | 0.1148 |
| 0.1347 | 5.0 | 4140 | 0.5474 | 0.8955 | 0.8314 | 0.8346 | 0.8282 | 0.1045 |
| 0.1187 | 6.0 | 4968 | 0.5853 | 0.8886 | 0.8137 | 0.8475 | 0.7825 | 0.1114 |
| 0.0968 | 7.0 | 5796 | 0.6378 | 0.8916 | 0.8267 | 0.8223 | 0.8311 | 0.1084 |
| 0.0533 | 8.0 | 6624 | 0.7397 | 0.8831 | 0.8191 | 0.7899 | 0.8505 | 0.1169 |
| 0.06 | 9.0 | 7452 | 0.8112 | 0.8861 | 0.8224 | 0.7987 | 0.8476 | 0.1139 |
| 0.0287 | 10.0 | 8280 | 0.7913 | 0.8925 | 0.8280 | 0.8240 | 0.8320 | 0.1075 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 2,469 |
anahitapld/robera-base-dbd | null | ---
license: apache-2.0
---
| 28 |
CenIA/distillbert-base-spanish-uncased-finetuned-mldoc | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3"
] | Entry not found | 15 |
EasthShin/Android_Ios_Classification | null | ## Bert-base-uncased for Android-Ios Question Classification
**Code**: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/EastHShin/Android-Ios-Classification-Workspace)
<br>
**Android-Ios-Classification DEMO**: [Ainize Endpoint](https://main-android-ios-classification-east-h-shin.endpoint.ainize.ai/)
<br>
**Demo web Code**: [Github](https://github.com/EastHShin/Android-Ios-Classification)
<br>
**Android-Ios-Classification API**: [Ainize API](https://ainize.ai/EastHShin/Android-Ios-Classification)
<br>
<br>
## Overview
**Language model**: bert-base-cased
<br>
**Language**: English
<br>
**Training data**: Question classification Android-Ios dataset from [Kaggle](https://www.kaggle.com/xhlulu/question-classification-android-or-ios)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_path = "EasthShin/Android_Ios_Classification"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
classifier = pipeline('text-classification', model=model_path, tokenizer=tokenizer)
question = "I bought goodnote in Appstore"
result = dict()
result[0] = classifier(question)[0]
``` | 1,265 |
EhsanAghazadeh/xlnet-large-cased-CoLA_C | null | Entry not found | 15 |
Ivo/emscad-skill-extraction-conference | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish | [
"HATE_SPEECH",
"OFFENSIVE",
"NEITHER"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2869
- Accuracy: 0.9012
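A minimal inference sketch (assuming the repository id shown at the start of this row and the standard text-classification pipeline; the example sentence is illustrative) could be:
```python
from transformers import pipeline

# Hypothetical usage sketch for the Spanish hate-speech/offensive classifier.
classifier = pipeline(
    "text-classification",
    model="JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish",
)

print(classifier("Este comentario es muy amable y respetuoso."))
# Expected: "NEITHER" with a confidence score.
```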
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3222 | 1.0 | 1255 | 0.2869 | 0.9012 |
| 0.2418 | 2.0 | 2510 | 0.3125 | 0.8987 |
| 0.1726 | 3.0 | 3765 | 0.4120 | 0.8943 |
| 0.0685 | 4.0 | 5020 | 0.5239 | 0.8919 |
| 0.0245 | 5.0 | 6275 | 0.5910 | 0.8947 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| 1,618 |
Li/bert-base-uncased-qnli | [
"entailment",
"not_entailment"
] | [bert-base-uncased](https://huggingface.co/bert-base-uncased) fine-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs.
The fine-tuning process was performed on 2x NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=8
gradient_accumulation_steps=2
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.916895
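A minimal inference sketch (an assumption: the checkpoint loads with the standard sequence-classification API and maps its two outputs to the entailment/not_entailment labels listed above; the question and sentence are illustrative) could be:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Hypothetical usage sketch: does the sentence answer the question?
name = "Li/bert-base-uncased-qnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

question = "When was the university founded?"
sentence = "The university was founded in 1636."
with torch.no_grad():
    inputs = tokenizer(question, sentence, return_tensors="pt")
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # expected: "entailment"
```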
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark.
(source: https://paperswithcode.com/dataset/qnli) | 1,529 |
Tymoteusz/distilbert-base-uncased-kaggle-readability | [
"LABEL_0"
] | Entry not found | 15 |
biu-nlp/superpal | [
"aligned",
"not_aligned"
] | ---
widget:
- text: "Prime Minister Hun Sen insisted that talks take place in Cambodia. </s><s> Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country."
---
# SuperPAL model
Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline
Ori Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan, 2021. [PDF](https://arxiv.org/pdf/2009.00590)
**How to use?**
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("biu-nlp/superpal")
model = AutoModelForSequenceClassification.from_pretrained("biu-nlp/superpal")
```
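A short scoring sketch for one summary/source pair follows; the pair format mirrors the widget example above, and reading the two outputs as aligned/not_aligned probabilities is an assumption based on the labels listed for this repository.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Hypothetical scoring sketch: one summary sentence </s><s> one source sentence.
tokenizer = AutoTokenizer.from_pretrained("biu-nlp/superpal")
model = AutoModelForSequenceClassification.from_pretrained("biu-nlp/superpal")
model.eval()

pair = ("Prime Minister Hun Sen insisted that talks take place in Cambodia. </s><s> "
        "Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country.")
with torch.no_grad():
    probs = torch.softmax(model(**tokenizer(pair, return_tensors="pt")).logits, dim=-1).squeeze()
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```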
The original repo is [here](https://github.com/oriern/SuperPAL).
If you find our work useful, please cite the paper as:
```bibtex
@inproceedings{ernst-etal-2021-summary,
title = "Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline",
author = "Ernst, Ori and Shapira, Ori and Pasunuru, Ramakanth and Lepioshkin, Michael and Goldberger, Jacob and Bansal, Mohit and Dagan, Ido",
booktitle = "Proceedings of the 25th Conference on Computational Natural Language Learning",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.conll-1.25",
pages = "310--322"
}
``` | 1,394 |
boychaboy/MNLI_bert-base-cased_3 | [
"contradiction",
"entailment",
"neutral"
] | Entry not found | 15 |
coldfir3/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9222116474112371
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8262 | 1.0 | 250 | 0.3073 | 0.904 | 0.9021 |
| 0.2484 | 2.0 | 500 | 0.2175 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1,805 |
emrecan/distilbert-base-turkish-cased-multinli_tr | [
"contradiction",
"entailment",
"neutral"
] | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
| 332 |
finiteautomata/bert-non-contextualized-hate-speech-es | [
"Hateful",
"Not hateful"
] | Entry not found | 15 |
gchhablani/bert-base-cased-finetuned-qqp | [
"duplicate",
"not_duplicate"
] | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-cased-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.9083848627256987
- name: F1
type: f1
value: 0.8767633750332712
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-qqp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3752
- Accuracy: 0.9084
- F1: 0.8768
- Combined Score: 0.8926
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qqp \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-qqp \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.308 | 1.0 | 22741 | 0.2548 | 0.8925 | 0.8556 | 0.8740 |
| 0.201 | 2.0 | 45482 | 0.2881 | 0.9032 | 0.8698 | 0.8865 |
| 0.1416 | 3.0 | 68223 | 0.3752 | 0.9084 | 0.8768 | 0.8926 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 2,876 |
howey/electra-base-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
josephgatto/paint_doctor_speaker_identification | null | This model is a bert for sequence classification model fine-tuned on the MedDialogue dataset. Basically, the task is just to predict if a given sentence in the corpus was spoken by the patient or doctor. | 204 |
kornosk/bert-election2020-twitter-stance-biden | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
language: "en"
tags:
- twitter
- stance-detection
- election2020
- politics
license: "gpl-3.0"
---
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT)
Pre-trained weights for **f-BERT** in [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden.
# Training Objective
This model is initialized with BERT-base and trained with the normal MLM objective, with a classification layer fine-tuned for stance detection towards Joe Biden.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Biden!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Biden is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
``` | 3,589 |
monologg/koelectra-base-finetuned-sentiment | [
"negative",
"positive"
] | Entry not found | 15 |
mrm8488/codebert2codebert-finetuned-code-defect-detection | null | Entry not found | 15 |
yoshitomo-matsubara/bert-base-uncased-sst2 | null | ---
language: en
tags:
- bert
- sst2
- glue
- torchdistill
license: apache-2.0
datasets:
- sst2
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on SST-2 dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/sst2/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
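A minimal inference sketch (assuming the checkpoint works with the standard text-classification pipeline; the example sentence is illustrative) could be:
```python
from transformers import pipeline

# Hypothetical usage sketch for the SST-2 sentiment classifier.
classifier = pipeline(
    "text-classification",
    model="yoshitomo-matsubara/bert-base-uncased-sst2",
)

print(classifier("A gripping, beautifully shot film."))
# Expected: a positive label with a confidence score.
```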
| 827 |
ctu-aic/xlm-roberta-large-xnli-csfever | [
"contradiction",
"entailment",
"neutral"
] | ---
license: cc-by-sa-3.0
---
| 33 |
celine98/canine-s-finetuned-sst2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: canine-s-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8577981651376146
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-sst2
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5259
- Accuracy: 0.8578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3524 | 1.0 | 4210 | 0.4762 | 0.8257 |
| 0.2398 | 2.0 | 8420 | 0.4169 | 0.8567 |
| 0.1797 | 3.0 | 12630 | 0.5259 | 0.8578 |
| 0.152 | 4.0 | 16840 | 0.5996 | 0.8532 |
| 0.1026 | 5.0 | 21050 | 0.6676 | 0.8578 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,828 |
princeton-nlp/CoFi-MNLI-s95 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset MNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
| 435 |
samayash/finetuning-financial-news-sentiment | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-financial-news-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-financial-news-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3345
- Accuracy: 0.8751
- F1: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 1,206 |
blacktree/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5285676961321106
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4883
- Matthews Correlation: 0.5286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
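As a hedged sketch (not from the original card), the Matthews correlation reported above could be re-checked on the standard GLUE CoLA validation split with the `datasets` and `evaluate` libraries; the batch size and padding choices here are arbitrary.
```python
import evaluate
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_id = "blacktree/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()
dataset = load_dataset("glue", "cola", split="validation")
metric = evaluate.load("matthews_correlation")
predictions = []
for start in range(0, len(dataset), 32):
    batch = dataset[start:start + 32]  # slicing a Dataset returns a dict of column lists
    inputs = tokenizer(batch["sentence"], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predictions.extend(logits.argmax(dim=-1).tolist())
print(metric.compute(predictions=predictions, references=dataset["label"]))
```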
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5269 | 1.0 | 535 | 0.5197 | 0.4187 |
| 0.3477 | 2.0 | 1070 | 0.4883 | 0.5286 |
| 0.2333 | 3.0 | 1605 | 0.6530 | 0.5079 |
| 0.17 | 4.0 | 2140 | 0.7567 | 0.5272 |
| 0.1271 | 5.0 | 2675 | 0.8887 | 0.5259 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
| 1,999 |
Adrian/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.927345202022014
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2071
- Accuracy: 0.9275
- F1: 0.9273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8153 | 1.0 | 250 | 0.2942 | 0.9125 | 0.9102 |
| 0.2406 | 2.0 | 500 | 0.2071 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,799 |
Intel/xlm-roberta-base-mrpc | [
"equivalent",
"not_equivalent"
] | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8578431372549019
- name: F1
type: f1
value: 0.901023890784983
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-mrpc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3703
- Accuracy: 0.8578
- F1: 0.9010
- Combined Score: 0.8794
## Model description
More information needed
## Intended uses & limitations
More information needed
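Since the card leaves usage unspecified, here is a minimal sentence-pair sketch (an illustration, not official documentation); the example sentences are made up, and the `equivalent`/`not_equivalent` labels are read from the model config rather than hard-coded.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_id = "Intel/xlm-roberta-base-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()
# MRPC is a sentence-pair task, so both sentences are encoded together.
sentence1 = "The company said quarterly profit rose 10 percent."
sentence2 = "Quarterly profit at the company increased by 10 percent, it said."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
for idx, prob in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(prob, 3))
```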
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
| 1,508 |
Zia/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9365
- name: F1
type: f1
value: 0.9366968648795959
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1707
- Accuracy: 0.9365
- F1: 0.9367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0746 | 1.0 | 250 | 0.1932 | 0.9335 | 0.9330 |
| 0.0565 | 2.0 | 500 | 0.1774 | 0.939 | 0.9391 |
| 0.0539 | 3.0 | 750 | 0.1707 | 0.9365 | 0.9367 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,872 |
Alassea/glue_sst_classifier | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,993 |
Truefilter/bbase_go_emotions | [
"admiration",
"amusement",
"anger",
"annoyance",
"approval",
"caring",
"confusion",
"curiosity",
"desire",
"disappointment",
"disapproval",
"disgust",
"embarrassment",
"excitement",
"fear",
"gratitude",
"grief",
"joy",
"love",
"nervousness",
"neutral",
"optimism",
"pride"... | Entry not found | 15 |
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_77 | [
"0",
"1",
"2",
"3",
"4"
] | Entry not found | 15 |
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_88 | [
"0",
"1",
"2",
"3",
"4"
] | Entry not found | 15 |
connectivity/bert_ft_qqp-15 | null | Entry not found | 15 |
Akshat/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9216312760504648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2246
- Accuracy: 0.922
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8424 | 1.0 | 250 | 0.3246 | 0.9025 | 0.8989 |
| 0.2533 | 2.0 | 500 | 0.2246 | 0.922 | 0.9216 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1,798 |
ericntay/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240722191505606
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2055
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7795 | 1.0 | 250 | 0.2920 | 0.911 | 0.9079 |
| 0.2373 | 2.0 | 500 | 0.2055 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,804 |
YeRyeongLee/bert-large-uncased-finetuned-filtered-0602 | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-finetuned-filtered-0602
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-filtered-0602
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8409
- Accuracy: 0.1667
- F1: 0.0476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.8331 | 1.0 | 3180 | 1.8054 | 0.1667 | 0.0476 |
| 1.8158 | 2.0 | 6360 | 1.8196 | 0.1667 | 0.0476 |
| 1.8088 | 3.0 | 9540 | 1.8059 | 0.1667 | 0.0476 |
| 1.8072 | 4.0 | 12720 | 1.7996 | 0.1667 | 0.0476 |
| 1.8182 | 5.0 | 15900 | 1.7962 | 0.1667 | 0.0476 |
| 1.7993 | 6.0 | 19080 | 1.8622 | 0.1667 | 0.0476 |
| 1.7963 | 7.0 | 22260 | 1.8378 | 0.1667 | 0.0476 |
| 1.7956 | 8.0 | 25440 | 1.8419 | 0.1667 | 0.0476 |
| 1.7913 | 9.0 | 28620 | 1.8406 | 0.1667 | 0.0476 |
| 1.7948 | 10.0 | 31800 | 1.8409 | 0.1667 | 0.0476 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.12.1
| 2,103 |
course5i/SEAD-L-6_H-384_A-12-mnli | [
"0",
"1",
"2"
] | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- mnli
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-mnli
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as the teacher, using the SEAD framework on the **mnli** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased)
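A minimal premise/hypothesis inference sketch follows (added for illustration; it is not from the original card). How the raw `0/1/2` labels map to entailment, neutral and contradiction should be read from the checkpoint's config rather than assumed.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_id = "course5i/SEAD-L-6_H-384_A-12-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
# model.config.id2label says which index corresponds to which NLI class.
print({model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs.tolist())})
```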
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import os
import torch
# training_args.bin stores the TrainingArguments object that was used for fine-tuning
hyperparameters = torch.load(os.path.join('training_args.bin'))
```
### Evaluation results
| eval_m-accuracy | eval_m-runtime | eval_m-samples_per_second | eval_m-steps_per_second | eval_m-loss | eval_m-samples | eval_mm-accuracy | eval_mm-runtime | eval_mm-samples_per_second | eval_mm-steps_per_second | eval_mm-loss | eval_mm-samples |
|:---------------:|:--------------:|:-------------------------:|:-----------------------:|:-----------:|:--------------:|:----------------:|:---------------:|:--------------------------:|:------------------------:|:------------:|:---------------:|
| 0.8495 | 6.5443 | 1499.776 | 46.911 | 0.4366 | 9815 | 0.8508 | 5.6975 | 1725.678 | 54.059 | 0.4252 | 9832 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
| 4,087 |
BM-K/KoMiniLM-68M | [
"0",
"1"
] | # KoMiniLM
🐣 Korean mini language model
## Overview
Current language models usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this project, we release a lightweight Korean language model to address the aforementioned shortcomings of existing language models.
## Quick tour
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("BM-K/KoMiniLM-68M") # 68M model
model = AutoModel.from_pretrained("BM-K/KoMiniLM-68M")
inputs = tokenizer("안녕 세상아!", return_tensors="pt")
outputs = model(**inputs)
```
## Update history
**Updates on 2022.06.20**
- Release KoMiniLM-bert-68M
**Updates on 2022.05.24**
- Release KoMiniLM-bert-23M
## Pre-training
`Teacher Model`: [KLUE-BERT(base)](https://github.com/KLUE-benchmark/KLUE)
### Object
Self-Attention Distribution and Self-Attention Value-Relation [[Wang et al., 2020]](https://arxiv.org/abs/2002.10957) were distilled from each layer of the teacher model into the student model. Wang et al. distilled only from the last transformer layer, whereas this project distills from every layer.
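To make the objective concrete, here is a rough PyTorch sketch of the two distillation terms (an illustrative reconstruction following the MiniLM description, not this project's actual training code); tensor shapes and the student-to-teacher layer mapping are assumptions.
```python
import torch
import torch.nn.functional as F
def attention_kd_loss(student_attn, teacher_attn, eps=1e-12):
    # Both tensors: (batch, heads, seq_len, seq_len), already softmax-normalized.
    # KL(teacher || student), averaged over the batch.
    return F.kl_div(torch.log(student_attn + eps), teacher_attn, reduction="batchmean")
def value_relation(values):
    # values: (batch, heads, seq_len, head_dim) from a layer's value projection.
    # Scaled dot-product of the values with themselves, normalized over keys.
    d = values.size(-1)
    return torch.matmul(values, values.transpose(-1, -2)).div(d ** 0.5).softmax(dim=-1)
def distillation_loss(student_attns, teacher_attns, student_values, teacher_values, layer_pairs):
    # layer_pairs maps each student layer index to the teacher layer it learns from.
    total = 0.0
    for s, t in layer_pairs:
        total = total + attention_kd_loss(student_attns[s], teacher_attns[t])
        total = total + attention_kd_loss(value_relation(student_values[s]), value_relation(teacher_values[t]))
    return total
```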
### Data sets
|Data|News comments|News article|
|:----:|:----:|:----:|
|size|10G|10G|
### Config
- **KoMiniLM-68M**
```json
{
"architectures": [
"BertForPreTraining"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"output_attentions": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"return_dict": false,
"torch_dtype": "float32",
"transformers_version": "4.13.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 32000
}
```
### Performance on subtasks
- The results of our fine-tuning experiments are an average of 3 runs for each task.
```
cd KoMiniLM-Finetune
bash scripts/run_all_kominilm.sh
```
|| #Param | Average | NSMC<br>(Acc) | Naver NER<br>(F1) | PAWS<br>(Acc) | KorNLI<br>(Acc) | KorSTS<br>(Spearman) | Question Pair<br>(Acc) | KorQuaD<br>(Dev)<br>(EM/F1) |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|KoBERT(KLUE)| 110M | 86.84 | 90.20±0.07 | 87.11±0.05 | 81.36±0.21 | 81.06±0.33 | 82.47±0.14 | 95.03±0.44 | 84.43±0.18 / <br>93.05±0.04 |
|KcBERT| 108M | 78.94 | 89.60±0.10 | 84.34±0.13 | 67.02±0.42| 74.17±0.52 | 76.57±0.51 | 93.97±0.27 | 60.87±0.27 / <br>85.01±0.14 |
|KoBERT(SKT)| 92M | 79.73 | 89.28±0.42 | 87.54±0.04 | 80.93±0.91 | 78.18±0.45 | 75.98±2.81 | 94.37±0.31 | 51.94±0.60 / <br>79.69±0.66 |
|DistilKoBERT| 28M | 74.73 | 88.39±0.08 | 84.22±0.01 | 61.74±0.45 | 70.22±0.14 | 72.11±0.27 | 92.65±0.16 | 52.52±0.48 / <br>76.00±0.71 |
| | | | | | | | | |
|**KoMiniLM<sup>†</sup>**| **68M** | 85.90 | 89.84±0.02 | 85.98±0.09 | 80.78±0.30 | 79.28±0.17 | 81.00±0.07 | 94.89±0.37 | 83.27±0.08 / <br>92.08±0.06 |
|**KoMiniLM<sup>†</sup>**| **23M** | 84.79 | 89.67±0.03 | 84.79±0.09 | 78.67±0.45 | 78.10±0.07 | 78.90±0.11 | 94.81±0.12 | 82.11±0.42 / <br>91.21±0.29 |
- [NSMC](https://github.com/e9t/nsmc) (Naver Sentiment Movie Corpus)
- [Naver NER](https://github.com/naver/nlp-challenge) (NER task on Naver NLP Challenge 2018)
- [PAWS](https://github.com/google-research-datasets/paws) (Korean Paraphrase Adversaries from Word Scrambling)
- [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) (Korean Natural Language Understanding)
- [Question Pair](https://github.com/songys/Question_pair) (Paired Question)
- [KorQuAD](https://korquad.github.io/) (The Korean Question Answering Dataset)
<img src = "https://user-images.githubusercontent.com/55969260/174229747-279122dc-9d27-4da9-a6e7-f9f1fe1651f7.png"> <br>
### User Contributed Examples
-
## Reference
- [KLUE BERT](https://github.com/KLUE-benchmark/KLUE)
- [KcBERT](https://github.com/Beomi/KcBERT)
- [SKT KoBERT](https://github.com/SKTBrain/KoBERT)
- [DistilKoBERT](https://github.com/monologg/DistilKoBERT)
- [lassl](https://github.com/lassl/lassl) | 4,250 |
armandnlp/distilbert-base-uncased-finetuned-emotion | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9273822408882375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9275
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
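A small usage sketch (not part of the auto-generated card), assuming the checkpoint is served through the standard `transformers` pipeline; the example sentence is made up, and the label names (sadness, joy, love, anger, fear, surprise) come from the model's label list.
```python
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="armandnlp/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled about the trip, but a little nervous about flying."))
# -> a list of {'label': ..., 'score': ...} dicts drawn from the six emotion labels
```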
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8643 | 1.0 | 250 | 0.3324 | 0.9065 | 0.9025 |
| 0.2589 | 2.0 | 500 | 0.2237 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1,807 |
deepesh0x/autotrain-GlueFineTunedModel-1013533786 | [
"negative",
"positive"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-GlueFineTunedModel
co2_eq_emissions: 57.79463560530838
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1013533786
- CO2 Emissions (in grams): 57.79463560530838
## Validation Metrics
- Loss: 0.18257243931293488
- Accuracy: 0.9261538461538461
- Precision: 0.9244319632371713
- Recall: 0.9282235324275827
- AUC: 0.9800523984255356
- F1: 0.92632386799693
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-GlueFineTunedModel-1013533786
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-GlueFineTunedModel-1013533786", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-GlueFineTunedModel-1013533786", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,219 |
Farshid/finetuning-finetuned-financial-phrasebank-75 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
- f1
model-index:
- name: finetuning-finetuned-financial-phrasebank-75
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_75agree
metrics:
- name: Accuracy
type: accuracy
value: 0.9217391304347826
- name: F1
type: f1
value: 0.9222750587883506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-finetuned-financial-phrasebank-75
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5900
- Accuracy: 0.9217
- F1: 0.9223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6566 | 1.0 | 22 | 0.4940 | 0.8435 | 0.8448 |
| 0.4136 | 2.0 | 44 | 0.2829 | 0.9188 | 0.9195 |
| 0.2286 | 3.0 | 66 | 0.2404 | 0.9159 | 0.9159 |
| 0.1389 | 4.0 | 88 | 0.2527 | 0.9275 | 0.9280 |
| 0.0921 | 5.0 | 110 | 0.2555 | 0.9333 | 0.9336 |
| 0.0634 | 6.0 | 132 | 0.2987 | 0.9159 | 0.9168 |
| 0.0441 | 7.0 | 154 | 0.3032 | 0.9188 | 0.9198 |
| 0.0354 | 8.0 | 176 | 0.3167 | 0.9246 | 0.9253 |
| 0.0184 | 9.0 | 198 | 0.3296 | 0.9217 | 0.9220 |
| 0.0182 | 10.0 | 220 | 0.3284 | 0.9304 | 0.9305 |
| 0.0119 | 11.0 | 242 | 0.3513 | 0.9217 | 0.9223 |
| 0.008 | 12.0 | 264 | 0.4218 | 0.9217 | 0.9225 |
| 0.0068 | 13.0 | 286 | 0.4115 | 0.9188 | 0.9197 |
| 0.0077 | 14.0 | 308 | 0.4209 | 0.9159 | 0.9169 |
| 0.0037 | 15.0 | 330 | 0.4120 | 0.9217 | 0.9220 |
| 0.0026 | 16.0 | 352 | 0.4134 | 0.9159 | 0.9161 |
| 0.0026 | 17.0 | 374 | 0.4230 | 0.9217 | 0.9221 |
| 0.0025 | 18.0 | 396 | 0.4335 | 0.9275 | 0.9278 |
| 0.0016 | 19.0 | 418 | 0.4538 | 0.9188 | 0.9187 |
| 0.0022 | 20.0 | 440 | 0.4518 | 0.9188 | 0.9192 |
| 0.0011 | 21.0 | 462 | 0.4653 | 0.9159 | 0.9167 |
| 0.0011 | 22.0 | 484 | 0.4713 | 0.9159 | 0.9167 |
| 0.0008 | 23.0 | 506 | 0.4585 | 0.9217 | 0.9223 |
| 0.0007 | 24.0 | 528 | 0.4525 | 0.9275 | 0.9278 |
| 0.0007 | 25.0 | 550 | 0.4582 | 0.9304 | 0.9307 |
| 0.0009 | 26.0 | 572 | 0.4689 | 0.9275 | 0.9279 |
| 0.0006 | 27.0 | 594 | 0.4783 | 0.9275 | 0.9279 |
| 0.0019 | 28.0 | 616 | 0.4784 | 0.9275 | 0.9279 |
| 0.0013 | 29.0 | 638 | 0.4884 | 0.9246 | 0.9253 |
| 0.0006 | 30.0 | 660 | 0.5065 | 0.9217 | 0.9217 |
| 0.0036 | 31.0 | 682 | 0.4800 | 0.9246 | 0.9253 |
| 0.0007 | 32.0 | 704 | 0.4643 | 0.9304 | 0.9305 |
| 0.0009 | 33.0 | 726 | 0.4633 | 0.9275 | 0.9279 |
| 0.0004 | 34.0 | 748 | 0.4787 | 0.9275 | 0.9279 |
| 0.0004 | 35.0 | 770 | 0.4912 | 0.9217 | 0.9224 |
| 0.0004 | 36.0 | 792 | 0.4693 | 0.9246 | 0.9247 |
| 0.0004 | 37.0 | 814 | 0.4962 | 0.9246 | 0.9251 |
| 0.0004 | 38.0 | 836 | 0.5034 | 0.9246 | 0.9251 |
| 0.0003 | 39.0 | 858 | 0.5096 | 0.9188 | 0.9197 |
| 0.0003 | 40.0 | 880 | 0.5065 | 0.9246 | 0.9252 |
| 0.0003 | 41.0 | 902 | 0.4894 | 0.9246 | 0.9244 |
| 0.0005 | 42.0 | 924 | 0.5419 | 0.9159 | 0.9168 |
| 0.0016 | 43.0 | 946 | 0.5230 | 0.9217 | 0.9225 |
| 0.0003 | 44.0 | 968 | 0.5272 | 0.9159 | 0.9169 |
| 0.0003 | 45.0 | 990 | 0.4794 | 0.9275 | 0.9275 |
| 0.0003 | 46.0 | 1012 | 0.5131 | 0.9217 | 0.9223 |
| 0.0005 | 47.0 | 1034 | 0.5256 | 0.9246 | 0.9242 |
| 0.0004 | 48.0 | 1056 | 0.5571 | 0.9159 | 0.9168 |
| 0.0003 | 49.0 | 1078 | 0.5412 | 0.9246 | 0.9252 |
| 0.0005 | 50.0 | 1100 | 0.5465 | 0.9217 | 0.9225 |
| 0.0013 | 51.0 | 1122 | 0.5324 | 0.9333 | 0.9337 |
| 0.0002 | 52.0 | 1144 | 0.5284 | 0.9333 | 0.9337 |
| 0.0002 | 53.0 | 1166 | 0.5301 | 0.9304 | 0.9308 |
| 0.0002 | 54.0 | 1188 | 0.5317 | 0.9275 | 0.9280 |
| 0.0002 | 55.0 | 1210 | 0.5476 | 0.9246 | 0.9252 |
| 0.001 | 56.0 | 1232 | 0.5277 | 0.9333 | 0.9335 |
| 0.0002 | 57.0 | 1254 | 0.5387 | 0.9246 | 0.9251 |
| 0.0005 | 58.0 | 1276 | 0.5505 | 0.9246 | 0.9253 |
| 0.0006 | 59.0 | 1298 | 0.5400 | 0.9304 | 0.9306 |
| 0.0022 | 60.0 | 1320 | 0.5788 | 0.9159 | 0.9169 |
| 0.0002 | 61.0 | 1342 | 0.5504 | 0.9275 | 0.9277 |
| 0.0003 | 62.0 | 1364 | 0.5686 | 0.9275 | 0.9275 |
| 0.0002 | 63.0 | 1386 | 0.5653 | 0.9159 | 0.9165 |
| 0.0002 | 64.0 | 1408 | 0.5700 | 0.9188 | 0.9194 |
| 0.0002 | 65.0 | 1430 | 0.5705 | 0.9188 | 0.9194 |
| 0.0002 | 66.0 | 1452 | 0.5687 | 0.9159 | 0.9165 |
| 0.0003 | 67.0 | 1474 | 0.5971 | 0.9159 | 0.9168 |
| 0.0002 | 68.0 | 1496 | 0.5979 | 0.9188 | 0.9196 |
| 0.0009 | 69.0 | 1518 | 0.5905 | 0.9217 | 0.9223 |
| 0.0002 | 70.0 | 1540 | 0.5845 | 0.9188 | 0.9192 |
| 0.0003 | 71.0 | 1562 | 0.5942 | 0.9217 | 0.9223 |
| 0.0002 | 72.0 | 1584 | 0.5948 | 0.9217 | 0.9223 |
| 0.0002 | 73.0 | 1606 | 0.5943 | 0.9217 | 0.9223 |
| 0.0006 | 74.0 | 1628 | 0.5931 | 0.9217 | 0.9223 |
| 0.0002 | 75.0 | 1650 | 0.5927 | 0.9217 | 0.9223 |
| 0.0002 | 76.0 | 1672 | 0.5940 | 0.9217 | 0.9223 |
| 0.0002 | 77.0 | 1694 | 0.5937 | 0.9217 | 0.9223 |
| 0.0002 | 78.0 | 1716 | 0.5911 | 0.9217 | 0.9223 |
| 0.0006 | 79.0 | 1738 | 0.5900 | 0.9217 | 0.9223 |
| 0.0002 | 80.0 | 1760 | 0.5900 | 0.9217 | 0.9223 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.1+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
| 7,425 |
projecte-aina/roberta-base-ca-v2-cased-te | [
"ENTAILMENT",
"NEUTRAL",
"CONTRADICTION"
] | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "textual entailment"
- "teca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/teca"
metrics:
- "accuracy"
model-index:
- name: roberta-base-ca-v2-cased-te
results:
- task:
type: text-classification # Required. Example: automatic-speech-recognition
dataset:
type: projecte-aina/teca
name: TECA
metrics:
- name: Accuracy
type: accuracy
value: 0.8342
widget:
- text: "M'agrades. T'estimo."
- text: "M'agrada el sol i la calor. A la Garrotxa plou molt."
- text: "El llibre va caure per la finestra. El llibre va sortir volant."
- text: "El meu aniversari és el 23 de maig. Faré anys a finals de maig."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Textual Entailment.
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The **roberta-base-ca-v2-cased-te** is a Textual Entailment (TE) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations
**roberta-base-ca-v2-cased-te** model can be used to recognize Textual Entailment (TE). The model is limited by its training dataset and may not generalize well for all use cases.
## How to Use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-te")
example = "M'agrada el sol i la calor. </s></s> A la Garrotxa plou molt."
te_results = nlp(example)
pprint(te_results)
```
## Training
### Training data
We used the TE dataset in Catalan called [TECA](https://huggingface.co/datasets/projecte-aina/teca) for training and evaluation.
### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
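A hedged sketch of what that configuration could look like with the `transformers` `Trainer` API follows (only an approximation of the stated settings; the authoritative fine-tuning scripts are in the project's GitHub repository).
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-te",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=5e-5,
    num_train_epochs=5,
    evaluation_strategy="epoch",   # evaluate on the TECA dev set each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,   # keep the checkpoint with the best dev metric
    metric_for_best_model="accuracy",
)
```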
## Evaluation
### Variable and Metrics
This model was finetuned maximizing accuracy.
## Evaluation results
We evaluated the roberta-base-ca-v2-cased-te on the TECA test set against standard multilingual and monolingual baselines:
| Model | TECA (Accuracy) |
| ------------|:----|
| roberta-base-ca-v2-cased-te | **83.14** |
| BERTa | 79.26 |
| mBERT | 74.63 |
| XLM-RoBERTa | 33.30 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
| 4,710 |
Manishkalra/discourse_classification | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: discourse_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# discourse_classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7639
- Accuracy: 0.6649
- F1: 0.6649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7565 | 1.0 | 1839 | 0.7589 | 0.6635 | 0.6635 |
| 0.6693 | 2.0 | 3678 | 0.7639 | 0.6649 | 0.6649 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,468 |
sssingh/distilbert-base-uncased-emotion-finetuned | [
"sadness",
"joy",
"love",
"anger",
"fear",
"surprise"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: distilbert-base-uncased-emotion-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9350215566385567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-emotion-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1518
- Acc: 0.935
- F1: 0.9350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|
| 0.1734 | 1.0 | 250 | 0.1624 | 0.928 | 0.9279 |
| 0.1187 | 2.0 | 500 | 0.1518 | 0.935 | 0.9350 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 1,715 |
srini98/distilbert_finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert_finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7799
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2788 | 0.7371 |
| 3.7785 | 2.0 | 636 | 1.8739 | 0.8358 |
| 3.7785 | 3.0 | 954 | 1.1618 | 0.8923 |
| 1.6926 | 4.0 | 1272 | 0.8647 | 0.9090 |
| 0.9104 | 5.0 | 1590 | 0.7799 | 0.9161 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.11.6
| 1,857 |
erickdp/sentiment-analysisi-distillbert-es | [
"0",
"1",
"2"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- erickdp/autotrain-data-sentiment-analysis-distillbert-es
co2_eq_emissions: 4.070674106910222
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1164342966
- CO2 Emissions (in grams): 4.070674106910222
## Validation Metrics
- Loss: 0.5068035125732422
- Accuracy: 0.8406482106684673
- Macro F1: 0.8355269443836222
- Micro F1: 0.8406482106684673
- Weighted F1: 0.8423675674232264
- Macro Precision: 0.8364960686615248
- Micro Precision: 0.8406482106684673
- Weighted Precision: 0.8455742631643787
- Macro Recall: 0.8361938729437037
- Micro Recall: 0.8406482106684673
- Weighted Recall: 0.8406482106684673
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/erickdp/autotrain-sentiment-analysis-distillbert-es-1164342966
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("erickdp/autotrain-sentiment-analysis-distillbert-es-1164342966", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("erickdp/autotrain-sentiment-analysis-distillbert-es-1164342966", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,487 |
18811449050/bert_cn_finetuning | [
"LABEL_0",
"LABEL_1"
] | Entry not found | 15 |
BearThreat/distilbert-base-uncased-finetuned-cola | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.533214904586951
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5774
- Matthews Correlation: 0.5332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2347 | 1.0 | 535 | 0.5774 | 0.5332 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| 1,702 |
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | [
"Algeria",
"Bahrain",
"Djibouti",
"Egypt",
"Iraq",
"Jordan",
"Kuwait",
"Lebanon",
"Libya",
"Mauritania",
"Morocco",
"Oman",
"Palestine",
"Qatar",
"Saudi_Arabia",
"Somalia",
"Sudan",
"Syria",
"Tunisia",
"United_Arab_Emirates",
"Yemen"
] | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-MSA DID NADI Model
## Model description
**CAMeLBERT-MSA DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.9242768287658691},
{'label': 'Saudi_Arabia', 'score': 0.3400847613811493}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 2,952 |
EhsanAghazadeh/xlnet-large-cased-CoLA_A | null | Entry not found | 15 |
EhsanAghazadeh/xlnet-large-cased-CoLA_B | null | Entry not found | 15 |
JIWON/bert-base-finetuned-nli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
model-index:
- name: bert-base-finetuned-nli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: nli
metrics:
- name: Accuracy
type: accuracy
value: 0.085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-nli
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6210
- Accuracy: 0.085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.6210 | 0.085 |
| No log | 2.0 | 392 | 0.5421 | 0.0643 |
| 0.5048 | 3.0 | 588 | 0.5523 | 0.062 |
| 0.5048 | 4.0 | 784 | 0.5769 | 0.0533 |
| 0.5048 | 5.0 | 980 | 0.5959 | 0.052 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 1,787 |
JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector | [
"AGAINST",
"FAVOR",
"NEUTRAL"
] | ---
license: apache-2.0
language: es
tags:
- "spanish"
datasets:
- catalonia_independence
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: catalonia_independence
type: catalonia_independence
args: spanish
metrics:
- name: Accuracy
type: accuracy
value: 0.7880893300248138
widget:
- text: "Junqueras, sobre la decisión judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegará de Europa"
- text: "Desconvocada la manifestación del domingo en Barcelona en apoyo a Puigdemont"
---
# roberta-base-bne-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9415
- Accuracy: 0.7881
<details>
## Model description
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 378 | 0.5534 | 0.7558 |
| 0.6089 | 2.0 | 756 | 0.5315 | 0.7643 |
| 0.2678 | 3.0 | 1134 | 0.7336 | 0.7816 |
| 0.0605 | 4.0 | 1512 | 0.8809 | 0.7866 |
| 0.0605 | 5.0 | 1890 | 0.9415 | 0.7881 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Junqueras, sobre la decisión judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegará de Europa"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9936726093292236}]
independence_analysis(
"El desafío independentista queda adormecido, y eso que el Gobierno ha sido muy claro en que su propuesta para Cataluña es una agenda de reencuentro, centrada en inversiones e infraestructuras")
# Output:
[{'label': 'AGAINST', 'score': 0.7508948445320129}]
independence_analysis(
"Desconvocada la manifestación del domingo en Barcelona en apoyo a Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.9966907501220703}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(SPANISH).ipynb#scrollTo=uNMOXJz38W6U)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;)
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | 3,685 |
TehranNLP/xlnet-base-cased-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
TehranNLP-org/bert-base-uncased-cls-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
Tommy930/distilbert-base-uncased-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.919
- name: F1
type: f1
value: 0.9193144250513821
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Accuracy: 0.919
- F1: 0.9193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7858 | 1.0 | 250 | 0.3034 | 0.9085 | 0.9073 |
| 0.243 | 2.0 | 500 | 0.2220 | 0.919 | 0.9193 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 1,805 |
ali2066/distilbert-base-uncased-finetuned-sst-2-english-finetuned-argmining | [
"NEGATIVE",
"POSITIVE"
] | Entry not found | 15 |
anirudh21/albert-base-v2-finetuned-rte | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-base-v2-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.7581227436823105
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-rte
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2496
- Accuracy: 0.7581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 249 | 0.5914 | 0.6751 |
| No log | 2.0 | 498 | 0.5843 | 0.7184 |
| 0.5873 | 3.0 | 747 | 0.6925 | 0.7220 |
| 0.5873 | 4.0 | 996 | 1.1613 | 0.7545 |
| 0.2149 | 5.0 | 1245 | 1.2496 | 0.7581 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| 1,829 |
anirudh21/albert-large-v2-finetuned-mnli | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
baykenney/bert-large-gpt2detector-topp96 | [
"Human",
"Machine"
] | Entry not found | 15 |